National external moderation summary reports

550 Business Computing - 2015 semester 2

Introduction

This report provides a national perspective on the moderation of 550 Business Computing.

Assessment materials from 31 Tertiary Education Organisations (TEOs) were moderated for this prescription. 

Of the 31 submissions:

  • 15 met all key assessment requirements
  • a further seven met key assessment requirements overall, but required some modification before the next course delivery
  • the remaining nine did not meet key assessment requirements; modification or redevelopment and post-assessment resubmission are required.

Key areas where modification or redevelopment was required were:

  • assessment activities not meeting all prescription requirements (15 submissions)
  • assessment conditions and instructions not clear or sufficiently detailed (three submissions)
  • assessment schedules not providing sufficient guidance to ensure accurate and consistent marking (three submissions)
  • assessor decisions not accurately or consistently reflecting the quality of the learner work (four submissions).

Presentation of assessment materials

Twenty-two TEOs submitted their material electronically, with the remainder sending in bound materials supported by files on electronic media.

The quality of electronic submissions varied. The best included appropriately labelled folders and files with clear names that signalled the contents to moderators. All relevant files were uploaded, including the starter files provided to learners for practical assessments, and the actual files learners created rather than PDF versions of these files.

Some electronic submissions fell well short of this good practice, omitting the starter files provided to learners and/or providing only PDF versions of learner work, so moderators could not see how individual learners had created their documents. For example, it could not be determined whether a table of contents in a word processed document had been generated automatically using styles, or whether a formula had been used to obtain correct output in a spreadsheet.

The three electronic submissions that were most difficult to moderate packed all moderation materials into two long PDF documents: the first contained all the background material (statistical summary, assessment grid, course outline, assessments and schedules), and the second contained all the learner samples. These long documents were very difficult to navigate, for example when moving between an assessment and its marking schedule.

Several of the PDF documents submitted were scanned copies, and poor scan quality made reading difficult in some cases. Where PDF files were included, a PDF created from the original document was generally clearer than a scan.

The hard copy submissions were all clearly organised and well bound.

Ten submissions were incomplete, lacking one or more of the documents required to gain a full picture of assessment activities and learner achievement. In all cases the most significant issue was the omission of the starter files provided to learners and/or the electronic versions of the files learners created.

Sixteen TEOs recommended a textbook for the theory learning outcomes, with the majority of these making it recommended rather than compulsory reading. O’Leary and O’Leary’s Computing Essentials was recommended by 12 TEOs, with three recommending Parsons and Oja’s New Perspectives on Computer Concepts. Twelve TEOs included a prescribed text or manual for the practical work, with six of these making it compulsory.

The Microsoft Office suite was used for assessing the practical application by all except one TEO, which used OpenOffice. One TEO used the Pearson MyITLab platform.

Assessment grids

The best assessment grids were accurate and clearly showed the relationship between individual assessment tasks and the prescription, with a clear indication of weighting and any variance from the prescription calculated. 

Most of the grids submitted complied with this; however, a small number had too little or too much detail, were badly organised (for example, spread over multiple pages with large gaps between sections) or were inaccurate.

Assessment activities

The key considerations for moderators were whether tasks:

  • assessed all prescription learning outcomes, with appropriate weightings
  • were at the appropriate level.

Learning outcomes

Learning outcomes indicate assessment outcomes and specify what learners need to know and be able to do. Key elements indicate assessment coverage and specify how the related learning outcome must be evidenced. Key element assessment evidence must be provided in the context of the learning outcome, not in isolation. 

Many TEOs provided assessment tasks demonstrating a sound understanding of the intent of the prescription, and offered learners interesting challenges. Often in these cases, a realistic scenario was provided, and learners were required to apply their knowledge (learning outcomes 1-3) or skills (learning outcome 4) to solve problems.

Some TEOs set assessment tasks that integrated learning outcomes or key elements: a case study scenario for the theory components was common, and a small number of submissions integrated part of the practical requirements with the theory. For example, a theory assignment required a report, integrating many of the key elements of learning outcome 4a), word processing. A small number also integrated key elements of learning outcome 4 in a single assessment blending word processing, spreadsheet and database skills.

There were also a number of excellent assessments where learning outcomes were assessed discretely.

Fourteen submissions did not meet all prescription requirements. Some of the issues appeared to arise because TEOs were reusing assessment tasks (adapted or not) developed for version 1 of the prescription, and these did not align closely enough with version 2.

Common issues included:

  • In learning outcomes 1 and 2, assessments failed to acknowledge the requirement to ‘evaluate…to meet business requirements’. This means that assessment tasks require a specific business context in which the key elements can be explained or discussed, and then evaluated. Identifying, defining, listing, unpacking acronyms or providing brief descriptions are not sufficient.
  • Learning outcome 2: Several TEOs supplied assessments that skipped very lightly over ‘network planning and implementation’ from key element a). For example, a question asking learners to define or explain a LAN, MAN and WAN is not adequate on its own.
  • Learning outcome 3: A number of TEOs had assessment tasks that more closely reflected version 1 of the prescription, with an emphasis on ergonomics (still relevant, but as part of key element a) rather than a key element in itself) and little or no acknowledgement of disaster recovery planning. It is important when assessing disaster recovery planning to ensure tasks relate to information technology, not the broader aspects of an organisation’s disaster recovery such as plant.
  • Learning outcome 4: Software tools and features. The best assessment tasks relating to the practical requirement of this prescription provided learners with a realistic business context and identified the output required without being overly prescriptive. This allowed learners opportunities for problem solving and some flexibility in how they presented their work. Less effective were assessments where learners were given detailed descriptions of exactly how their output should look, including screenshots of ‘model answers’, or step-by-step instructions and precise formatting details.
  • There is flexibility in how learning outcome 4, the practical component, is assessed. Most TEOs divided the 70% fairly evenly between word processing, spreadsheet and database, with around 5% allocated to key element d), other features. A small minority allocated more to one area, for example word processing, and considerably less to another, for example database.
  • Learning outcome 4a) word processing: Most TEOs managed adequate coverage; however, a number did not assess ‘review and collaboration tools’ as they should have. Assessments where this was well done included learners editing, annotating and saving a document from another source, usually supplied by the tutor. In a very small number of submissions, this was achieved by peer review of another class member’s work.
  • Learning outcome 4b) spreadsheet: this key element clearly defines the spreadsheet functions and features that must be assessed; however, some TEOs still did not assess it fully. Commonly omitted were range or cell names, linked worksheets and two of the listed data analysis tools. Where the date requirement was assessed, it was often just inserting a date, rather than using that date in a calculation, for example to generate another date such as a payment due date (see the first sketch after this list).
  • Learning outcome 4c) database: most TEOs covered the key element adequately, with several assessments providing relevant ‘real world’ situations for learners to demonstrate their skills. Good practice assessments included at least two queries, each requiring different types of criteria. Some included calculations and parameter queries (see the second sketch after this list). The most common weakness with this key element was report tasks where learners were not asked to include summary options as the prescription requires.
  • Learning outcome 4d) other features: this key element was the one most frequently assessed inadequately. In two cases it was left out altogether, while in many others one or more of the bulleted points was omitted or not assessed correctly. The best assessment tasks blended this key element with other practical assessment, requiring learners to create and save a document as a template, and to insert an object into a document by both linking and embedding. For the latter task, simply inserting an image or similar was not sufficient. File conversion tasks most commonly involved converting a word processed document or spreadsheet worksheet to a PDF file.
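
To illustrate the date calculation described under 4b) above, the following minimal sketch shows the intended logic. The invoice date and the 30-day payment term are hypothetical, and the logic is written out in Python purely to make the calculation explicit - in a spreadsheet it would typically be a single formula adding a number of days to a date cell, such as =B2+30.

    from datetime import date, timedelta

    # A date the learner has entered (hypothetical invoice date).
    invoice_date = date(2015, 7, 1)

    # The due date is calculated from that date rather than typed in by hand.
    payment_due = invoice_date + timedelta(days=30)

    print(payment_due)  # 2015-07-31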
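
Similarly, for the query types described under 4c) above, the sketch below illustrates a parameter query (the criterion is supplied when the query runs) and a summary (an aggregate grouped over the data). The table and column names are hypothetical, and SQLite via Python stands in for a desktop database such as Microsoft Access purely to keep the example self-contained.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Hypothetical sample data.
    cur.execute("CREATE TABLE orders (customer TEXT, region TEXT, amount REAL)")
    cur.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [("Aroha Ltd", "North", 120.00),
         ("Beta Supplies", "South", 80.00),
         ("Aroha Ltd", "North", 45.50)],
    )

    # Parameter query: the region criterion is supplied at run time,
    # not hard-coded into the query.
    region = input("Region to report on: ")
    cur.execute("SELECT customer, amount FROM orders WHERE region = ?", (region,))
    print(cur.fetchall())

    # Summary: counts and totals grouped by region, comparable to the
    # summary options required in database report tasks.
    cur.execute("SELECT region, COUNT(*), SUM(amount) FROM orders GROUP BY region")
    print(cur.fetchall())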

A 10 per cent aggregate variance is allowed in assessment weightings. That is, the percentage variation in total across all learning outcomes should not be more than 10 per cent. 
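
As a hypothetical illustration of the calculation: if a theory learning outcome prescribed at 10 per cent were assessed at 13 per cent (a variance of 3), and learning outcome 4 were assessed at 64 per cent against the prescribed 70 per cent (a variance of 6), the aggregate variance would be 3 + 6 = 9 per cent - within the allowance, but leaving little room for variation elsewhere.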

The majority of TEOs stayed within the prescription weightings. Two showed significant variances, while several others had lesser variances. A major factor contributing to variance in six submissions was the allocation of a substantial number of marks to non-prescription practical tasks in learning outcome 4, such as basic formatting in word processed documents and spreadsheet worksheets. While there is an expectation that learners will create documents that look professional, it is anticipated that the vast majority of marks will be allocated to the features and functions identified in the prescription.

Level

A number of TEOs assessed below the level expected in learning outcomes 1, 2 and 3. Such tasks required little more than a list or simple description relating to the topics, rather than the discussion, evaluation and recommendations required. Learning outcome 4 assessments generally better reflected the level required although, as indicated above, in a small number of cases learners were provided with more guidance than is appropriate at Level 5.

Assessment conditions and instructions

The key consideration for moderators was whether assessment conditions and instructions were clear, appropriate and fair to learners.

The majority of TEOs assessed this prescription with a blend of controlled and uncontrolled assessment tasks.

Most submissions met the minimum standard for clear and appropriate assessment conditions and instructions. Some submissions were very well written and would readily engage learners in the tasks required. Three submissions did not meet key assessment requirements in this section, and some other submissions included minor errors. Issues included:

  • No information provided about assessment conditions, for example whether the assessment was to be completed in closed or open book/file conditions, or the timeframes permitted. It is important for moderators to know the conditions in which learners will be assessed. The tasks and questions that may be appropriate in a closed book controlled assessment could be inappropriate in an open book or uncontrolled environment.
  • Some practical assessments did not specify where learners were to save their test files.
  • Unclear or ambiguous instructions where, for example, the weighting or overview in a course outline differed from that on the assessment itself.
  • Typographical and proofreading errors that could lead to ambiguity or confusion for learners. For example, in two submissions the names of people or organisations changed partway through a case study. In other submissions, poor English created a barrier to understanding the assessment task.

Assessment note 1 states that ‘good industry/business practices’ are required of learners. The same good practice should be demonstrated in both the writing and the formatting of all course documents given to learners.

Several TEOs used their standard institutional cover sheet for all assessments, even though some conditions outlined on the cover sheet were not applicable to the actual assessment. Examples included stipulating the nature of a handwritten submission or stating that the use of calculators was not permitted. Such conditions are not applicable to a practical computer-based assessment submitted electronically.

Assessment schedules

The key considerations for moderators were whether schedules:

  • provided statements that specify evidence expectations that meet prescription requirements (e.g. sample answers and/or a range of appropriate answers, and/or quality criteria for answers)
  • provided a sufficiently detailed breakdown of marks to ensure consistent marking.

Most TEOs provided clear, appropriate schedules with suggested solutions where required. Three submissions did not meet this requirement: one provided no marking schedules, and others provided schedules that did not align with the assessment tasks.

Model answers for practical tasks were generally supplied; however, some submissions did not include these, or included them only in PDF format, so it was not evident how the documents had been created, including any formulas used.

A number of submissions provided schedules with one or more of the following issues:

  • no guidance on how larger allocations of marks (five or more) should be awarded for full or partial learner responses
  • long ‘model answers’ for theory tasks, but no indication of how much was required for a full answer
  • insufficient space for learners to write answers in theory test or exam booklets.

Assessor decisions

The key considerations for moderators were whether:

  • marking rewarded a similar quality of work with similar marks
  • marking rewarded learner work in a manner consistent with prescription requirements.

Marking was considered consistent in all but four submissions. Of these four, one TEO did not submit learner samples, and two submitted them only in PDF format, so it was not possible to check how learners had created their documents. Other issues included inconsistent marking, where learners may have received the same mark for work that differed in quality.

Several submissions indicated generous marking - awarding high marks for work of mediocre quality. 

In general, a strong marking schedule with clear guidance for the distribution of marks and an appropriate model answer or guide for content led to consistent and reliable marking.

The best examples of marked learner work provided clear, constructive feedback to learners, with the allocated marks clearly shown and some indication of how the marking decision was made. In most cases this was a brief comment.

Conclusions

This was the first moderation round for version 2 of the 550 Business Computing prescription. It was pleasing to see that many of the issues that arose in assessing version 1 had disappeared, with half the submissions being of consistently good quality in every respect. Many of this group submitted robust assessments that addressed prescription requirements in a thorough, intelligent, and sometimes innovative or elegant manner. A further 20% required some modification, while 30% were required to resubmit.

For TEOs with serious or even minor issues, a careful assessment design process, beginning with a detailed grid matching assessment tasks to learning outcomes, would help to ensure compliance with the prescription. This grid must reflect the correct version of the prescription - echoes of the previous version were evident in too many submissions.

Including a balance of uncontrolled assessment tasks (assignments) and controlled tests gives learners the opportunity to research and think about the work they submit as assignments, while also proving they can complete tasks in a defined timeframe and produce authentic results.

Robust marking schedules, clearly outlining the allocation of all marks and the quality of work or answer required, would help ensure fair and reliable learner grades.

This was also the first round of this prescription with electronic submission of materials. Most were complete and easy to navigate - it is hoped that future submissions will all be complete, contain original documents, and be well labelled.

 
 