National external moderation summary reports

430 Quantitative Business Methods - 2013 s2

Introduction

This report provides a national perspective on the moderation of 430 Version 1 Quantitative Business Methods, and 430 Version 2 Statistics and Financial Mathematics for Business.

Assessment materials from fourteen Tertiary Education Organisations (TEOs) were moderated for this prescription.

Of the 14 submissions:

  • six met all key assessment requirements
  • a further three met key assessment requirements overall, but require some modification before next course delivery
  • the remaining five did not meet key assessment requirements; modification/redevelopment and post-assessment resubmission are required.

Areas where modification or redevelopment was required were:

  • Variance from prescription weightings exceeded the 10% allowed
  • Learning outcomes not adequately assessed
  • Assessment note 2 specifying use of computer software for learning outcomes 1 and 3 (version 1) not met
  • Sample answers including graphs not provided, and insufficient detail of breakdown of marks in assessment schedules
  • Lenient marking of descriptive answers.

Presentation of assessment materials

Most submissions were well presented and easy to follow. It is suggested, however, that learner samples should be copied in colour so that markers’ comments and marks can be more easily read.

The submission checklist specifies that electronic files provided to learners and presented by learners for assessment should be submitted. As the prescription requires the use of computer software, all submissions should have included electronic files. However, most did not.

Assessment grids

Most assessment grids were accurate and clearly showed the breakdown and weightings of the assessments to the learning outcomes and key elements as required. To ensure this, it is suggested that the guidelines and template provided on the NZQA website be used as a basis for developing an appropriate assessment grid.

Assessment activities

The key considerations for moderators were whether tasks:

  • assessed all prescription learning outcomes, with appropriate weightings
  • were at the appropriate level.

Learning outcomes

Learning outcomes indicate assessment outcomes and specify what learners need to know and be able to do. Key elements indicate assessment coverage and specify how the related learning outcome must be evidenced. Key element assessment evidence must be provided in the context of the learning outcome, not in isolation.

Most sets of assessments covered the learning outcomes with appropriate weightings. The following were the most common omissions.

  • Not assessing the requirement to select an appropriate graph type in learning outcome 1 (version 2). Some assessments specified the graph type rather than requiring learners to select an appropriate type. Assessment note 4 (version 2) specifies that learners must select an appropriate graph type at least once during assessment.
  • Learning outcome 2 requires learners to interpret correlation and regression results. To meet this, key element d) ‘interpretation of y-ŷ’ (version 1) and ‘interpretation of residuals y-ŷ’ (version 2) requires learners to interpret a residual plot for all observations, not just calculate the residual for one observation.
  • The requirement in learning outcome 3 to produce forecasts with a seasonal component was not assessed by some TEOs.
  • Learning outcome 4 requires learners to ‘use simple and one other random sampling technique to select samples’ (version 1) or ‘demonstrate the correct use of random sampling techniques to select samples’ (version 2). The words ‘use’ and ‘select samples’ require more than short descriptions of sampling techniques. Good assessments provided learners with population data and asked them to select actual samples from this.
  • Learning outcome 5 requires learners to ‘use index numbers to compare time series’ (version 2). This requires indices for two different time series to be compared, which in turn requires a change of base.
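As an illustration only (hypothetical figures, not drawn from any TEO's assessment), the residual interpretation required by learning outcome 2 involves computing y − ŷ for every observation of a fitted regression line, so that the full pattern of residuals can be examined:

```python
# Illustrative sketch (hypothetical data): computing the residual y - y_hat
# for every observation of a simple linear regression, so the full residual
# pattern can be interpreted rather than a single value calculated.

x = [1, 2, 3, 4, 5]             # e.g. advertising spend ($000)
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # e.g. sales ($000)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope (b) and intercept (a)
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x

# Residual y - y_hat for every observation, not just one
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
```

Plotting these residuals against x (or ŷ) and commenting on any pattern is the kind of evidence the key element calls for.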
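Similarly, the comparison required by learning outcome 5 can be sketched as follows (hypothetical index numbers; the rebasing step is the point):

```python
# Illustrative sketch (hypothetical index numbers): comparing two time series
# by rebasing both to a common base year, as learning outcome 5 requires.
series_a = {2010: 100, 2011: 104, 2012: 109, 2013: 115}  # base 2010 = 100
series_b = {2010: 140, 2011: 147, 2012: 151, 2013: 160}  # base 2005 = 100

def rebase(series, base_year):
    """Re-express an index so that the chosen base year equals 100."""
    base_value = series[base_year]
    return {year: round(value / base_value * 100, 1)
            for year, value in series.items()}

# With both series on base 2010 = 100, their movements can be compared directly.
a_rebased = rebase(series_a, 2010)
b_rebased = rebase(series_b, 2010)
```

Once both series share a base, learners can compare the relative movements directly, which is more than describing each index in isolation.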

Assessment note 1 (version 2) specifies that ‘assessment materials must reflect … good industry/business practice’. TEOs therefore need to ensure that their assessments do not use overly simplistic or unrealistic scenarios unlikely to be encountered in an industry/business context. Hypothetical scenarios or data may be used, but they need to realistically replicate an industry/business context.

A 10% aggregate variance is allowed in assessment weightings. That is, the percentage variation in total across all learning outcomes should not be more than 10%. Only two TEOs exceeded the allowable variance.
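On one reading of this rule (the total absolute deviation from prescription weightings, summed across learning outcomes, must not exceed 10 percentage points), the check can be sketched with hypothetical weightings:

```python
# Illustrative check of the 10% aggregate variance rule (hypothetical figures).
# Prescribed vs actual assessment weightings (%) per learning outcome.
prescribed = {"LO1": 20, "LO2": 25, "LO3": 20, "LO4": 15, "LO5": 20}
assessed   = {"LO1": 22, "LO2": 22, "LO3": 21, "LO4": 15, "LO5": 20}

# Aggregate variance: total absolute deviation across all learning outcomes.
aggregate_variance = sum(abs(assessed[lo] - prescribed[lo]) for lo in prescribed)
within_allowance = aggregate_variance <= 10
```

Here the individual deviations of 2, 3 and 1 percentage points sum to 6, which is within the allowance.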

Level

It was pleasing that most assessments were set at the appropriate level, and that there was a high level of consistency between TEOs.

It is suggested that the part of learning outcome 5 requiring learners to ‘describe the Consumer Price Index (CPI)’ is best assessed at the appropriate level by an assignment that requires learners to search Statistics New Zealand sources and answer in more depth than can be expected of short exam questions assessing low-level recall.

Assessment conditions and instructions

The key consideration for moderators was whether assessment conditions and instructions were clear, appropriate and fair to learners.

Most assessments were clear, appropriate and fair. The main issue that arose was what the moderators considered to be over-assessment. Several TEOs assessed the same learning outcomes more than once across several assessments. If the emphasis or focus is different this can be appropriate. For example, an exam question may focus on interpreting the analyses while an assignment question may focus on completing the statistical analyses or searching Statistics New Zealand sources. In this case one assessment complements the other and together they may provide good coverage of the learning outcomes.

However, repeating the same or similar questions across several assessments, or even within the same assessment, is not necessary and can be unfair to learners, who may have an excessive summative assessment load as a result. In these cases some of the assessments would be better used as formative assessments.

Some TEOs required learners to generate or find their own data for analysis. This is appropriate and can help reduce learners copying answers from one another. However, care needs to be taken that the data collection is not too time consuming.

There was some inconsistency concerning the number of marks allocated to questions. There was, as expected, wide variation between TEOs, but there was also inconsistency between questions within the same TEO’s set of assessments. This may confuse learners. Mark values should give learners an idea of the length or depth of answer required. For example, a five-mark descriptive question may require several paragraphs, whereas a one-mark question may be answered in one sentence. A suggested guide is that each relevant and properly explained point, or each significant step in a calculation, should be worth one mark.

Little use was made of group assessment. This was appropriate, as assessments requiring calculations and data analyses are most effectively completed individually. If group assessments are to be used, they are perhaps best suited to interpretation of data and analyses, where different viewpoints can usefully be discussed amongst group members. However, it is important to specify in the marking schedule how individual marks are to be derived from the group’s work.

Most TEOs met the requirement for computer software to be used in take-home assignments. A few made use of computer software in supervised tests and exams. Both are appropriate. Excel was the most commonly used computer software. While Excel has its limitations, it has the advantage that all learners are likely to have access to it when they enter the workforce. Learners intending to progress to degree study may find it helpful to have used SPSS. To meet the requirement in assessment note 1 (version 2) to reflect ‘good industry/business practice’, TEOs need to use software such as Excel, SPSS or Minitab, which are widely used in industry.

Assessment schedules

The key considerations for moderators were whether schedules:

  • provided statements that specify evidence expectations that meet prescription requirements (e.g. sample answers and/or a range of appropriate answers, and/or quality criteria for answers)
  • provided a sufficiently detailed breakdown of marks to ensure consistent marking.

Some assessment schedules did not provide sample answers for all questions. If learners are asked to interpret some analyses, then possible points expected in the answer need to be shown. Similarly, where a graph is required, the expected graph needs to be shown in the assessment schedule.

The main weakness in the assessment schedules was not showing a sufficiently detailed breakdown of marks, particularly for questions worth more than one mark and where the answer had several parts. Examples included not showing the breakdown of marks for the various parts of graphs, e.g. title, labelling axes, accuracy, and not showing how marks were allocated to points in a discussion. For calculations, marks can be allocated for each step or marks can be deducted for each error. Both are acceptable but need to be outlined in the assessment schedule to ensure consistent marking and be made clear to the learner.

Expected answers to questions asking for interpretation of analyses need to do more than just restate the results of the analysis. For example, stating that the standard deviations of the male and female salaries were $20,500 and $12,100 respectively is not interpretation, whereas explaining that male salaries were more varied explains what the analysis means and is a better answer.
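The salary example above can be made concrete with a small sketch (hypothetical salary samples, chosen only to illustrate the point): the interpretation rests on the fact that a larger standard deviation means greater spread around the mean.

```python
import statistics

# Hypothetical salary samples ($), constructed so that the male salaries
# are more spread out and therefore have the larger standard deviation.
male_salaries = [35000, 48000, 62000, 90000, 120000]
female_salaries = [42000, 50000, 55000, 61000, 68000]

sd_male = statistics.stdev(male_salaries)
sd_female = statistics.stdev(female_salaries)

# Interpretation goes beyond restating the numbers: the larger standard
# deviation means male salaries in this sample vary more around their mean.
more_varied = "male" if sd_male > sd_female else "female"
```

Simply reporting `sd_male` and `sd_female` restates the results; the expected answer in a schedule should state what the comparison means, e.g. that male salaries are more varied.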

Assessor decisions

The key considerations for moderators were whether:

  • marking rewarded a similar quality of work with similar marks
  • marking rewarded learner work in a manner consistent with prescription requirements.

Marking for the most part was accurate and consistent. Partly this reflects the nature of the subject where assessments often have unique answers, against which learners’ work can be compared. However, care needs to be taken when marking descriptive answers. While it is appreciated that assessors are not marking grammar and expression, they need to ensure that learners’ answers make sense and are accurate before awarding marks.

Conclusions

Overall, many TEOs were assessing all learning outcomes at the appropriate level, producing adequate assessment schedules and marking learner work accurately. However, care needs to be taken with the following:

  • All parts of the learning outcomes must be assessed, and the requirements in the assessment notes met. The key elements are useful for specifying what must be evidenced but key element evidence must be consistent with the requirements in the learning outcomes. Asking learners to ‘describe’ or ‘state’ when the relevant learning outcome requires them to ‘use’, ‘select’, or ‘apply’, for example, is not meeting the learning outcome.
  • Mark values for questions need to reflect the length or depth of answer expected and need to be consistent between questions.
  • Assessment schedules need to provide sample answers for all tasks including graphs, and need to provide a detailed breakdown of marks for all questions to ensure consistent marking.
  • Marking of calculations and analyses was generally accurate and consistent, but particular care is needed when marking descriptions, explanations and interpretations.