National external moderation summary reports

430 Quantitative Business Methods - 2016 semester 2


This report provides a national perspective on the moderation of 430 Quantitative Business Methods.

Assessment materials from seven tertiary education organisations (TEOs) were moderated for this prescription.

Of the seven submissions:

  • three met all key assessment requirements
  • a further three met key assessment requirements overall, but require some modification before next course delivery
  • the remaining one did not meet key assessment requirements; modification/redevelopment and post-assessment resubmission are required.

Areas where modification or redevelopment was required were:

  • Assessment schedules needing to include both sample answers and a more detailed breakdown of marks for the questions as asked to ensure consistent marking (four submissions).
  • Learning outcomes not meeting prescription requirements largely through insufficient realistic application questions, lack of interpretation questions, misunderstanding of the more financial learning outcomes and learning outcomes not fully assessed (four submissions).

Four of the seven submissions used version 2 of the prescription, while the remainder used version 3. As the versions are very similar except for the assessment notes, there was no confusion between them.

Assessment tools varied, using one to eight assignments followed by a 30 per cent to 50 per cent exam/supervised assessment. No submissions were totally assignment based. Two submissions were entirely supervised assessment, although students had access to computer rooms for one or two assessments. Two submissions used a series of smaller assessments to ensure students stayed on track with the programme: one had five quizzes, while the other had eight small assignments.

Presentation of assessment materials

Most TEOs took advantage of the ability to present material online. Most material was well presented and received in full; all were well labelled.

The computer requirement of 430 means that Excel files are required – both those provided to learners as part of the assessment and those produced by learners as part of the learner samples. Most were provided to NZQA.

Assessment grids

The assessment grids were generally well done, indicating very little variance from the required weightings. Development of an assessment grid helps ensure that all learning outcomes and key elements of the prescription are evidenced in assessment activities and that learning outcomes are correctly weighted. Two TEOs had not provided details by key element, and a further provider had not correctly updated their current grid.

An example assessment grid (PDF, 78KB) gives guidance on the recommended basic formatting.

Assessment activities

The key considerations for moderators were whether tasks:

  • assessed all prescription learning outcomes, with appropriate weightings
  • were at the appropriate level.

Learning outcomes

Learning outcomes indicate assessment outcomes and specify what learners need to know and be able to do. Key elements indicate assessment coverage and specify how the related learning outcome must be evidenced. Key element assessment evidence must be provided in the context of the learning outcome, not in isolation.

Two of the seven submissions did not meet moderation requirements in this area, and another four needed modification, mainly because they did not assess all learning outcomes fully. Most assessment programmes were in keeping with the prescription aim of a practical real-life slant and generating useful information from the data being analysed. The widespread use of computer software, particularly for learning outcomes 1 to 3, was encouraging. Many aspects of the other learning outcomes also lend themselves to computerised assessment. The guidance provided by the prescription assessment notes is helpful for TEOs’ understanding of the learning outcomes.

The following are the most common issues:

  • In learning outcome 1, key element d), interpretation was often minimal. Interpretation requires more than identifying the outliers. A comparison of two or more data sets requires interpretation, not merely graphing the data sets on the same graph. Interpretation is about understanding what these graphical pictures of the data are telling you about the data and/or the scenario.

    There was also confusion in two instances over the use of ‘column charts’ instead of ‘histograms’. While the standard version of Excel can only use a column chart to approximate a histogram, students should know they are trying to produce a histogram, and the chart should present the data as continuous (gap width set to 0). More use of pivot tables and categorical data can avoid the issues experienced with numerical data and histograms.
  • In learning outcome 2, two aspects of key element d) were not fully assessed. Judging the ‘reliability of the predicted value’ requires an interpretation of how well a straight-line regression fitted the data, and of whether the prediction interpolates or extrapolates the independent data. It is therefore best to assess using more than one prediction or by comparing more than one regression equation. The other aspect not fully assessed was ‘interpretation of residuals’. In version 3 of the prescription the following note has been added: “For learning outcome two key element d) calculation and interpretation of a residual plot for all observations is required.”
  • In learning outcome 3, key element a) plots and features were largely well assessed by identifying these features in graphs already plotted. This is a better approach than asking purely theoretical questions. One TEO fitted the wrong model to data that was obviously multiplicative.
  • Learning outcome 4 is about the correct use of random sampling. The students therefore need to demonstrate that they can generate random numbers, preferably in a practical example with a sample as the outcome. Two TEOs had excellent assignment-based questions that generated a sample from the population by at least two methods, calculated the mean and standard deviation of that sample and used these later to generate confidence intervals for the population mean in learning outcome 7.
  • Learning outcome 5 was probably the most poorly assessed outcome, largely because it is not a topic covered in some statistics texts. Three TEOs showed little knowledge of the practical use of indices as required by key elements b) and c). To ‘use index numbers to compare time series’ requires indices from two different time series to be compared over time. Both should have the same base year at the start of the time series, which may require changing the base of at least one series.

    A price index such as the CPI is used to ‘remove the effect of price changes from a financial price series’ so all the values are expressed in one year’s dollars. Questions can be about the price of one item rising or falling relative to other prices over time, or about using Excel to remove the effect of inflation from a whole financial series (e.g. retail sales). The student can then interpret how the real value has changed over time.
  • Assessment Note 1 specifies that ‘assessment materials must reflect … good industry/business practice’. Hypothetical scenarios or data may be used, but they need to realistically replicate an industry/business context. This was largely well done, but was an issue in learning outcome 6, where quite unrealistic or complicated scenarios were used, particularly to assess simple interest. The emphasis in key element a) should be on compound interest, which is the more realistic scenario, with simple interest merely touched on so students understand the difference. Questions do not need to be unduly complex, and Assessment Note 6 recommends teaching key element b) using the Excel functions rather than the formulae. Adding together future values at different points in time is never good industry practice.
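The residual and prediction-reliability requirements of learning outcome 2 can be sketched outside Excel. The following Python illustration uses invented data and a hand-calculated least-squares fit; it is a sketch of the technique, not prescribed material:

```python
# Hypothetical data: advertising spend ($000) vs weekly sales ($000)
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]

# Ordinary least-squares fit of a straight line y = slope*x + intercept
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

# Residual = observed - fitted, calculated for every observation,
# as the version 3 note for key element d) requires
residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]

def predict(x_new):
    """Predict y, noting whether the prediction interpolates or
    extrapolates the independent data - extrapolation is less reliable."""
    kind = "interpolation" if min(x) <= x_new <= max(x) else "extrapolation"
    return slope * x_new + intercept, kind
```

Plotting the residuals against x (or against the fitted values) then shows whether a straight line is an adequate model for the data.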
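The link the better assignments made between learning outcome 4 (random sampling) and learning outcome 7 (confidence intervals) can be sketched as follows; the population, seed and sample size here are invented for illustration:

```python
import random
from statistics import mean, stdev

random.seed(1)  # reproducible, so marking can check the same numbers

# Hypothetical population: 500 daily sales figures
population = [random.gauss(200, 25) for _ in range(500)]

# Method 1: simple random sample without replacement
sample = random.sample(population, 40)

# Method 2: systematic sample - every k-th item from a random start
k = len(population) // 40
start = random.randrange(k)
systematic = population[start::k][:40]

# The sample statistics feed a 95% confidence interval for the
# population mean (1.96 is the usual normal z-value for large samples)
m, s = mean(sample), stdev(sample)
half_width = 1.96 * s / 40 ** 0.5
ci = (m - half_width, m + half_width)
```

Generating the sample, then reusing its mean and standard deviation for the interval, mirrors the two-stage assignment questions praised above.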
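For learning outcome 5, rebasing an index and deflating a financial series are each a one-line calculation once the idea is clear. A small Python sketch with invented figures (an Excel formula would mirror each line):

```python
# Hypothetical retail sales ($m) and a CPI-style price index
years = [2012, 2013, 2014, 2015]
sales = [100.0, 106.0, 113.0, 118.0]
cpi   = [950, 969, 988, 1002]  # index on an earlier base

# Rebase so the first year of the series = 1000, putting two
# series on a comparable base for comparison over time
rebased = [round(v / cpi[0] * 1000, 1) for v in cpi]

# Deflate: express every year's sales in 2012 dollars by dividing
# out the price change since the base year
real_sales = [round(s / (c / cpi[0]), 1) for s, c in zip(sales, cpi)]
```

The deflated series lets the student interpret how the real value has changed over time, separate from inflation.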
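For learning outcome 6, the contrast between compound and simple interest needs only a minimal calculation. The figures below are invented; the compound line mirrors what Excel's FV function computes:

```python
# Hypothetical deposit: $5,000 at 6% p.a., compounded monthly, 4 years
principal, annual_rate, periods_per_year, years = 5000.0, 0.06, 12, 4

n = periods_per_year * years
rate = annual_rate / periods_per_year

# Compound interest - equivalent to Excel's =FV(rate, n, 0, -principal)
fv_compound = principal * (1 + rate) ** n

# Simple interest, for contrast only - rarely realistic in business
fv_simple = principal * (1 + annual_rate * years)
```

A realistic question asks the student to compare the two and explain why the compound figure is the one seen in practice.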

A 10 per cent aggregate variance is allowed in assessment weightings: the total percentage variation across all learning outcomes should not exceed 10 per cent.

No TEOs exceeded this aggregate variance.
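Read as the sum of the absolute differences between prescribed and actual weightings (one reading of the rule), the aggregate-variance check can be computed mechanically; the weightings below are hypothetical:

```python
# Hypothetical prescribed vs actual weightings (percentages) by learning outcome
prescribed = {"LO1": 20, "LO2": 15, "LO3": 15, "LO4": 10,
              "LO5": 10, "LO6": 15, "LO7": 15}
actual     = {"LO1": 22, "LO2": 14, "LO3": 16, "LO4": 9,
              "LO5": 10, "LO6": 14, "LO7": 15}

# Aggregate variance across all learning outcomes, compared to the
# 10 percentage point limit
aggregate_variance = sum(abs(actual[lo] - prescribed[lo]) for lo in prescribed)
within_limit = aggregate_variance <= 10
```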


Level

Most assessments were set at the appropriate level. There was one example of over-detailed headings on questions in a closed book assessment; these led students to the correct formula to use, reducing the level of the assessment. Another TEO was assessing statistical tests other than those required, increasing the level of the assessment.

Assessment conditions and instructions

The key consideration for moderators was whether assessment conditions and instructions were clear, appropriate and fair to learners.

Overall, assessment instructions and conditions were relatively clear. However:

  • The main issue was around the instructions given to learners on how to complete the assessment, and what to send in, particularly with assessment involving computer work. It should be clearly stated whether questions should be completed in the Excel file, on an answer sheet, or in a Word file using copy and paste.
  • Assignment due dates and, preferably, a grid showing the marks allocated by question, the total marks of the assessment and the results should be included in the assessment. Total marks are particularly important in closed book assessment and when the total is not 100.
  • There was some inconsistency in the marks allocated to questions within the same assessment: allocations sometimes reflected the number needed to balance the grid rather than the time taken to complete the question. Mark values should provide learners with an idea of the length or depth of answer required. For example, a five-mark descriptive question may require several paragraphs, while a one-mark question may be answered in one sentence. A suggested guide is that each relevant and properly explained point, significant step in a calculation, or aspect of a graph could be worth one mark.
  • In one submission, the size of the assessment and the time allocated to it were considered to be unfair to learners. This aspect is related to the point above.
  • Two submissions had issues around a lack of clarity in the questions themselves. This is particularly an issue in supervised assessment.
  • Two submissions had contradictory information given to students. The information about the assessment programme in the course outline did not match the outcomes that were assessed in each assessment. 
  • One submission gave little indication of how marks would be allocated between parts of the assessment. Learners need to know where to put the most effort.

It is common practice to provide learners with formula sheets and statistical tables. Students are not required to memorise the complex formulae required for learning outcomes 6 and 7. If these are provided to students, they should also be supplied for moderation.

While closed book assessment minimises the likelihood of cheating, this course lends itself to at least some assignment-type assessment so students have time to explore topics such as graphing and sampling in detail.

Assessment schedules

The key considerations for moderators were whether schedules:

  • provided statements that specify evidence expectations that meet prescription requirements (e.g. sample answers and/or a range of appropriate answers, and/or quality criteria for answers)
  • provided a sufficiently detailed breakdown of marks to ensure consistent marking.

The main weakness in the assessment schedules was not showing a sufficiently detailed breakdown of marks, particularly for questions worth more than one mark and where the answer had several parts. Little allowance was made for alternative answers, particularly when poorly worded questions had been misinterpreted. For example, in learning outcome 1 students need to choose an appropriate graph, and there may be more than one. For calculations, marks can be allocated for each step or deducted for each error. Both methods are acceptable, but they need to be outlined in the assessment schedule to ensure consistent marking, and made clear to the learner.

With assessment that requires a mix of computer-based and written answers, writing an assessment schedule that provides both model answers and a breakdown of marks can be quite involved and time-consuming. The recommended approach is to write a specific marking schedule for each assessment in Word, allocating marks, and to use copy and paste or refer to specific sheets in Excel files to identify model answers. Moderators would still need to receive the Excel files to view the calculations. Five of the seven TEOs could improve in this area.

Assessor decisions

The key considerations for moderators were whether:

  • marking rewarded a similar quality of work with similar marks
  • marking rewarded learner work in a manner consistent with prescription requirements.

Marking for the most part was accurate and consistent. A case was noted where a different method achieving the same correct answer was not awarded full marks because not all the intermediary steps had been covered. Consideration also needs to be given to the level of rounding error that is acceptable and to the marks that should be given for a correct method.

It was noted that little useful feedback to students was given on the learner samples. Marked interim assessment is an opportunity for lecturers to give some pointers and for students to learn from their mistakes. It may be that this assessment was largely computer-based and the feedback was given elsewhere.


Assessment material that met the national standard or required minimal modification reflected good current assessment practice. Version 3 of this prescription has provided some additional assessment notes and guidance as to what is expected. All providers made good use of computer software and included real-life or realistic data to analyse in their assessment programmes.

While most providers have a good balance between closed book, time-constrained, supervised assessment and assignment-based assessment in their assessment programmes, some providers should consider introducing assignment-based assessment.

The main issues for submissions requiring modification or redevelopment were:

  • full coverage of the learning outcomes considering both the learning outcomes themselves and the assessment notes
  • using marking schedules that provide both model answers and a breakdown of marks within questions. A mark breakdown should highlight whether questions have too many or too few marks allocated to them, and whether the overall assessment load is reasonable and achievable.

