Assessment Report

New Zealand Scholarship
Statistics 2020

Standard 93201

Part A: Commentary

Successful candidates were prepared to apply statistical thinking when solving problems in both familiar and unfamiliar situations, rather than providing rote-learned responses. They demonstrated a broad understanding of achievement objectives from across the Statistics strand of the curriculum up to and including Level 8, and were very familiar with the statistical enquiry cycle (PPDAC), with how different types of studies may be designed to investigate different types of problems, and with the associated analytical methods.

It was evident that candidates who had carried out investigations using the range of data and methods expected for this subject, and had practised writing reports about these investigations, were able to confidently engage with the concepts assessed in the examination. This included familiarity with critically evaluating the statistically based reports of others and with the ‘rule-of-thumb’ approach to estimating the margin of error associated with survey/poll percentages. In contrast, many unsuccessful candidates incorrectly used two individual confidence intervals for proportions and checked for overlap, rather than constructing a confidence interval for the difference between the two proportions. Some candidates attempted to use normal-based formulae for confidence interval calculations, for the most part incorrectly, including combining ‘rule-of-thumb’ approaches with normal-based approaches.
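
The distinction can be illustrated with a minimal sketch. The figures below are invented for illustration (not taken from the examination); the sketch shows the commonly taught 1/sqrt(n) rule of thumb for a single poll percentage alongside a normal-based confidence interval constructed directly for the difference between two independent proportions.

  # A minimal sketch with invented figures: rule-of-thumb margins of error for
  # two poll percentages, and a normal-based 95% confidence interval for the
  # difference between the two proportions (the interval that should be used to
  # assess a claim about the difference, rather than checking whether the two
  # individual intervals overlap).
  from math import sqrt

  n1, p1 = 1000, 0.46   # hypothetical sample size and proportion, group 1
  n2, p2 = 900, 0.40    # hypothetical sample size and proportion, group 2

  moe1 = 1 / sqrt(n1)   # rule-of-thumb margin of error, group 1
  moe2 = 1 / sqrt(n2)   # rule-of-thumb margin of error, group 2

  diff = p1 - p2
  se_diff = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
  ci_lower, ci_upper = diff - 1.96 * se_diff, diff + 1.96 * se_diff

  print(f"rule-of-thumb MoE: group 1 = {moe1:.3f}, group 2 = {moe2:.3f}")
  print(f"95% CI for the difference in proportions: ({ci_lower:.3f}, {ci_upper:.3f})")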

Successful candidates demonstrated strong calculation skills, particularly those associated with probability distributions. These candidates also demonstrated understanding of important modelling ideas that were informed by contextual considerations. Surprisingly, some candidates used approximations for probability distributions despite having access to technology (and tables) to compute the necessary probabilities from the intended probability distribution (e.g., using a normal approximation to the binomial distribution). The use of approximations for probability distributions is not expected for this standard and created unnecessary additional reasoning (e.g., the use of continuity corrections).
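
For example, with technology the required binomial probability can be computed directly from the intended distribution. The sketch below uses hypothetical parameters (not the examination's) and contrasts the direct calculation with the extra work created by a normal approximation and continuity correction.

  # A minimal sketch with hypothetical parameters: the exact binomial probability
  # is computed directly, so no approximation (and no continuity correction) is needed.
  from scipy.stats import binom, norm

  n, p = 50, 0.3                        # hypothetical binomial parameters
  exact = binom.cdf(20, n, p)           # P(X <= 20) from the intended distribution

  # Normal approximation with continuity correction: additional reasoning that is
  # not expected for this standard
  mu, sigma = n * p, (n * p * (1 - p)) ** 0.5
  approx = norm.cdf((20.5 - mu) / sigma)

  print(f"exact = {exact:.4f}, normal approximation = {approx:.4f}")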

It appeared that many candidates were unfamiliar with exploratory data analysis and with writing descriptive statements. Some candidates incorrectly assumed that a “call” must be made in all situations involving box plots (this approach is generally only appropriate for inferential situations). Some candidates confused making links between features of the distributions and contextual knowledge with speculating about causality (e.g., claiming why features of the distributions existed), thereby going beyond what was asked or failing to identify key features of the distributions.
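
As an illustration of the descriptive register that was expected, the sketch below (using invented data, not the examination data set) summarises two groups and supports statements about centre, spread and shape without making an inferential “call” or speculating about causes.

  # A minimal sketch with invented data: exploratory comparison of two groups
  # using five-number summaries, supporting descriptive statements rather than
  # an inferential call or causal speculation.
  import numpy as np

  group_a = np.array([12, 15, 15, 17, 18, 21, 22, 24, 30])   # hypothetical values
  group_b = np.array([10, 11, 13, 14, 14, 16, 18, 19, 25])   # hypothetical values

  for name, x in [("A", group_a), ("B", group_b)]:
      q1, median, q3 = np.percentile(x, [25, 50, 75])
      print(f"Group {name}: min={x.min()}, Q1={q1}, median={median}, Q3={q3}, max={x.max()}")

  # e.g. "The median for group A (18) is higher than the median for group B (14),
  # and both distributions show right skew, with a longer tail towards larger values."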

There was considerable variation among candidates in their use of simulation-based inference methods such as bootstrapped confidence intervals and the randomisation test. Some candidates seemed prepared to describe the specifics of how the computer carried out the simulations but were unable to confidently interpret the results. Candidates were expected to demonstrate that they could interpret confidence intervals for the difference between two parameters as well as for single parameters, clearly stating the population for which the inference was being made. There was evidence that some candidates were unfamiliar with interpreting a confidence interval for a single parameter. There was also evidence that several candidates held the common misconception that if a sample is a small percentage of a population (e.g., 2261 out of 5 million people), then the sample will not be representative.
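
As an illustration, a bootstrap confidence interval for the difference between two group means can be constructed and then interpreted with explicit reference to the populations and the parameter. The sketch below uses invented data, not the examination's.

  # A minimal sketch with invented data: a bootstrap 95% confidence interval for
  # the difference between two population means, with the style of interpretation
  # that was expected (naming the populations and the parameter).
  import numpy as np

  rng = np.random.default_rng(1)
  sample_a = rng.normal(52, 8, size=60)    # hypothetical sample from population A
  sample_b = rng.normal(48, 8, size=55)    # hypothetical sample from population B

  boot_diffs = []
  for _ in range(10_000):
      resample_a = rng.choice(sample_a, size=sample_a.size, replace=True)
      resample_b = rng.choice(sample_b, size=sample_b.size, replace=True)
      boot_diffs.append(resample_a.mean() - resample_b.mean())

  lower, upper = np.percentile(boot_diffs, [2.5, 97.5])
  print(f"Bootstrap 95% CI for (mean of A - mean of B): ({lower:.2f}, {upper:.2f})")

  # Interpretation: it is a fairly safe bet that the difference between the mean of
  # population A and the mean of population B lies between the two printed limits
  # (stated in context, with units and the populations clearly identified).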


Part B: Report on performance standard

Candidates who were awarded Scholarship with Outstanding Performance commonly:

  • communicated in a clear and succinct manner, with little or no repetition and no contradiction within a response to a question
  • demonstrated an excellent and confident understanding of the statistical enquiry cycle (PPDAC) and were able to apply this to a real-world context
  • demonstrated strong analytical skills in extracting relevant, correct, and concise information from both text and visual summaries of data
  • explained in good, but not superfluous, detail the impact of outliers on the gradient of the line of best fit (an effect illustrated in the sketch after this list)
  • performed very well on the calculation-based questions
  • identified, when appropriate, the opportunity to compare using relative measures rather than just absolute values
  • considered other factors that might affect the models that they were given, such as sampling variability or the influence of outliers on model fit.
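
The following sketch, using invented data (not the examination's), illustrates the effect referred to above: a single high-leverage outlier can noticeably change the gradient of a least-squares line of best fit.

  # A minimal sketch with invented data: the gradient of a least-squares line of
  # best fit, with and without a single high-leverage outlier.
  import numpy as np

  x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
  y = 2.0 * x + 1.0 + np.array([0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.2, -0.1])

  gradient, _ = np.polyfit(x, y, 1)

  # Add one outlying point: a large x-value with an unusually low y-value
  x_out = np.append(x, 12.0)
  y_out = np.append(y, 5.0)
  gradient_out, _ = np.polyfit(x_out, y_out, 1)

  print(f"gradient without the outlier: {gradient:.2f}")
  print(f"gradient with the outlier:    {gradient_out:.2f}")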

Candidates who were awarded Scholarship commonly:

  • demonstrated a good understanding across the statistics curriculum, in particular design of experiments and evaluating statistical reports
  • correctly interpreted confidence intervals, making reference to the population parameter in context
  • extracted relevant information from the range of graphics/charts presented in the examination
  • performed well on the calculation-based questions
  • identified some correct elements of the experimental design process
  • consistently backed up their statements with numerical evidence and succinctly linked to the context
  • responded to all questions by providing focused responses, rather than writing in-depth responses for only some questions.

Other candidates

Candidates who were not awarded Scholarship commonly:

  • provided vague numerical evidence, or little or none at all, in their responses
  • relied on conjecture rather than statistical evidence to evaluate claims, and spent too much time doing this
  • did not use relevant units or scales, or refer to the magnitude of numbers in context, in their responses
  • gave incomplete explanations or interpretations of confidence intervals, e.g., omitting any reference to the population parameter
  • incorrectly used the z-score formula to calculate the standard deviation when given the mean and the central 95% limits of a normal distribution (a correct approach is sketched after this list)
  • incorrectly interpreted the re-randomisation test output from the results of an experiment, e.g., stating that a large tail proportion from the randomisation test was “evidence that chance was acting alone” or that the null hypothesis was true
  • did not attempt all questions or ran out of time, possibly due to over-investment of time in earlier questions in the examination.
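
For reference, the correct working for the standard-deviation calculation mentioned above is short: each central 95% limit of a normal distribution sits approximately 1.96 standard deviations from the mean. The values in the sketch below are hypothetical, not the examination's.

  # A minimal sketch with hypothetical values: recovering the standard deviation
  # from the mean and the central 95% limits of a normal distribution, using
  # sigma = (upper limit - mean) / 1.96.
  mean = 50.0                  # hypothetical mean
  lower, upper = 42.0, 58.0    # hypothetical central 95% limits
  sigma = (upper - mean) / 1.96
  print(f"standard deviation = {sigma:.2f}")   # 4.08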
