PiSA - Plagiarism in Statistics Assessment

Project leads: 
Bidgood, Penelope; Hunt, Neville; Payne, Bradley; Simonite, Vanessa

There is much concern in UK HE that instances of plagiarism are on the increase. Forms of assessment such as the traditional examination, and online testing using large question banks and randomly generated tests, are the least vulnerable to academic misconduct, but they fail to address some important learning outcomes in statistics, not least the ability of students to analyse a data set appropriately and report the results effectively without severe time constraints. However, the lack of supervision associated with coursework assignments means that giving students the same data to analyse poses a serious risk of plagiarism. Group work, which is used to give students opportunities to develop team skills, brings its own plagiarism problems. Various strategies have been developed to minimise the possibility of plagiarism in statistics, for example giving each student a unique random sample from a larger data set.
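
The last strategy mentioned above, giving each student a unique random sample from a larger data set, can be automated in a few lines of code. The following Python sketch is illustrative only and is not drawn from the project: the file master_data.csv, the student ID list and the sample size of 50 are hypothetical placeholders. Each student's sample is seeded from their ID, so the lecturer can regenerate it later when marking or investigating suspected collusion.

    # Illustrative sketch: one individualised, reproducible sample per student.
    import hashlib
    import pandas as pd

    master = pd.read_csv("master_data.csv")   # full data set held by the lecturer (hypothetical file)
    student_ids = ["K1234567", "K1234568"]    # e.g. taken from the module register

    for sid in student_ids:
        # Derive a stable integer seed from the student ID.
        seed = int(hashlib.sha256(sid.encode()).hexdigest(), 16) % (2**32)
        # Draw that student's personal sample and write it to its own file.
        sample = master.sample(n=50, random_state=seed)
        sample.to_csv(f"sample_{sid}.csv", index=False)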


This miniproject aimed to survey HE lecturers in statistics to find out which methods of assessment, and which strategies to deter plagiarism, are currently being employed, so that elements of good practice could be identified. Key findings were:


1. Plagiarism, specifically collusion, is a significant problem within large statistics service modules;


2. The majority of statistics lecturers are aware of plagiarism issues and are taking action to combat them;


3. It is quite common for statistics lecturers to fail to apply institutional procedures in ‘minor’ cases of plagiarism. In contrast, some lecturers make every effort to demonstrate how the regulations and penalties might apply to statistics assessments, giving examples of cases detected in previous years;


4. Plagiarism often goes undetected on large service modules due to a multiplicity of assessors. It is most likely to be detected when one person assesses all the students;


5. There is much innovative work taking place in the area of individualised assessment, but also some duplication of effort. There is scope for a further project to synthesise best practice and make it available to the wider community;


6. Assessments that require students to collect their own data are widely employed;


7. Many statistics lecturers have moved away from take-home assignments to in-class supervised computer-based assessments, often based on a previously circulated data set, case study or research paper;


8. In some universities, in-class tests are exposed to a high risk of cheating by unsuitable accommodation, inadequate invigilation, failure to check student identities, and naïve organisation;


9. The Turnitin electronic plagiarism detection software is increasingly being used and is giving lecturers greater confidence in the integrity of their coursework assessments;


10. In final-year extended project assessments it is good practice to include an element that assesses the student’s working method and, ideally, an oral examination to check that the project is genuinely the student’s own work; and,


11. Online cheating companies openly offer an easily accessible, if expensive, way for students to obtain professional individual help with assignments.