This miniproject aimed to substantially upgrade and develop the existing Mathletics suite of online objective tests, covering GCSE to level 2 undergraduate mathematics. Mathletics is not a teaching package or courseware in the usual sense; it is designed exclusively to test and reinforce learning that has already taken place via lectures or other computer-assisted learning courseware, and it stresses formative feedback and practice of techniques. Mathletics had been authored using QM Designer over many years, primarily by final-year project students as part of their dissertations, and formed an extremely useful resource for lecturers and schoolteachers.
As of 2008, following the completion of this project, some 1800 question styles had been authored, mostly covering lower-level mathematics (GCSE to level 1 university).
Authoring objective questions requires very clear specification of the question design, together with attention to interacting technical and coding issues. This makes it a rich source of student projects, and some of the Mathletics system has been written by final-year and postgraduate students. Having established a critical mass of question styles in some areas, attention is increasingly turning to evidence-based evaluation of the tests using the many answer files collected over the last few years (see [1, 2, 3]). In particular, the first two papers examine how students make mistakes using consistent but incorrect methods, termed mal-rules, and how to categorise such mal-rules into an over-arching error taxonomy, partly to make student and whole-class profiling more manageable. Such considerations are, of course, independent of any particular assessment system.
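The mal-rule idea can be illustrated with a small sketch: alongside the correct answer, the question author precomputes the answers that each plausible incorrect method would produce, so a student's numeric response can be matched against them to diagnose the likely error. The question, mal-rule labels, values, and tolerance below are illustrative assumptions, not taken from Mathletics itself.

```python
# Hypothetical mal-rule diagnosis for one question:
# "Differentiate f(x) = x^3 and evaluate f'(2)."
# Correct method: f'(x) = 3x^2, so f'(2) = 12.
CORRECT = 12.0

# Answers produced by consistent-but-wrong methods (mal-rules):
MAL_RULES = {
    "integrated instead of differentiated": 4.0,   # x^4/4 at x = 2
    "dropped the exponent reduction": 24.0,        # 3x^3 at x = 2
}

def diagnose(answer, tol=1e-6):
    """Return 'correct', a mal-rule label, or 'unclassified'."""
    if abs(answer - CORRECT) <= tol:
        return "correct"
    for label, value in MAL_RULES.items():
        if abs(answer - value) <= tol:
            return label
    return "unclassified"
```

Aggregating such labels over a class's answer files is one way the student and whole-class profiling mentioned above could be made manageable: each response maps to a category in the error taxonomy rather than to a free-form wrong number.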
Assessing mathematics poses specific communication problems for any system; Mathletics has largely sidestepped the input problem by focusing on multiple-choice, multiple-response, numerical-input, word-input, and true/false/undecidable question types. Whilst this does not allow the input of mathematical expressions, it is very robust, and care has been taken to avoid frustrating the student with complex input syntax. Numerical-input questions need sympathetic handling of decimals: if an answer is not quoted to the correct accuracy, a warning is issued and an automatic correction is generally applied. For example, evaluating an integral to 2 decimal places could yield 1.20, but students should not necessarily be marked wrong if they input 1.2 or, say, 1.201742 (directly from a calculator display). Similarly, word inputs are case-checked (students may have Caps Lock on, for example).
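The marking behaviour described above can be sketched as follows: round the student's value to the requested number of decimal places before comparing, accept it if it then matches, and warn when the quoted precision differs from what was asked. The function names, parameters, and warning wording are illustrative assumptions, not the actual Mathletics implementation.

```python
def mark_numeric(student_input, model_answer, dp=2):
    """Mark a numerical answer requested to `dp` decimal places.

    Accepts the answer if rounding the student's value to `dp` places
    matches the model answer, and returns a warning when the input was
    not quoted to exactly `dp` places (e.g. 1.2 or 1.201742 for 1.20).
    """
    try:
        value = float(student_input)
    except ValueError:
        return False, "not a number"
    correct = round(value, dp) == round(model_answer, dp)
    # Crude count of decimal places actually supplied: digits after '.'
    text = student_input.strip()
    supplied = len(text.split(".")[1]) if "." in text else 0
    warning = None
    if correct and supplied != dp:
        warning = f"answer accepted, but please quote to {dp} decimal places"
    return correct, warning

def mark_word(student_input, model_answer):
    """Case-insensitive word check (students may have Caps Lock on)."""
    return student_input.strip().lower() == model_answer.strip().lower()
```

With `dp=2` and model answer 1.20, inputs "1.20", "1.2", and "1.201742" are all accepted, the latter two with a precision warning, while "1.21" is rejected.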
[1] Baruah, N. and Greenhow, M. (2006) ‘Using new question styles and answer file evidence to design online objective questions in calculus’, HELM-MSOR Conference, September 2006.
[2] FAST Legacy web site (2008). Available via: http://www.open.ac.uk/fast/
[3] Gill, M. and Greenhow, M. (2006) ‘Computer-aided assessment in mechanics: question design and test evaluation’, Proceedings of the IMA Conference, Mathematical Education of Engineers, April 2006.
