
Loughborough University Institutional Repository

Please use this identifier to cite or link to this item: https://dspace.lboro.ac.uk/2134/1881

Title: Empirical Prediction of the Measurement Scale and Base Level ‘Guess Factor’ for Advanced Computer-based Assessments
Authors: Mackenzie, Don
O'Hare, David
Issue Date: 2002
Publisher: © Loughborough University
Citation: MACKENZIE, D. and O'HARE, D., 2002. Empirical Prediction of the Measurement Scale and Base Level ‘Guess Factor’ for Advanced Computer-based Assessments. IN: Proceedings of the 6th CAA Conference, Loughborough: Loughborough University
Abstract: In our experience, insufficient consideration is often given to the way in which the questions in computer-based assessments are scored. The advent of more complex question styles, such as those delivered by the TRIADSystem (Mackenzie, 1999), has made predicting the distribution of possible scores and the base level guess factor much more difficult than it is for tests containing simple multiple-choice questions. For example, the TRIADS drag-and-drop template allows each object to be allocated a different score (positive or negative) for each position, as well as allowing dummy objects and dummy positions to be defined. The number of score possibilities for a random answer increases dramatically as the number of objects and positions increases, and although a 0 to 100 scoring scale is available, scores are likely to be concentrated about ‘nodes’ on this measurement scale. The positions of these ‘nodes’ will vary with the structure of the question, and negative or penalty scoring may serve to ‘smear’ the mark distribution between ‘nodes’. Many tutors may find it difficult to predict the guess factor and may not appreciate the effect that the structure of the question can have on the range and distribution of final scores achieved. In order to demonstrate to test designers the effects of question structure and score allocation on the ‘guess factor’ and mark distributions, we are developing an empirical Marking Simulator. This program allows test designers/tutors to select a question type, enter the proposed structure and scores for each question, and then view the mark distribution and measurement scale that would result from a set of entirely random answers. Use of the Marking Simulator should result in a more realistic setting of pass levels and should generally enhance the quality of computer-based assessments.
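The kind of simulation the abstract describes can be sketched as a small Monte Carlo program: score a large number of entirely random answers to a drag-and-drop question and tabulate the resulting mark distribution. The score matrix, question size, and function names below are illustrative assumptions for a minimal sketch, not the TRIADS implementation:

```python
import random
from collections import Counter

# Hypothetical score matrix for a drag-and-drop question:
# rows = draggable objects, columns = drop positions.
# Entry [i][j] is the mark awarded when object i is dropped in
# position j; negative entries model penalty scoring, and an
# all-zero column would act as a "dummy" position.
SCORES = [
    [ 5, -1,  0, -1],
    [-1,  5, -1,  0],
    [ 0, -1,  5, -1],
]

# Upper bound on the raw mark (assumes each object's best
# position is distinct, as it is in this illustrative matrix).
MAX_SCORE = sum(max(row) for row in SCORES)

def random_answer_score(scores):
    """Score one entirely random answer: each object is dropped
    into a distinct, randomly chosen position."""
    n_objects = len(scores)
    n_positions = len(scores[0])
    positions = random.sample(range(n_positions), n_objects)
    return sum(scores[i][p] for i, p in enumerate(positions))

def simulate(scores, trials=100_000):
    """Empirical distribution of raw marks over random answers,
    as a mapping {mark: relative frequency}."""
    counts = Counter(random_answer_score(scores) for _ in range(trials))
    return {mark: count / trials for mark, count in sorted(counts.items())}

if __name__ == "__main__":
    dist = simulate(SCORES)
    # Mean random-guess mark as a fraction of the maximum: a crude
    # estimate of the question's base level "guess factor".
    guess_factor = sum(m * p for m, p in dist.items()) / MAX_SCORE
    for mark, prob in dist.items():
        print(f"mark {mark:3d}: {prob:.3f}")
    print(f"guess factor estimate: {guess_factor:.3f}")
```

Even at this scale, the output shows marks clustering on a few ‘nodes’ rather than spreading evenly over the 0 to 100 scale, which is the effect the Marking Simulator is intended to make visible to test designers.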
Description: This is a conference paper.
URI: https://dspace.lboro.ac.uk/2134/1881
Appears in Collections:CAA Conference

Files associated with this item:

File: Mackenzie_d1.pdf (988.23 kB, Adobe PDF)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.