Title: Item selection and application in Higher Education
Authors: O'Hare, David
Issue Date: 2002
Publisher: © Loughborough University
Citation: O'HARE, D ... et al, 2002. Item Selection and Application in Higher Education. IN: Proceedings of the 6th CAA Conference, Loughborough: Loughborough University
Abstract: Over the past ten years the use of computer assisted assessment in Higher Education (HE) has grown. The majority of this expansion has been based around the application of multiple-choice items (Stephens and Mascia, 1997). However, concern has been expressed about the use of multiple-choice items to test higher order skills.
The Tripartite Interactive Assessment Development (TRIAD) system (Mackenzie, 1999) has been developed by the Centre for Interactive Assessment Development (CIAD) at the University of Derby. It is a delivery platform that allows the production of more complex items. We argue that the use of complex item formats such as those available in TRIADs could enhance validity and produce assessments with features not present in pencil and paper tests (cf. Huff and Sireci, 2001).
CIAD was keen to evaluate tests produced in TRIADs and so sought the aid of the National Foundation for Educational Research (NFER). As part of an initial investigation a test was compiled for a year one Systems Analysis module. This test was produced by the tutor (in consultation with CIAD) and contained a number of item types; both multiple-choice items and complex TRIADs items.
Data from the test were analysed using Classical Test Theory and Item Response Theory models. The results of the analysis led to a number of interesting observations. The multiple-choice items showed lower reliability than the more complex item types. This was surprising since these items had been mainly obtained from published sources, with few written by the test constructor. This finding may flag two important points for the unwary test developer: the quality of published items may be insufficient to allow their inclusion in high-quality tests, and the production of reliable multiple-choice items is a difficult skill to learn. In addition, it may not be appropriate to attempt to stretch multiple-choice items by using options such as 'all' or 'none of the above'. The evidence from this test seems to suggest that multiple-choice items may not be appropriate to test outcomes at undergraduate level.
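The reliability comparison described in the abstract rests on Classical Test Theory. As an illustration only (the response data below are invented, not drawn from the study), the standard CTT internal-consistency statistic, Cronbach's alpha, can be computed from a matrix of item scores like this:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of examinee score rows (one column per item)."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: one tuple per item
    item_var_sum = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical dichotomous (0/1) responses: five examinees, four items
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # → 0.8
```

For dichotomous items, as here, alpha reduces to the KR-20 coefficient; a low value is the kind of signal that would flag the multiple-choice section as less reliable than the complex TRIADs items.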
Description: This is a conference paper.
Appears in Collections: CAA Conference
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.