Abstract
Background: Item analysis examines student responses to individual test items (questions) in order to evaluate the quality of those items and of the test as a whole.

Materials and Method: The study was conducted in the Department of Forensic Medicine and Toxicology as part of the internal assessment, using 55 MCQs from the Forensic Medicine subject. The questions were administered to 90 students (of a batch of 115) in the fifth semester (second-year MBBS). Answer sheets were evaluated and the scores arranged in decreasing order. The list was then divided into the top 30% of students (high achievers) and the bottom 30% (low achievers). The difficulty index (Dif I), discrimination index (DI), and distractor effectiveness (DE) were calculated using standard formulae, and the MCQs and distractors were classified against standard reference ranges.

Results: The difficulty index of 32 items (58.18%) was in the acceptable range (Dif I = 30–70%), 14 items (25.45%) were too easy (>70%), and 9 items (16.36%) were difficult (<30%). The discrimination index of 10 items (18.18%) was excellent (>0.35), 19 items (34.55%) were good (0.25–0.35), and 25 items (45.45%) were poor (<0.2). The 55 items carried 165 distractors in total; of these, 32 (19.39%) were non-functional distractors (NFD) and 133 (80.61%) were functional distractors (FD).

Conclusions: Post-validation of MCQs must be performed to retain only MCQs of acceptable validity, which would improve their quality as assessment tools and thereby make assessment more meaningful.
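The abstract names the standard item-analysis formulae without stating them. A minimal sketch of the commonly used versions follows; the function names, the example counts, and the 5% cut-off for a non-functional distractor are assumptions for illustration, not values taken from this study.

```python
# Hedged sketch of standard item-analysis formulae (assumed forms):
# Dif I = (H + L) / (2n) * 100 and DI = (H - L) / n, where H and L are
# the numbers of correct responses in the high- and low-achiever groups
# and n is the size of each 30% group. A distractor chosen by fewer
# than 5% of students is conventionally classed as non-functional (NFD).

def difficulty_index(high_correct: int, low_correct: int, group_size: int) -> float:
    """Percentage of the two extreme groups answering correctly (30-70% acceptable)."""
    return (high_correct + low_correct) / (2 * group_size) * 100

def discrimination_index(high_correct: int, low_correct: int, group_size: int) -> float:
    """>0.35 excellent, 0.25-0.35 good, <0.2 poor (ranges as used in the abstract)."""
    return (high_correct - low_correct) / group_size

def classify_distractors(option_counts: dict, correct_option: str, total_students: int) -> dict:
    """Label each wrong option FD (functional) or NFD (<5% of responses)."""
    return {
        opt: ("FD" if n / total_students >= 0.05 else "NFD")
        for opt, n in option_counts.items()
        if opt != correct_option
    }

# Hypothetical item: 27 students per 30% group (30% of 90), options A-D.
print(difficulty_index(22, 9, 27))      # ~57.4 -> acceptable range
print(discrimination_index(22, 9, 27))  # ~0.48 -> excellent
print(classify_distractors({"A": 50, "B": 3, "C": 30, "D": 7}, "C", 90))
```

The example numbers are invented solely to show how an item would be classified against the reference ranges quoted in the Results section.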