What is it about?

With multiple options to choose from, there is always a chance that examinees will guess correctly on multiple-choice (MC) items, potentially biasing item difficulty estimates. Correct responses obtained by random guessing therefore threaten the validity of claims made from performance on an MC test. Working within the Rasch framework, this study investigates the effects of removing responses with likely guessing on item difficulty estimates, person ability measures, and the test information function (a function describing measurement precision for person ability) on an MC language proficiency test. Results show that removing responses with likely guessing makes difficult items appear more difficult and gives high-performing examinees higher ability measures. This is because guessing, and hence lucky guessing, is more likely to occur on difficult items than on easy ones; removing responses with likely guessing therefore lowers the proportion correct (i.e., raises the estimated difficulty) of difficult items. More importantly, the study shows that measurement precision for high-performing examinees increases after accounting for likely random guessing, while precision for low- and medium-performing examinees remains similar with and without likely guessing. Implications for operational scoring of examinees are discussed.
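
The mechanism can be illustrated with a small simulation. The sketch below is not the authors' analysis; the flagging rule, thresholds, and parameter values are illustrative assumptions. It generates Rasch-type responses with random guessing among four options, flags correct responses that the model says were almost certainly lucky guesses, treats them as missing, and compares crude item difficulty estimates with and without those responses.

import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, n_options = 2000, 40, 4

theta = rng.normal(0.0, 1.0, n_persons)   # person abilities
b = np.linspace(-2.5, 2.5, n_items)       # item difficulties, easy to hard

# Rasch probability that a person actually knows the answer
p_know = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# A non-knowing examinee guesses among n_options, lucky with prob 1/n_options
knows = rng.random((n_persons, n_items)) < p_know
lucky = rng.random((n_persons, n_items)) < 1.0 / n_options
correct = knows | (~knows & lucky)

def difficulty_estimate(resp, keep):
    # Crude stand-in for a Rasch calibration: negative logit of the
    # proportion correct, computed over the responses that are kept.
    p = (resp & keep).sum(axis=0) / keep.sum(axis=0)
    return -np.log(p / (1.0 - p))

# Flagging rule (an assumption, not the study's actual procedure): a correct
# response is "likely guessing" when the model gives the person little chance
# of knowing the answer; such responses are treated as missing.
likely_guess = correct & (p_know < 0.2)

b_all = difficulty_estimate(correct, np.ones_like(correct))
b_clean = difficulty_estimate(correct, ~likely_guess)

# Hard items shift toward higher estimated difficulty once lucky guesses go.
for j in (0, n_items // 2, n_items - 1):
    print(f"true b = {b[j]:+.2f}  with guesses: {b_all[j]:+.2f}  "
          f"guesses removed: {b_clean[j]:+.2f}")

Running this shows easy items barely moving while hard items shift sharply upward in estimated difficulty, the same direction of effect the study reports. Note that any such flag also discards some genuinely known correct answers, and a real analysis would refit the full Rasch model rather than use proportion-correct logits; the sketch only illustrates the direction of the shift.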

Read the Original

This page is a summary of: Effects of Removing Responses With Likely Random Guessing Under Rasch Measurement on a Multiple-Choice Language Proficiency Test, Language Assessment Quarterly, October 2018, Taylor & Francis. DOI: 10.1080/15434303.2018.1534237.
