What is it about?
Schools around the world assess multilingual students' language proficiency to determine whether they would benefit from a language support program. Because these programs can provide essential services for students learning the language of instruction, the policies guiding these assessments merit careful study. It is well accepted that a good assessment must be valid (decisions based on assessment scores are appropriate to real-life situations) and reliable (consistent and free of error). However, a tension exists between validity and reliability. Validity is strengthened when the range and depth of the assessed knowledge and skills align with the real-life domain. Yet widening that range introduces greater potential for error, which weakens reliability. Conversely, narrowing the assessed knowledge and skills tends to increase reliability but weakens validity by restricting coverage of the domain. In this paper, we revisit the validity–reliability paradox by comparing initial assessment policies for K-12 English language support programs in six nations.
Why is it important?
In several of the national policies we examined, teachers' assessments were the basis for language support placement decisions. In the U.S., however, policies privilege reliability (lack of statistical error), so standardized assessments are the primary basis for these decisions. Comparing across countries reveals the trade-offs each policy makes between validity and reliability, and places each country's approach in a broader context.
This page is a summary of: Initial assessment for K-12 English language support in six countries: revisiting the validity–reliability paradox, Language and Education, February 2018, Taylor & Francis,
DOI: 10.1080/09500782.2018.1430825.