Are you thinking about adapting a test?
Ronald K. Hambleton believes that some common myths have a detrimental effect on test adaptation efforts.
Affiliated with the University of Massachusetts at Amherst, Hambleton is a distinguished voice in educational assessment and test adaptation. He has written extensively on these topics and his work was instrumental in the creation of the “ITC Guidelines for Translating and Adapting Tests.”
Let’s examine four of these test adaptation myths as identified by Hambleton:
Myth #1: Adapting a test for a second language group is better than developing a new test.
Cross-cultural comparisons and budget constraints are well-established reasons to adapt a test. However, not every case warrants adaptation. Sometimes creating a whole new test better serves test takers and other stakeholders.
The key is that every situation is different. Before rushing to adapt a test, take a moment to consider: For this particular assessment and the issues and stakeholders involved, is it better to adapt the test or create a new one altogether?
Myth #2: Test scores resulting from a test that has been translated well are linguistically and culturally valid for comparative purposes.
While a good translation is essential for comparative purposes, it does not by itself guarantee validity. Sometimes the features of the language or languages involved, or of the exam itself, can prevent balanced comparisons.
In the case of a timed test, for example, if the words of one language run significantly longer than those of another, then even though the content is equivalent, comparing the test scores would not be fair. The same problem arises when a test uses a multiple-choice format and some groups of test takers are not accustomed to that format.
Myth #3: All tests can be successfully translated into other languages and for other cultures.
Cultures don’t always share the same values. If what is being measured in a test is not valued in the same way by different groups of test takers, then a test’s responses and scores have limited meaning outside the group that the test was originally developed for.
Examples include intelligence tests where speed is considered more important than the final answer, or comparative quality-of-life assessments where the importance respondents assign to different answers may vary substantially and render the test’s weighting and reporting useless.
Myth #4: Anyone who knows two languages can do an acceptable job at translating a test.
While it’s true that translators absolutely need to master both the source language and the target language, that’s not all that is required for a successful test translation and adaptation. Subject expertise, cultural knowledge and familiarity with test development practices are also a must for test adaptation success.
When a translator possesses relevant subject expertise, he or she helps ensure that the translation is understandable by the target audience. Consider this: Could a translator with no medical or nursing knowledge do a good job translating a nursing exam? Probably not. That translator wouldn’t know the proper terminology or be able to express test items in a way that lets test takers demonstrate their knowledge.
As for cultural knowledge, translators need to be well versed in the target culture. Londoners and New Yorkers, for example, both speak English, but British culture is not the same as American culture. Test items written with one group in mind would likely confuse the other.
Lastly, familiarity with test development practices helps translators maintain the features of the original test, ensuring that an adaptation does not diverge radically from the original without good reason.
Readers, are there any other test adaptation myths you believe should be addressed and discarded?