Scores from different language tests intended for similar purposes (e.g., admission to higher education) are used to determine candidates’ language proficiency and readiness for a chosen domain. To be fair to all students, irrespective of the test they took, score requirements across tests should be comparable. Score concordance tables provide an empirical basis for such comparability when good practice principles are met.
We report on a score concordance project whose ambitious goal was to adhere to the good practice principles laid out in Knoch and Fan (2024). The providers of the two most widely used English language proficiency tests for academic admissions purposes, IELTS Academic and TOEFL iBT, collaborated to complete this project.
Research teams representing both tests recruited 969 test-takers who took the two tests in a counterbalanced order. The participants represented the major L1 groups of both test-taking populations. Every effort was made to keep the interval between the two test administrations short, to minimise any effect of changes in the test-takers’ language proficiency. Unlike existing score concordance studies, which rely on self-reported scores (a major limitation), this study used no self-reported score data: all score reports were verified by the test providers through their official score verification services. Equipercentile equating was conducted by a third party independent of the test providers. We discuss the challenges in meeting several of the good practice principles and present the implications for building trustworthy score concordance tables that help stakeholders make informed decisions about language test acceptance.
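For readers unfamiliar with the method, equipercentile equating links a score on one test to the score on the other test that occupies the same percentile rank in the study sample. In simplified form, and setting aside the smoothing typically applied in operational equating work, the linking function can be written as $e_Y(x) = F_Y^{-1}(F_X(x))$, where $F_X$ and $F_Y$ are the cumulative score distributions of the two tests and $e_Y(x)$ is the equivalent of score $x$ on the other test’s scale. This is a general characterisation of the technique, not a description of the specific procedure applied in the study.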
It should be noted that the tests and scores discussed in this report do not replace the English tests and scores accepted for Australian visa purposes at the time of writing, nor do they indicate that the Department of Home Affairs will accept these tests and scores for Australian visa purposes in the future. Accepted tests and scores can be found on the Department of Home Affairs website.