Comparison of IELTS Academic and Duolingo English Test


Sara T Cushing

Haoshan Ren

Date Published: 8 January 2022

This report provides an in-depth comparison between IELTS Academic and the Duolingo English Test (DET), based on a review of publicly available documentation and published scholarship on each test. We follow the analytical framework found in Taylor and Chan (2015), who employed and expanded on the socio-cognitive framework (SCF) for test validation introduced by Weir (2005). This paper is framed by the six components of the SCF: test taker characteristics, cognitive validity, context validity, scoring, consequences, and criterion-related validity.

In terms of test taker characteristics, our analysis of published demographic data suggests that the populations of test takers for the two tests are approximately equivalent in overall proficiency. While IELTS Academic is specifically designed for use in educational settings, the DET was originally designed as a general proficiency test. However, recent Duolingo publications have stated that its main purpose is to inform admissions decisions.

To compare the cognitive and context validity of the two tests, our analysis focuses on the four main language skills (reading, listening, speaking, and writing) and the specific test tasks targeting each skill. For all four skills, IELTS tasks elicit a wider range of cognitive processes than the DET tasks, and the DET items are generally less oriented toward the academic skills required in higher education contexts. In terms of scoring validity, despite large differences in the way scores are calculated, both tests appear to be scored reliably and to demonstrate internal consistency, and both testing organizations seem to have sufficient procedures in place for monitoring test performance. Our analysis of criterion-related validity suggests that there is a relationship between scores on the two tests; however, this relationship needs to be interpreted with caution. In particular, we were unable to find any publicly available information about how the DET mapped its scores onto the Common European Framework of Reference for Languages (CEFR). Finally, by analyzing available online discussions about the two tests, we discuss their consequential validity. Given that many test takers are focused on getting the highest possible scores, our analysis suggests that the test preparation strategies recommended for IELTS may be more applicable to future academic work than those for the DET.

In conclusion, we found that, compared to IELTS, DET test tasks under-represent the construct of academic language proficiency as it is commonly understood, i.e., the ability to speak, listen, read, and write in academic contexts. Most of the DET test tasks are heavily weighted toward vocabulary knowledge and syntactic parsing rather than comprehension or production of extended discourse. Scores on the two tests are correlated, which might suggest that the DET could be a reasonable substitute for IELTS, given its accessibility and low cost. However, even though knowledge of lexis and grammar is an essential enabling skill for higher-order cognitive skills, a test that focuses exclusively on these lower-level skills is probably more useful for making broad distinctions between low, intermediate, and high proficiency learners than for informing high-stakes decisions such as university admissions.