
Score comparison tables: Helpful shortcut or hidden risk for admissions?


Audience: Researchers

Category: News

Date Published: 07 July 2025

Concordance table

A concordance table is a tool that shows you the relationship between two sets of data, for example, scores on different language tests.

Setting English language proficiency score requirements is a complex and often resource-intensive task. Concordance (or test comparison) tables can be a useful tool for admissions professionals. However, it can be difficult to tell whether the scores have been equated correctly. In fact, a recent study suggests that universities and professional bodies equate scores very differently, which can lead to applicants ‘shopping around’ for the easiest option.

With so much information out there, how do you know which concordance studies to trust?

The providers of IELTS and TOEFL, the two leading English language assessments for students wishing to study in higher education, have collaborated to create an up-to-date study that takes a close look at how their scores compare.

This summary will provide a brief insight into why this study stands strong amid many similar concordance studies. It highlights the importance of using other evidence in addition to comparison tables for setting the right scores for international student admission, and assures stakeholders that choosing IELTS is the best way to maintain their institution’s high standards and reputation.

The study: a solid methodology producing strong correlations

The IELTS Partners (British Council, IDP: IELTS, Cambridge University Press & Assessment) and TOEFL iBT (ETS) teamed up to conduct a joint concordance study that involved a large sample size and a strict and thorough research methodology that aimed to meet all the necessary good practice principles.

Let’s take a closer look at the four key strengths that make this study a credible source for score setting:

1. Large number of participants

One of the main strengths of this study is that it involved almost 1000 participants from major L1 language groups. A large, representative sample significantly increases the reliability of the results.

2. Balanced test-taking strategy

The participants took the two exams in a counterbalanced manner, meaning that half (49.8%) of them took IELTS Academic first, while the other half (50.2%) took TOEFL iBT first. Additionally, the break between the two exam sittings was kept as short as possible (38.6 days on average) to avoid any changes in the test-takers’ language abilities.

3. All scores officially verified

Requiring test takers to self-report their scores during concordance studies reduces the reliability and trustworthiness of the results. In the case of this study, however, all scores were verified by examination boards.

4. Strong methodology used in establishing the correlations

The study followed a rigorous methodology to ensure all scores were properly compared and aligned. An independent third party conducted an equating procedure, which has also been used in other similar studies. This process showed that the correlations between the two tests, both by section and overall, were moderate to strong.

Using the results with caution

The study produced the following comparison table of IELTS and TOEFL scores:

IELTS | TOEFL iBT Overall | TOEFL iBT Listening | TOEFL iBT Reading | TOEFL iBT Writing | TOEFL iBT Speaking
9.0 | 120 | 30 | 30 | 30 | 30
8.5 | 115-119 | 28-29 | 28-29 | 30 | 29
8.0 | 108-114 | 26-27 | 27 | 30 | 28
7.5 | 100-107 | 24-25 | 25-26 | 28-29 | 26-27
7.0 | 91-99 | 22-23 | 22-24 | 26-27 | 24-25
6.5 | 81-90 | 19-21 | 19-21 | 23-25 | 22-23
6.0 | 67-80 | 16-18 | 16-18 | 19-22 | 19-21
5.5 | 51-66 | 12-15 | 12-15 | 14-18 | 17-18
5.0 | 37-50 | 8-11 | 8-11 | 9-13 | 14-16
4.5 | 26-36 | 3-7 | 4-7 | 4-8 | 11-13
4.0 | 14-25 | 0-2 | 1-3 | 1-3 | 7-10

What makes the concordance tables in this study reliable and highly suitable for score setting is the previously mentioned robust participant size, the officially verified scores and the independent equating process that ensured the scores correlate.
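For admissions teams who want to apply the table programmatically, the overall-score column can be expressed as a simple range lookup. The sketch below is illustrative only (the function name is our own); the ranges are copied directly from the table above, and, as the study's authors stress, any such lookup should supplement rather than replace evidence-based score setting.

```python
# Illustrative sketch: map a TOEFL iBT overall score to the comparable
# IELTS band, using the overall-score column of the concordance table.
# The ranges below are taken verbatim from the published table.

TOEFL_TO_IELTS = [
    (120, 120, 9.0),
    (115, 119, 8.5),
    (108, 114, 8.0),
    (100, 107, 7.5),
    (91, 99, 7.0),
    (81, 90, 6.5),
    (67, 80, 6.0),
    (51, 66, 5.5),
    (37, 50, 5.0),
    (26, 36, 4.5),
    (14, 25, 4.0),
]


def toefl_overall_to_ielts(score: int):
    """Return the comparable IELTS band for a TOEFL iBT overall score,
    or None if the score falls outside the table's range (0-13)."""
    for low, high, band in TOEFL_TO_IELTS:
        if low <= score <= high:
            return band
    return None
```

For example, `toefl_overall_to_ielts(105)` returns `7.5`, matching the table's 100-107 row. Section scores could be handled the same way with the remaining columns.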

For concordance tables to be truly meaningful, the two exams being compared must be similar in these key areas:

  • their test constructs, i.e. the language abilities they test and what methods they use to assess them in general
  • validity, i.e. the tests should assess what they set out to assess, which is students’ readiness to successfully perform in academic contexts
  • reliability, i.e. the tests should have a consistent structure (including timing, question format and delivery), manner of administration (testing conditions) and scoring system.

IELTS and TOEFL are similar in these regards, so a comparison can produce a reliable concordance table. Even if two tests meet all these criteria, however, there are limits to what concordance tables can reveal about their scores.

Why one table is not enough – the importance of evidence-based score-setting

While concordance tables provide valuable insights, they don’t give admissions teams sufficient information to set language requirements. Scores must reflect current realities, which involves taking an evidence-based approach based on the following:

  • Take guidance from test providers: Ensure your score requirements are aligned with the latest recommendations for each test. This ensures you’re staying up to date with trends in education and assessment as they frequently change and vary across providers.
  • Understand the assessment: Align the design to your institution’s needs by learning what each test assesses and reviewing materials like marking criteria and sample answers.
  • Build a diverse decision-making group: Involve experts from various backgrounds and perspectives to offer input to avoid oversights when setting requirements. For example, your English language centre may have insights into how strongly different scores correlate with performance.

Top tips for score setting

Instead of assuming a one-to-one correspondence between exams, we highly recommend institutions start by looking at the evidence and developing a thorough understanding of the assessments. Afterwards, they can use the concordance tables to validate their findings.

We suggest answering two important questions when setting initial scores:

1. What is the minimal level of English that would enable an individual to cope with the linguistic demands involved in the course to which they are applying?

2. How does this minimally acceptable level of English translate into scores on the IELTS test?

Having this in-depth understanding of the assessment and the language skills it tests can give institutions a clearer understanding of what the applicants are truly capable of.

Tests and scores can change, but skills remain

Once a concordance table is established, the data may seem fixed. However, changes in test constructs, task structures, or scoring systems can all affect the comparison. Individual institutions may also change their language requirements, which means the comparison needs to be revisited.

What ultimately matters is the applicant’s performance, which again means that it is best practice to use other sources of evidence to judge their readiness. After all, setting fair and effective English score requirements is critical to maintaining academic standards and supporting student success.

So, while concordance tables can be helpful tools, they should be used alongside robust assessment research, band descriptors, your institutional data and professional judgement to make informed decisions. We recommend that institutions avail themselves of all the support that IELTS has to offer. Local IELTS representatives can also offer further support.

To explore our findings in more depth, download the full IELTS-TOEFL concordance study report.
