The number of international and local students whose first language is not English and who are studying in English-medium universities has increased significantly in the past decade. Many of these students aim to start working in the country they studied in; however, some employers have suggested that graduates seeking employment have insufficient language skills.
This study provides a detailed insight into the changing writing demands from the last year of university study to the first year in the workforce of engineering and accounting professionals (our two case study professions). It relates these to the demands of the writing component of IELTS, which is increasingly used for exit or professional entry testing, although not expressly designed for this purpose.
Data include interviews with final-year students, lecturers, employers, new graduates in their first few years in the workforce, and professional board members. Employers also reviewed final-year assignments as well as IELTS writing samples at different levels.
Most stakeholders agreed that graduates entering the workforce are underprepared for the writing demands of their professions. Compared with university writing tasks, the workplace writing expected of new graduates was perceived as different in terms of genre, the tailoring of a text for a specific audience, and the processes of review and editing involved.
Stakeholders expressed a range of views on the suitability of the use of academic proficiency tests (such as IELTS) as university exit tests and for entry into the professions. With regard to IELTS, while some saw the relevance of the two writing tasks, particularly in relation to academic writing, others questioned the extent to which two timed tasks representing limited genres could elicit a representative sample of the professional writing required, particularly in the context of engineering.
The findings are discussed in relation to different test purposes, the intersection between academic and specific-purpose testing, and the role of domain experts in test validation.