Test Development, Piloting and Statistical Analysis
From the outset, the goal of Q-LEVEL was to develop scaled online language tests for institutions and companies that generate a detailed, accurate language profile in as short a time as possible. This requirement for brevity led to computer-adaptive testing and to multiple-choice tasks with little formal variation, a key prerequisite for the optimal calibration of the items in the item bank.
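The adaptive algorithm itself is not described here, but the general principle can be sketched. Assuming a Rasch (1PL) model, which is a common basis for calibrated item banks, each step of an adaptive test selects the unused item that is most informative at the candidate's current ability estimate. All function names, item identifiers and difficulty values below are invented for illustration:

```python
import math

# Illustrative sketch of one computer-adaptive testing step under a
# Rasch (1PL) model. Q-LEVEL's actual algorithm and item parameters
# are not specified in this document.

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability of answering the item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def item_information(ability: float, difficulty: float) -> float:
    """Fisher information of an item at the current ability estimate."""
    p = p_correct(ability, difficulty)
    return p * (1.0 - p)

def next_item(ability: float, bank: dict[str, float], administered: set[str]) -> str:
    """Pick the unused item that is most informative at this estimate."""
    candidates = {k: d for k, d in bank.items() if k not in administered}
    return max(candidates, key=lambda k: item_information(ability, candidates[k]))

# Invented item bank: item id -> calibrated difficulty (in logits).
bank = {"voc_017": -1.2, "gra_042": 0.3, "rea_008": 1.1, "lis_023": 2.0}
print(next_item(ability=0.5, bank=bank, administered={"voc_017"}))
```

Because every answer updates the ability estimate and the next item is chosen to be maximally informative at that estimate, an adaptive test reaches a stable measurement with far fewer items than a fixed-form test, which is what makes the intended short testing time feasible.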
For a company in the heart of Europe, the action-oriented model of language use of the Common European Framework of Reference for Languages (CEFR) was the obvious choice. The test objectives, contents and tasks were defined on this basis.
The global scale is used to describe performance on the six CEFR levels, while the subtests for vocabulary, grammar, reading comprehension and listening comprehension draw on the corresponding specific CEFR scales. The certificate, which includes a language profile, refers to these scales.
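The resulting language profile can be pictured as one overall level plus one level per subtest. The field names and values below are invented; the document states only that performance is reported on the six CEFR levels (A1 to C2):

```python
# Illustrative shape of the language profile on the certificate; the
# keys and levels shown here are examples, not Q-LEVEL's actual format.
profile = {
    "overall": "B2",      # global scale
    "vocabulary": "B2",   # specific CEFR scale per subtest
    "grammar": "B1",
    "reading": "B2",
    "listening": "C1",
}
```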
The test tasks and texts were developed by experienced test authors and item writers. In each case, standardisation was carried out by a specialist committee of four to five people.
Between 500 and 1,500 people per test language took part in the piloting. The results of the pilot runs were analysed statistically: items, texts or distractors that proved too easy or too difficult and therefore did not match the defined level, as well as unsuitable response options, distractors or instructions, were subsequently revised.
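The exact statistics used are not specified here; facility (proportion correct) and point-biserial discrimination are, however, standard criteria in classical item analysis of pilot data. The sketch below, with invented threshold values, shows how items might be flagged for revision on that basis:

```python
import statistics  # statistics.correlation requires Python >= 3.10

# Illustrative classical item analysis for pilot data. The thresholds
# and the choice of statistics are assumptions, not Q-LEVEL's actual
# revision criteria.

def item_facility(item_scores: list[int]) -> float:
    """Proportion of correct answers (items scored 0/1)."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores: list[int], total_scores: list[float]) -> float:
    """Correlation of item score with total score (discrimination)."""
    return statistics.correlation(item_scores, total_scores)

def flag_item(item_scores, total_scores, lo=0.2, hi=0.9, min_disc=0.2):
    """Flag items that are too hard, too easy, or poorly discriminating."""
    facility = item_facility(item_scores)
    disc = point_biserial(item_scores, total_scores)
    flags = []
    if facility < lo:
        flags.append("too difficult")
    if facility > hi:
        flags.append("too easy")
    if disc < min_disc:
        flags.append("weak discrimination")
    return facility, disc, flags

# Invented pilot data: one item's 0/1 scores and the test-takers' totals.
item = [1, 0, 1, 1, 0, 1]
totals = [42.0, 18.0, 35.0, 40.0, 22.0, 30.0]
print(flag_item(item, totals))
```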
For test development, we used the following foundational works:
- ALTE (Association of Language Testers in Europe): Handbook for Developing and Conducting Language Tests, produced on behalf of the Council of Europe, 2012
- Council of Europe: Common European Framework of Reference for Languages: Learning, Teaching, Assessment, 2001
- Council of Europe: Relating Language Examinations to the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR). A Manual, 2009