TY - JOUR
T1 - Understanding performance in test taking
T2 - The role of question difficulty order
AU - Anaya, Lina
AU - Iriberri, Nagore
AU - Rey-Biel, Pedro
AU - Zamarro, Gema
N1 - Funding Information:
Nagore Iriberri acknowledges funding by grant PID2019-106146GB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe” and grant IT367-19 funded by the Basque Government.
Publisher Copyright:
© 2022
PY - 2022/10
Y1 - 2022/10
N2 - Standardized assessments are widely used to determine access to educational resources, with important consequences for later economic outcomes in life, and to evaluate and compare educational systems. However, many design features of the tests themselves may lead to psychological reactions that influence performance. In particular, the level of difficulty of the earlier questions in a test may affect performance on later questions. How should we order test questions by level of difficulty so that test performance offers an accurate assessment of the test taker's aptitudes and knowledge? In a field experiment with about 19,000 participants, conducted in collaboration with an online teaching platform, we randomly assign participants to different orders of difficulty and find that ordering the questions from easiest to most difficult yields the lowest probability of abandoning the test, as well as the highest number of correct answers. We obtain similar results when exploiting the random variation of difficulty across test booklets in the Programme for International Student Assessment (PISA) for the years 2009, 2012, and 2015, which provides additional external validity for the experiment. We conclude that question difficulty order has important policy implications for optimal test design and performance. It may additionally have important implications for ranking candidates, as well as for the evaluation and comparison of educational institutions and systems.
KW - Difficulty
KW - Question order
KW - Test performance
UR - http://www.scopus.com/inward/record.url?scp=85135896827&partnerID=8YFLogxK
U2 - 10.1016/j.econedurev.2022.102293
DO - 10.1016/j.econedurev.2022.102293
M3 - Article
AN - SCOPUS:85135896827
SN - 0272-7757
VL - 90
JO - Economics of Education Review
JF - Economics of Education Review
M1 - 102293
ER -