2019 | 5 (82) | 43-51
Article title

Elo Rating Algorithm for the Purpose of Measuring Task Difficulty in Online Learning Environments

Abstract
The Elo rating algorithm, originally developed to measure player strength in chess tournaments, has also found application in educational research, where it has been used to estimate both learner ability and task difficulty. The quality of the estimates produced by the Elo rating algorithm has already been studied, and the algorithm has been shown to deliver accurate estimates in both low- and high-stakes testing situations. However, little is known about its performance in learning environments where multiple attempts are allowed, feedback is provided, and the learning process spans several weeks or even months. This study extends research on the use of the Elo algorithm in an educational context and examines its performance on real data from an online learning environment in which multiple attempts were allowed and feedback was provided after each attempt. Its performance, in terms of the stability of the estimation results across two analyzed periods for two groups of learners with different initial levels of knowledge, is compared with two alternative difficulty estimation methods: proportion correct and learner feedback. According to the results, the Elo rating algorithm outperforms both proportion correct and learner feedback. It delivers stable difficulty estimates, with correlations in the range 0.87–0.92 for the group of beginners and 0.72–0.84 for the group of experienced learners.
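The core mechanism referred to above can be sketched as follows. This is a minimal illustration of an Elo-style update for learner ability and task difficulty, assuming a logistic expectation function and a fixed K-factor; the constant used here (K = 0.4) and the natural-logistic scale are illustrative assumptions, not the parameters used in the study.

```python
import math

K = 0.4  # update step size (illustrative assumption; in practice often tuned or decayed)

def expected(ability: float, difficulty: float) -> float:
    """Probability of a correct answer under a logistic Elo-style model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def elo_update(ability: float, difficulty: float, correct: bool) -> tuple[float, float]:
    """Update both estimates after one observed answer.

    The learner's ability moves toward the outcome, while the item's
    difficulty moves in the opposite direction by the same amount.
    """
    p = expected(ability, difficulty)
    delta = K * ((1.0 if correct else 0.0) - p)
    return ability + delta, difficulty - delta

# Example: a learner and an item both start at 0.0; after a correct answer,
# the ability estimate rises and the difficulty estimate falls.
a, d = elo_update(0.0, 0.0, correct=True)
```

For comparison, the proportion-correct baseline mentioned in the abstract simply estimates difficulty as one minus the fraction of correct attempts on an item, which, unlike the Elo update, ignores who attempted it.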
  • Warsaw University of Life Sciences