BOBCAT: Bilevel Optimization-Based Computerized Adaptive Testing

Aritra Ghosh, Andrew Lan

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2410-2417. https://doi.org/10.24963/ijcai.2021/332

Computerized adaptive testing (CAT) refers to a form of testing that is personalized to every student/test taker. CAT methods adaptively select the next most informative question/item for each student given their responses to previous questions, effectively reducing test length. Existing CAT methods use item response theory (IRT) models to relate a student's ability to their responses to questions, together with static question selection algorithms designed to reduce the ability estimation error as quickly as possible; as a result, these algorithms cannot improve by learning from large-scale student response data. In this paper, we propose BOBCAT, a Bilevel Optimization-Based framework for CAT that directly learns a data-driven question selection algorithm from training data. BOBCAT is agnostic to the underlying student response model and is computationally efficient during the adaptive testing process. Through extensive experiments on five real-world student response datasets, we show that BOBCAT outperforms existing CAT methods (sometimes significantly) at reducing test length.
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Humans and AI: Computer-Aided Education
Humans and AI: Personalization and User Modeling
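
As an illustration of the bilevel structure described in the abstract, below is a minimal PyTorch sketch, not the authors' released code: the inner level fits each student's ability under a 1PL (Rasch) IRT model to the adaptively selected responses, while the outer level updates global difficulty estimates and a neural question-selection policy to reduce prediction error on held-out questions. The synthetic data generator, the network sizes, the helper `inner_ability_estimate`, and the REINFORCE-style handling of the discrete selection step are all illustrative assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic responses from a 1PL (Rasch) IRT model: P(correct) = sigmoid(theta - b).
n_students, n_questions, n_select = 256, 50, 5
theta_true = torch.randn(n_students, 1)        # latent student abilities
b_true = torch.randn(n_questions)              # question difficulties
responses = torch.bernoulli(torch.sigmoid(theta_true - b_true))

# Candidate pool for adaptive selection vs. held-out "meta" questions
# that define the outer (bilevel) objective.
pool_idx, meta_idx = torch.arange(0, 40), torch.arange(40, 50)

# Outer-level parameters: a selection policy that scores candidate questions
# from the response history, plus global difficulty estimates.
policy = nn.Sequential(nn.Linear(2 * len(pool_idx), 64), nn.ReLU(),
                       nn.Linear(64, len(pool_idx)))
b_hat = nn.Parameter(torch.zeros(n_questions))
opt_outer = torch.optim.Adam(list(policy.parameters()) + [b_hat], lr=1e-2)


def inner_ability_estimate(sel_q, sel_r, b_fixed, steps=25, lr=0.1):
    """Inner level: fit each student's ability to the adaptively selected
    responses by gradient descent on the 1PL negative log-likelihood,
    with the question difficulties held fixed."""
    theta = torch.zeros(sel_q.shape[0], 1, requires_grad=True)
    for _ in range(steps):
        nll = F.binary_cross_entropy_with_logits(
            theta - b_fixed[sel_q], sel_r, reduction="sum")
        (grad,) = torch.autograd.grad(nll, theta)
        theta = (theta - lr * grad).detach().requires_grad_(True)
    return theta.detach()


for epoch in range(201):
    asked = torch.zeros(n_students, len(pool_idx))    # which pool questions were asked
    answers = torch.zeros(n_students, len(pool_idx))  # the corresponding responses
    log_probs = []
    for _ in range(n_select):
        state = torch.cat([asked, answers], dim=1)
        scores = policy(state).masked_fill(asked.bool(), float("-inf"))
        dist = torch.distributions.Categorical(logits=scores)
        action = dist.sample()                        # next question per student
        log_probs.append(dist.log_prob(action))
        r = responses[torch.arange(n_students), pool_idx[action]]
        asked[torch.arange(n_students), action] = 1.0
        answers[torch.arange(n_students), action] = r

    # Inner loop: per-student ability estimates from the selected responses.
    sel_local = asked.nonzero()[:, 1].view(n_students, n_select)
    theta_hat = inner_ability_estimate(
        pool_idx[sel_local], answers.gather(1, sel_local), b_hat.detach())

    # Outer objective: predict the held-out meta responses.
    meta_loss = F.binary_cross_entropy_with_logits(
        theta_hat - b_hat[meta_idx], responses[:, meta_idx],
        reduction="none").mean(1)

    # Outer update: difficulties get a direct gradient; the discrete selection
    # step is handled with a REINFORCE-style estimator, a simplification of
    # the paper's approximate policy-gradient treatment.
    reward = -meta_loss.detach()
    advantage = reward - reward.mean()
    pg_loss = -(torch.stack(log_probs, dim=1).sum(1) * advantage).mean()
    opt_outer.zero_grad()
    (meta_loss.mean() + pg_loss).backward()
    opt_outer.step()

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  meta BCE {meta_loss.mean().item():.4f}")
```

The sketch keeps the two levels of the framework explicit: the inner loop adapts only per-student parameters (ability), so it stays cheap at test time, while everything learned from data across students (the selection policy and the response-model parameters) lives in the outer loop.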