Abstract

Proceedings Abstracts of the Twenty-Fourth International Joint Conference on Artificial Intelligence

Algorithmic Exam Generation
Omer Geiger, Shaul Markovitch

Given a class of students and a pool of questions in the domain of study, what subset will constitute a good exam? Millions of educators worldwide face this difficult problem, yet exams are still composed manually in non-systematic ways. In this work we present a novel algorithmic framework for exam composition. Our framework requires two input components: a student population, represented by a distribution over overlay models, each consisting of a set of mastered abilities, or actions; and a target model ordering that, given any two student models, defines which should receive the higher grade. To determine the performance of a student model on a potential question, we test whether it satisfies a disjunctive action landmark, i.e., whether its abilities are sufficient to follow at least one solution path. Using these components, we present a novel utility function for evaluating exams. An exam is evaluated highly if it is expected to order the student population with high correlation to the target order. The merit of our algorithmic framework is exemplified with real auto-generated questions in the domain of middle-school algebra.
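To make the evaluation scheme concrete, the following is a minimal Python sketch of the ideas described in the abstract. All names (satisfies, grade, exam_utility, best_exam) and the representation of a question as a list of solution-path ability sets are illustrative assumptions, not the authors' implementation; the target ordering is approximated here by a vector of target scores, and Kendall's tau stands in for whatever correlation measure the paper actually uses.

```python
import itertools

# Assumed representation (not the authors' code): a student model is a
# frozenset of mastered abilities; a question is a list of solution paths,
# each path being the frozenset of abilities it requires.

def satisfies(abilities, question):
    """Disjunctive action landmark test: the model answers the question
    correctly if its abilities cover at least one solution path."""
    return any(path <= abilities for path in question)

def grade(abilities, exam):
    """Exam grade: number of questions the model answers correctly."""
    return sum(satisfies(abilities, q) for q in exam)

def kendall_tau(xs, ys):
    """Plain O(n^2) Kendall tau-a between two score vectors."""
    pairs = list(itertools.combinations(range(len(xs)), 2))
    concordant = discordant = 0
    for i, j in pairs:
        a = (xs[i] > xs[j]) - (xs[i] < xs[j])
        b = (ys[i] > ys[j]) - (ys[i] < ys[j])
        if a * b > 0:
            concordant += 1
        elif a * b < 0:
            discordant += 1
    return (concordant - discordant) / len(pairs)

def exam_utility(exam, population, target_scores):
    """Utility of an exam: correlation between the order its grades induce
    on the student models and the target order (here approximated by a
    vector of target scores)."""
    grades = [grade(m, exam) for m in population]
    return kendall_tau(grades, target_scores)

def best_exam(pool, k, population, target_scores):
    """Brute-force composition: the k-question subset of the pool with the
    highest utility. The paper's actual search procedure may differ."""
    return max(itertools.combinations(pool, k),
               key=lambda exam: exam_utility(exam, population, target_scores))

# Toy usage with hypothetical middle-school-algebra abilities.
q0 = [frozenset({"add", "sub"})]                       # one solution path
q1 = [frozenset({"mul"}), frozenset({"add", "dist"})]  # two alternative paths
q2 = [frozenset({"mul", "div"})]
population = [frozenset({"add", "sub"}),
              frozenset({"add", "sub", "mul"}),
              frozenset({"add", "sub", "mul", "div"})]
target_scores = [1, 2, 3]  # weaker to stronger
print(best_exam([q0, q1, q2], 2, population, target_scores))
```

In the paper the population is a distribution over overlay models, so the utility would be an expectation over models sampled from that distribution; the fixed list above stands in for such a sample.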