
Performing automatic exams

Frosini, G., Lazzerini, B. & Marcelloni, F.

Computers & Education, Volume 31, Number 3. ISSN 0360-1315. Publisher: Elsevier Ltd.


In this paper we describe a tool for building software systems that replace the examiner in a typical Italian academic exam in technical/scientific subjects. Such systems are designed to exploit the advantages of self-adapted testing (SAT), which reduces the effect of anxiety, and of computerised adaptive testing (CAT), which increases assessment efficiency. A SAT-like pre-exam determines the starting difficulty level of the subsequent CAT. The exam can thus be dynamically adapted to the ability of the student, i.e. made more difficult or easier as required. The examiner simply needs to associate a level of difficulty with a suitable number of initial queries. After posing these queries to a sample group of students and collecting statistics, the tool can automatically associate a level of difficulty with each subsequent query by submitting it to sample groups of students. In addition, the tool can automatically assign a score to the levels and to the queries. Finally, the systems collect statistics so as to measure the easiness and selectivity of each query and to evaluate the validity and reliability of an exam.
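The adaptive mechanism the abstract describes can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the function name, the simple one-step up/down level adjustment, and the `answer_correct` callback are all illustrative assumptions, standing in for the paper's SAT pre-exam and statistically calibrated difficulty levels.

```python
# Hypothetical sketch of a CAT-like adaptive loop (not the authors' method):
# a pre-exam supplies the starting level, then each correct answer raises
# the difficulty and each wrong answer lowers it.

def run_adaptive_exam(answer_correct, start_level, num_levels, num_queries):
    """answer_correct(level) -> bool simulates a student's response
    to a query of the given difficulty level (0 = easiest)."""
    level = start_level
    history = []
    for _ in range(num_queries):
        correct = answer_correct(level)
        history.append((level, correct))
        # Adapt: harder after a success, easier after a failure,
        # clamped to the available range of difficulty levels.
        level = min(level + 1, num_levels - 1) if correct else max(level - 1, 0)
    return history

# Example: a simulated student who reliably answers queries below level 3.
history = run_adaptive_exam(lambda lvl: lvl < 3, start_level=0,
                            num_levels=5, num_queries=6)
# The exam climbs to level 3, then oscillates around the student's ability.
```

In a full system, the per-level statistics the paper mentions (easiness, selectivity) would replace this fixed one-step rule with calibrated difficulty estimates.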


Frosini, G., Lazzerini, B. & Marcelloni, F. (1998). Performing automatic exams. Computers & Education, 31(3), 281-300. Elsevier Ltd. Retrieved October 20, 2019.

This record was imported from Computers & Education on January 30, 2019. Computers & Education is a publication of Elsevier.

Full text is available on ScienceDirect.

