Experimental Evidence on the Effectiveness of Automated Essay Scoring in Teacher Education Cases
Journal of Educational Computing Research Volume 35, Number 3, ISSN 0735-6331
Research on computer-based writing evaluation has only recently focused on its potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy preservice teachers in four teacher education classes were assigned to complete two cases. Each student was randomly assigned either to a condition in which the AES was available (experimental condition) or to one in which it was unavailable (control condition). Students in the experimental condition who opted to use the AES submitted final human-scored essays that were rated more highly (in the second of the two case studies) and conducted more relevant searches (in both case studies) than students in the control condition or students in the experimental condition who chose not to use the scorer. (Contains 2 tables and 6 figures.)
Riedel, E., Dexter, S.L., Scharber, C. & Doering, A. (2006). Experimental Evidence on the Effectiveness of Automated Essay Scoring in Teacher Education Cases. Journal of Educational Computing Research, 35(3), 267-287.