Experimental Evidence on the Effectiveness of Automated Essay Scoring in Teacher Education Cases
Journal of Educational Computing Research Volume 35, Number 3, ISSN 0735-6331

Abstract

Research on computer-based writing evaluation has only recently focused on its potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy preservice teachers in four teacher education classes were assigned to complete two cases. Each student was randomly assigned either to a condition in which the AES was available (experimental condition) or to a condition in which it was unavailable (control condition). Students in the experimental condition who opted to use the AES submitted final essays that received higher human scores (in the second of the two case studies) and conducted more relevant searches (in both case studies) than students in the control condition or students in the experimental condition who chose not to use the scorer. (Contains 2 tables and 6 figures.)

Citation

Riedel, E., Dexter, S.L., Scharber, C. & Doering, A. (2006). Experimental Evidence on the Effectiveness of Automated Essay Scoring in Teacher Education Cases. Journal of Educational Computing Research, 35(3), 267-287. Retrieved June 18, 2019 from .

This record was imported from ERIC on April 18, 2013. [Original Record]

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.
