Comparing the Validity of Automated and Human Scoring of Essays
ARTICLE

D.E. Powers, J.C. Burstein, M.S. Chodorow, M.E. Fowles, K. Kukich

Journal of Educational Computing Research, Volume 26, Number 4, ISSN 0735-6331

Abstract

Discusses the validity of automated, or computer-based, scoring for improving the cost effectiveness of performance assessments and describes a study that examined the relationship of scores from a graduate-level writing assessment to several independent, non-test indicators of examinees' writing skills, both for automated scores and for scores assigned by trained human readers. (Author/LRW)

Citation

Powers, D.E., Burstein, J.C., Chodorow, M.S., Fowles, M.E., & Kukich, K. (2002). Comparing the Validity of Automated and Human Scoring of Essays. Journal of Educational Computing Research, 26(4), 407. Retrieved November 17, 2019.

This record was imported from ERIC on April 18, 2013.

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.

Keywords