Large-Scale Assessment, Locally-Developed Measures, and Automated Scoring of Essays: Fishing for Red Herrings?
ARTICLE

Assessing Writing, Volume 18, Number 1, ISSN 1075-2935

Abstract

Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the Criterion® Online Writing Evaluation Service as products for scoring writing tests, and most of the responses have been negative. While the criticisms leveled at AES are reasonable, the more important, underlying issues concern the aspects of the writing construct that AES-scored tests can actually rate. Because these tests underrepresent the construct as the writing community understands it, they should not be used in writing assessment, whether for admissions, placement, formative, or achievement testing. Instead of continuing the traditional, large-scale, commercial testing enterprise associated with AES, we should look to well-established, institutionally contextualized forms of assessment as models that yield fuller, richer information about a student's control of the writing construct. Such assessments would be more valid, just as reliable, and far fairer to test-takers, whose stakes are often quite high.

Citation

Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing, 18(1), 100-108. Retrieved December 5, 2021.

This record was imported from ERIC on April 18, 2013.

