Comparison of Automated Scoring Methods for a Computerized Performance Assessment of Clinical Judgment
ARTICLE

Harik, P., Baldwin, P. & Clauser, B.

Applied Psychological Measurement, Volume 37, Number 8, ISSN 0146-6216

Abstract

Growing reliance on complex constructed-response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few published studies compare automated scoring strategies. Here, comparisons are made among five strategies for machine-scoring examinee performances on computer-based case simulations, a complex item format used to assess physicians' patient-management skills as part of Step 3 of the United States Medical Licensing Examination. These strategies use expert judgments to derive either (a) case-specific or (b) generic scoring algorithms. The trade-offs among efficiency, validity, and reliability that characterize each scoring approach are described and compared.

Citation

Harik, P., Baldwin, P. & Clauser, B. (2013). Comparison of Automated Scoring Methods for a Computerized Performance Assessment of Clinical Judgment. Applied Psychological Measurement, 37(8), 587-597. Retrieved February 20, 2020.

This record was imported from ERIC on November 3, 2015.

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.

Keywords