Not All Rubrics Are Equal: A Review of Rubrics for Evaluating the Quality of Open Educational Resources
ARTICLE

M. Yuan & M. Recker, Utah State University

IRRODL Volume 16, Number 5, ISSN 1492-3831 Publisher: Athabasca University Press

Abstract

The rapid growth of Internet technologies has led to a proliferation of Open Educational Resources (OER), making the evaluation of OER quality a pressing need. In response, a number of rubrics have been developed to guide the evaluation of OER quality; these, however, have had little accompanying evaluation of their utility or usability. This article presents a systematic review of 14 existing quality rubrics developed for OER evaluation. These quality rubrics are described and compared in terms of content, development processes, and application contexts, as well as the kind of support they provide for users. Results from this research reveal great diversity among these rubrics, providing users with a wide variety of options. Moreover, the widespread lack of rating scales, scoring guides, empirical testing, and iterative revision for many of these rubrics raises reliability and validity concerns. Finally, the rubrics offer varying amounts of user support, affecting their overall usability and educational utility.

Citation

Yuan, M., & Recker, M. (2015). Not All Rubrics Are Equal: A Review of Rubrics for Evaluating the Quality of Open Educational Resources. The International Review of Research in Open and Distributed Learning, 16(5). Athabasca University Press. Retrieved June 19, 2019.
