Validity and Reliability of a Rubric for Assessing and Stimulating Quality in Effective Online Threaded Discussions
Jessica Hale, David Anderson, Eastern Michigan University, United States
Society for Information Technology & Teacher Education International Conference, Charleston, SC, USA. ISBN 978-1-880094-67-9. Publisher: Association for the Advancement of Computing in Education (AACE), Chesapeake, VA.
This study tests the validity and reliability of a rubric created to assess quality in online threaded discussions. Participants are faculty members in two groups: experts with extensive experience with online threaded discussions, and operant users with varying levels of experience. Using the Delphi method, the experts evaluate and develop the construct validity of the rubric. Operant users, randomly assigned to an experimental or control group, apply the rubric to samples of online threaded discussions; members of the experimental group complete a rubric training sequence before using the tool. Correlational, multiple regression, and generalized linear modeling analyses are used to examine test-retest reliability, inter-rater reliability, and the impact of external rater factors on reliability. The outcome of this study is a tool that instructors can use to evaluate the quality of student contributions to threaded discussions consistently and objectively.
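The abstract's inter-rater reliability question can be illustrated with a simple agreement statistic. The sketch below computes Cohen's kappa for two raters applying a rubric to the same set of discussion posts; this is a generic illustration of inter-rater agreement, not the authors' actual analysis (the study itself uses correlational, multiple regression, and generalized linear modeling approaches), and the rater data shown are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical scores (e.g., rubric levels).

    Kappa corrects raw percent agreement for the agreement expected by
    chance given each rater's marginal score distribution.
    """
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: proportion of items both raters scored identically.
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of the two raters' marginal proportions,
    # summed over rubric categories.
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical example: two raters scoring ten posts on a 1-4 rubric.
r1 = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
r2 = [3, 4, 2, 2, 1, 4, 3, 2, 4, 4]
print(round(cohens_kappa(r1, r2), 3))  # moderate-to-substantial agreement
```

Values near 1.0 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance, which in this study's context would suggest the rubric or rater training needs revision.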
Hale, J. & Anderson, D. (2009). Validity and Reliability of a Rubric for Assessing and Stimulating Quality in Effective Online Threaded Discussions. In I. Gibson, R. Weber, K. McFerrin, R. Carlsen & D. Willis (Eds.), Proceedings of SITE 2009--Society for Information Technology & Teacher Education International Conference (pp. 51-52). Charleston, SC, USA: Association for the Advancement of Computing in Education (AACE).