Understanding Student Engagement in Large-Scale Open Online Courses: A Machine Learning Facilitated Analysis of Student’s Reflections in 18 Highly Rated MOOCs

K. Hew, C. Qiao, and Y. Tang, The University of Hong Kong

IRRODL, Volume 19, Number 3. ISSN 1492-3831. Publisher: Athabasca University Press

Abstract

Although massive open online courses (MOOCs) have attracted much worldwide attention, scholars still understand little about the specific elements that students find engaging in these large open courses. This study makes an original contribution by using a machine learning classifier to analyze 24,612 reflective sentences posted by 5,884 students who participated in one or more of 18 highly rated MOOCs. Highly rated MOOCs were sampled because they exemplify good practices or teaching strategies. We selected highly rated MOOCs from Coursetalk, an open user-driven aggregator and discovery website that allows students to search and review various MOOCs. We defined a highly rated MOOC as a free online course that received an overall five-star course quality rating and at least 50 reviews from different learners within a specific subject area. We described six specific themes found across the entire data corpus: (a) structure and pace, (b) video, (c) instructor, (d) content and resources, (e) interaction and support, and (f) assignment and assessment. The findings of this study provide valuable insight into the factors that students find engaging in large-scale open online courses.
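
The abstract does not describe the classifier in detail, but the tf-idf related entries in the reference list below (Robertson, 2004; Wu, Luk, Wong, & Kwok, 2008) suggest a term-weighting approach to text classification. The following minimal sketch is illustrative only: it assumes Python with scikit-learn and uses invented placeholder sentences and labels, showing how reflective sentences might be assigned to the six themes with tf-idf features and a logistic regression classifier. It is a sketch under those assumptions, not the authors' actual pipeline.

    # Illustrative sketch only: assigns reflective sentences to one of the
    # six engagement themes using tf-idf features and logistic regression.
    # This is NOT the authors' pipeline; the sentences and labels below are
    # invented placeholders for demonstration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    THEMES = ["structure_and_pace", "video", "instructor",
              "content_and_resources", "interaction_and_support",
              "assignment_and_assessment"]

    # Hypothetical hand-labeled training sentences (one per theme here;
    # a real study would use many labeled examples per theme).
    train_sentences = [
        "The weekly schedule kept the workload manageable.",    # structure_and_pace
        "Short lecture videos made the ideas easy to follow.",  # video
        "The professor explained concepts with great humor.",   # instructor
        "The readings and slides were comprehensive.",          # content_and_resources
        "Forum discussions with peers helped me a lot.",        # interaction_and_support
        "The quizzes after each video reinforced learning.",    # assignment_and_assessment
    ]
    train_labels = THEMES  # labels aligned with the sentences above

    # tf-idf turns each sentence into a sparse vector in which terms that
    # are frequent within a sentence but rare across the corpus get higher
    # weight; the classifier then learns per-theme term weights.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(train_sentences, train_labels)

    print(model.predict(["The instructor was engaging and clear."]))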

Citation

Hew, K., Qiao, C., & Tang, Y. (2018). Understanding Student Engagement in Large-Scale Open Online Courses: A Machine Learning Facilitated Analysis of Student’s Reflections in 18 Highly Rated MOOCs. The International Review of Research in Open and Distributed Learning, 19(3). Athabasca University Press. Retrieved August 22, 2018.

References

  1. Anderson, A., Huttenlocher, D., Kleinberg, J., & Leskovec, J. (2014). Engaging with massive online courses. In Proceedings of the 23rd International Conference on World Wide Web (pp. 687-698).
  2. Baxter, J.A., & Haycock, J. (2014). Roles and student identities in online large course forums: Implications for practice. The International Review of Research in Open and Distributed Learning, 15(1), 20-40.
  3. Bonk, C.J., & Wisher, R.A. (2000). Applying collaborative and e-learning tools to military distance learning: A research framework (Technical Report #1107). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
  4. Conole, G. (2013). MOOCs as disruptive technologies: Strategies for enhancing the learner experience and quality of MOOCs. Revista de Educación a Distancia (RED), 39.
  5. Elliot, A.J., & Church, M.A. (1997). A hierarchical model of approach and avoidance achievement motivation. Journal of Personality and Social Psychology, 72, 218-232.
  6. Kizilcec, R., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: analyzing learner subpopulations in massive open online courses. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 170-179). New York, NY: ACM.
  7. Knowlton, D.S. (2000). A theoretical framework for the online classroom: A defense and delineation of a student‐centered pedagogy. New Directions for Teaching and Learning, 2000(84), 5-14.
  8. Robertson, S. (2004). Understanding inverse document frequency: On theoretical arguments for IDF. Journal of Documentation, 60(5), 503-520. https://doi.org/10.1108/00220410410560582
  9. Rodriguez, C.O. (2012). MOOCs and the AI-Stanford like courses: Two successful and distinct course formats for massive open online courses. European Journal of Open, Distance and E-Learning, 2012(I).
  10. Shenton, A.K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22, 63-75. https://doi.org/10.3233/EFI-2004-22201
  11. Sims, R. (2003). Promises of interactivity: Aligning learner perceptions and expectations with strategies for flexible and online learning. Distance Education, 24(1), 87-103.
  12. Szpunar, K.K., Khan, N.Y., & Schacter, D.L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110(16), 6313-6317.
  13. Wanzer, M.B. (2002). Use of humor in the classroom: The good, the bad, and the not-so-funny things that teachers say and do. In J.L. Chesebro & J.C. McCroskey (Eds.), Communication for teachers (pp. 116-125). Boston: Allyn & Bacon.
  14. Wu, H.C., Luk, R.W.P., Wong, K.F., & Kwok, K.L. (2008). Interpreting tf-idf term weights as making relevance decisions. ACM Transactions on Information Systems (TOIS), 26(3), 13.