International Review of Education, Volume 64, Issue 2, pp 241–263

Assessment approaches in massive open online courses: Possibilities, challenges and future directions

Original Paper

Abstract

The development of massive open online courses (MOOCs) has launched an era of large-scale interactive participation in education. While massive open enrolment and advances in learning technology are creating exciting potential for both formal and informal lifelong learning, the implementation of efficient and effective assessment remains problematic. To ensure that genuine learning occurs, both assessments for learning (formative assessments), which evaluate students’ current progress, and assessments of learning (summative assessments), which record students’ cumulative progress, are needed. Providers’ more recent shift towards granting certificates and digital badges for course accomplishments also indicates the need for proper, secure and accurate assessment results to ensure accountability. This article examines possible assessment approaches that fit open online education from formative and summative assessment perspectives. The authors discuss the importance of, and challenges to, implementing assessments of MOOC learners’ progress for both purposes. They then identify various formative and summative assessment approaches and analyse their respective advantages and disadvantages. They conclude that peer assessment is quite possibly the only universally applicable approach in massive open online education. They discuss the promises of peer assessment, its practical and technical challenges, current developments, and recommendations for its implementation. They also suggest some possible future research directions.

Keywords

massive open online course (MOOC) · formative assessment · summative assessment · peer assessment · lifelong learning (LLL)

Abstract (French)

Assessment approaches in massive open online courses: possibilities, challenges and future directions – The rise of massive open online courses (MOOCs) opens the way to an era of massive interactive participation in education. While free, massive enrolment and advances in learning technology create promising possibilities for both formal and informal lifelong learning, carrying out efficient and effective assessment remains an obstacle. To guarantee genuine learning, it is necessary to conduct both assessments for learning (formative assessments), which measure learners’ current progress, and assessments of learning (summative assessments), which record learners’ cumulative progress. Providers’ recent tendency to award certificates and digital badges for course completion also signals the need for proper, secure and accurate assessment results that guarantee accountability. The article examines possible assessment approaches suited to open online education from the perspectives of formative and summative assessment. The authors point out the importance of, and the challenges involved in, assessing the progress of MOOC learners for both purposes. They identify several formative and summative assessment approaches, examining and analysing their respective advantages and disadvantages. They conclude that peer assessment is very probably the only universally applicable approach in massive open online education. They present its promising aspects, its practical and technical challenges, current developments in its implementation, and recommendations. Finally, they propose several possible directions for future research.

Abstract (Chinese)

Assessment approaches for MOOCs: opportunities, challenges and future directions – MOOCs have opened a new era of large-scale interactive learning. Advances in educational technology have created many opportunities for lifelong learning, but implementing efficient learning assessment has also become a major challenge. To support student learning, both formative assessment (which gives students stage-by-stage feedback) and summative assessment (which evaluates the final outcomes of instruction) are necessary. Many MOOCs have begun issuing digital certificates to those who complete a course, a trend that makes secure and effective assessment especially important. This article describes the importance of, and challenges to, assessment in MOOCs, introduces formative and summative assessment methods suited to MOOCs, and analyses the strengths and weaknesses of each. The authors argue that peer assessment is a universally applicable assessment method in MOOCs. The article also discusses the advantages, challenges and development trends of peer assessment, along with issues in its practical application, and concludes with future directions for MOOC assessment methods.

Copyright information

© Springer Science+Business Media B.V., part of Springer Nature, and UNESCO Institute for Lifelong Learning 2018

Authors and Affiliations

  1. University of Pittsburgh, Pittsburgh, USA
  2. The Pennsylvania State University, University Park, USA
