Researchers see self-regulated learning (SRL) as a fundamental skill for succeeding in massive open online courses (MOOCs). However, there is insufficient evidence that SRL dimensions such as environment structuring, goal setting, time management, help-seeking, task strategies, and self-evaluation function adequately in the MOOC environment. This study fills the gap in understanding the structure of SRL skills utilising the Online Self-Regulated Learning Questionnaire (OSLQ). The construct-related validity of the OSLQ is evaluated with confirmatory factor analysis based on the self-reported survey responses of 913 Russian MOOC learners, and criterion-related validity is examined with independent-samples t-tests. The results show that the original six-factor hierarchical model does not fit the data adequately. The evidence implies that the dimension ‘help-seeking’ is not effective in the MOOC environment. Therefore, a redefined five-factor hierarchical model of the OSLQ is suggested.
The start of 2020 brought significant changes, challenging almost all spheres of life. The coronavirus (COVID-19) pandemic forced universities across the world to mobilise and move all teaching online. Not long ago MOOCs were a source of monetisation, but amid the coronavirus outbreak many MOOC providers offered temporary free access to MOOCs, helping universities teach remotely (Schwartz 2020). Although teaching online might seem like a good solution in such a situation, the long-identified but still unsolved problems of MOOCs could jeopardise its successful realisation.
One of the biggest challenges faced by MOOCs is the high percentage of learners who do not reach the finish line (Kizilcec et al. 2013; Perna et al. 2014; Reich and Ruipérez-Valiente 2019). Statistics show that the dropout rate can be as high as 90–98% on various online platforms (Rivard 2013; Perna et al. 2014; Reich 2014; Healy 2017). Researchers specify diverse factors which affect successful MOOC completion: for example, young people, the so-called ‘net generation’, are considered more digitally adept and show higher chances of finishing online courses than an older generation which needs more guidance (Bennett et al. 2008). MOOC learners’ motivation to learn and to develop a personal and professional identity is also related to course completion (Yuan and Powell 2013). Individuals who are self-regulated in their learning tend to achieve more positive academic outcomes than individuals who do not exhibit self-regulated learning behaviours (Barnard et al. 2010; Maldonado-Mahauad et al. 2018). The research conducted to date suggests that self-regulated learning (SRL) is the most essential skill for succeeding in MOOCs (Littlejohn et al. 2016). Barnard et al. (2010) describe self-regulated learners as those who are able to set their academic goals, manage their time, seek help from their peers and instructors when needed, monitor their work, evaluate their academic progress, and create a productive environment for learning. The value of self-regulated learning skills was first attributed to the traditional offline academic environment (Zimmerman 1990), and this idea was then brought to the online learning environment (Barnard et al. 2009; Milligan et al. 2013; Milligan and Littlejohn 2014; Fontana et al. 2015). As research shows, there is a significant relationship between learners’ self-regulated learning skills and MOOC completion (Milligan et al. 2013).
Barnard et al. (2009) conceptualise SRL as a complex construct consisting of six dimensions: environment structuring, goal setting, time management, help-seeking, task strategies, and self-evaluation. Although many studies confirm the importance of different dimensions of self-regulated learning for MOOC completion, we argue that not all of them apply to the MOOC environment. In particular, help-seeking, which implies face-to-face or online meetings with classmates and getting help from the instructor through email, does not seem relevant for the MOOC context. By design, MOOCs offer limited interaction between learners and instructors. As Baker et al. (2018) find, only 7% of MOOC learners received feedback from instructors. It has also been shown that the level of MOOC learners’ activity on online platforms is low: 90% of learner activity consists of reviewing the same information (Breslow et al. 2013) and 94% of learners do not take part in online discussions (Qiu et al. 2016). Although researchers highlight the importance of learner-instructor interaction for increasing distance learners’ motivation and satisfaction (Moore 1989), only learner-content interaction has so far been implemented to its full potential (Gameel 2017). Therefore, our study fills the gap in understanding the structure of self-regulated learning skills in the MOOC environment utilising the Online Self-Regulated Learning Questionnaire (OSLQ).
Self-regulated learning in online settings
An increasing number of studies investigate the role of self-regulation skills in online learning (Bernacki et al. 2011; Wang et al. 2013). In self-paced, open-ended, and non-linear learning environments students gain autonomy, but this autonomy also adds responsibility for their own learning process. Self-regulated learning skills are critically important in the online environment because students must plan, manage, and control their learning activities in order to finish courses successfully (Wang et al. 2013). The ability to self-regulate the learning process helps achieve personal objectives in MOOCs: goal setting and strategic planning are strong predictors of goal attainment (Kizilcec et al. 2017). Interviews conducted with MOOC students indicate that learners with high SRL tend to connect their online education experience to their personal needs, such as career advancement (Littlejohn et al. 2016). The research by Littlejohn et al. (2016) also demonstrated that students with a high level of SRL take a more flexible approach to organising their learning process. For example, highly self-regulated students spend more time watching video lectures and submitting weekly tests, and they are also more likely to revisit course materials (Kizilcec et al. 2017).
Interaction between learners and instructors is also deemed a vital element of SRL. The results of the study by Sunar et al. (2017) suggest that dropout rates are lower when learners engage in repeated and frequent social interactions. Their analysis of the discussions in an eight-week FutureLearn MOOC with 9855 participants revealed that learners who start following someone, and who interact with those they follow, are more likely to finish the course. Feelings of isolation and having no one to ask for help increase the chances of failing a MOOC (Khalil and Ebner 2014). Despite the widely accepted importance of social engagement, the MOOC environment does not allow participants to interact easily with other learners and instructors. MOOC instructors cannot invest much time in taking care of a large number of students, so their attention to each student is limited. This does not make social engagement less important; rather, it places barriers on the success of MOOCs.
Validity of the online self-regulated learning questionnaire
The Online Self-Regulated Learning Questionnaire (OSLQ) has become one of the most widely used instruments for assessing self-regulated learning skills in the online environment (Barnard et al. 2009). The OSLQ has been used in both cross-sectional (e.g. Kintu and Zhu 2016; Onah and Sinclair 2016) and longitudinal (Tabuenca et al. 2015) studies, and has been adapted into Turkish (Korkmaz and Kaya 2012), Romanian (Cazan 2014), Russian (Martinez-Lopez et al. 2017) and Chinese (Fung et al. 2018). However, to the best of our knowledge, none of the previous studies has provided comprehensive evidence of the validity of the OSLQ. We assume that self-regulated learning in the MOOC settings differs from the structure suggested by Barnard et al. (2009), for the following reasons.
First, the studies that tested the construct-related validity of the OSLQ using Confirmatory Factor Analysis (CFA) (Barnard et al. 2009; Korkmaz and Kaya 2012; Martinez-Lopez et al. 2017; Fung et al. 2018) utilised rather small samples. None of them meets the minimum criterion of 10 participants per estimated parameter (Schreiber et al. 2006): with 54 parameters to be estimated for the structure of the OSLQ, the minimum sample size required for CFA is 540 participants. Second, the CFA standardised coefficients for the help-seeking dimension reported in Barnard et al. (2009) were much lower for online students than for blended-course students, falling below .40 in the model for online students. Third, only one study (Martinez-Lopez et al. 2017) utilised a sample of MOOC learners to test the validity of the OSLQ, while the majority of studies were based on either traditional or blended samples. Finally, criterion-related validity has been tested as the correlation between the OSLQ’s subscales and academic achievement (Cazan 2014). According to the results, the association of two subscales (goal setting and environment structuring) with academic performance was statistically significant; however, no evidence for the association between the OSLQ’s total score and educational outcomes was presented. Table 1 summarises the methods used in prior studies to support the OSLQ validity.
After reviewing the research designs, methods and results of previous studies, we conclude that there is no strong support for either the construct or the criterion validity of the OSLQ. Therefore, self-regulated learning might have a different structure from the one proposed by Barnard et al. (2009). Based on the literature reviewed, we hypothesise that the help-seeking subscale might not be useful in assessing SRL skills among MOOC learners, and that a five-factor hierarchical model is more appropriate than the six-factor hierarchical model for the current MOOC environment.
Data and methods
The sample consists of MOOC learners who were enrolled in online courses on the National Open Education Platform (NOEP) in 2017 and had previously agreed to receive e-newsletters from the platform. NOEP markets itself as a Russian analogue of MOOC providers like Coursera or edX, offering various courses mainly in the Russian language. Each course posted on the platform is subject to expert analysis. Currently, there are 582 MOOCs created by 16 leading Russian universities. Learners received an email invitation to participate in a web-based online survey and share their experience with MOOCs. The survey was programmed with the EnjoySurvey software; it was anonymous and participation was voluntary. As this study targeted the student population, we filtered out everybody who did not identify as a student currently enrolled at a university.
Descriptive statistics of the sample of MOOC learners
The total number of qualified respondents is 913; we kept only those respondents who completed the questionnaire. The sample is 68% female, the average age falls within the range of 19–22 years, and the majority of the respondents are pursuing a Bachelor degree. Learners who participated in the survey were enrolled in a wide range of MOOCs; the most popular, attracting the largest numbers of learners, were ‘Philosophy’, ‘Marketing’, ‘Programming’, and ‘Game Theory’. The detailed descriptive statistics of the sample are presented in Table 2.
The majority of students in the sample (63%) were enrolled in 1–2 MOOCs in 2017 (see Table 3).
Instrument adaptation procedure
The use of self-regulation strategies was measured with the OSLQ. The original English version of the OSLQ was adapted to the Russian language according to the ITC Guidelines for Translating and Adapting Tests (International Test Commission 2017). Back-translation procedures were adopted to ensure conceptual equivalence across languages. Two translators who are fluent in both the source language (English) and the target language (Russian) and had previous experience in translating survey instruments were selected: one translated the original version into Russian, and the other performed a back-translation into English. Content validity was assessed based on experts’ opinion as to whether the instrument measures what it is supposed to measure. A group of eight experts, members of a research group focused on online learning, discussed the adapted version in a focus-group format. Their aim was to obtain evidence of such features of the instrument as sufficiency, clarity, relevance, and the match between the items and the definition of the construct, controlling for possible biases such as gender, culture, and age (Goodwin and Leech 2003). After this structured procedure, it was decided to remove the following items from the OSLQ:
Goal setting: I set goals to help me manage studying time for my online courses.
Task strategies: I read aloud instructional materials posted online to fight against distractions.
Self-evaluation: I summarise my learning in online courses to examine my understanding of what I have learned.
Self-evaluation: I ask myself a lot of questions about the course material when studying for an online course.
These four items were deemed confusing by the experts because of their vague meaning in the Russian language and logical discrepancies. For example, the meaning of the item ‘I set goals to help me manage studying time for my online courses’ is similar to that of the item ‘I set short-term (daily or weekly) goals as well as long-term goals (monthly or for the semester)’. The omitted ‘self-evaluation’ items differ in essence from the two that were kept: we retained the items which describe students’ communication with their classmates and evaluation of their progress, whereas the two deleted items were considered by the experts to be more of a monitoring strategy in which students summarise or question themselves. The item ‘I read aloud instructional materials posted online to fight against distractions’ was deleted because research shows that the strategy of reading instructions aloud to get focused and comprehend information works for children, in particular school children with attention deficit hyperactivity disorder (US Department of Education 2008), whereas our target audience was university students.
The data analysis was performed in two stages. The first stage checked the OSLQ’s construct-related validity: we tested the structure of the questionnaire utilising Confirmatory Factor Analysis (CFA) and evaluated the internal consistency of the six subscales using the classical reliability index (Cronbach’s alpha). The second stage assessed the OSLQ’s criterion-related validity using independent-samples t-tests.
We applied CFA to test the structure of the questionnaire. We tested two CFA models: the original six-factor hierarchical (second-order) model suggested for the OSLQ in Barnard et al. (2009), and an alternative five-factor hierarchical model, since we hypothesise that the help-seeking subscale might not be useful in assessing SRL skills among MOOC learners.
A hierarchical, or second-order, model suggests that there is a higher-order factor above the lower-order factors (Chen et al. 2012). In the case of the OSLQ, self-regulated learning is the higher-order factor, which accounts for the commonality between the six subscales (goal setting, environment structuring, task strategies, time management, help-seeking, and self-evaluation). The correlation matrix for the 20 items of the OSLQ is presented in Table 4; see Table 5 for the correlations between the first-order subscales.
The correlations between the factors are not high (< .70), suggesting that each subscale forms a separate construct. The correlation matrix of the OSLQ items likewise shows no correlations exceeding .70 between items from different factors.
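As a minimal illustration of this discriminant-validity screen (using simulated scores, not the study’s data), the < .70 rule can be applied to a correlation matrix as follows:

```python
import numpy as np

def flag_high_correlations(scores, labels, threshold=0.70):
    """Return pairs of variables whose absolute Pearson correlation
    exceeds the threshold (a common discriminant-validity screen)."""
    corr = np.corrcoef(scores, rowvar=False)
    flagged = []
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[0]):
            if abs(corr[i, j]) > threshold:
                flagged.append((labels[i], labels[j], round(corr[i, j], 2)))
    return flagged

# Simulated subscale scores for 100 respondents; the second column is
# deliberately constructed to correlate strongly with the first.
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 3))
scores[:, 1] += 2.0 * scores[:, 0]
print(flag_high_correlations(scores, ["GS", "ES", "TS"]))
```

Only the deliberately dependent pair is flagged; in the study’s data no pair crosses the threshold.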
The Stata 15.0 software package was used for the CFA. We chose maximum likelihood (ML) estimation because there was no evidence of excessive non-normality.
First, we estimated the fit between the model and the observed data. Three conventional statistics reflecting model fit were reported: the root mean square error of approximation (RMSEA), the Tucker-Lewis Index (TLI), and the Comparative Fit Index (CFI). Following Byrne (2010) and Schreiber et al. (2006), we treated the fit as acceptable when RMSEA (reported with its confidence interval) was close to or below .06, with .08 as an upper bound, and when TLI and CFI were close to or above .90, with .95 indicating good fit. No post hoc model modifications were made.
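These indices can all be derived from the model and baseline chi-square statistics. The sketch below uses the standard textbook formulas with illustrative chi-square values, not the values estimated in this study:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """RMSEA, CFI and TLI from the fitted model's chi-square (chi2_m, df_m)
    and the baseline (independence) model's chi-square (chi2_b, df_b)."""
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 1e-9)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    return rmsea, cfi, tli

# Illustrative chi-square values for a sample of n = 913:
rmsea, cfi, tli = fit_indices(chi2_m=450.0, df_m=165, chi2_b=5000.0, df_b=190, n=913)
print(f"RMSEA = {rmsea:.3f}, CFI = {cfi:.3f}, TLI = {tli:.3f}")
```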
Second, we calculated the standardised path coefficients from the latent variable constructs to the items, and from the higher order construct to the latent variable constructs.
Third, we determined which model fits the data better. We compared the models using two information criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). According to Schreiber et al. (2006), the smaller these indices are, the better the model fits the data.
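For ML-estimated models, both criteria are simple functions of the log-likelihood and the number of free parameters. The sketch below is illustrative only: the log-likelihoods and parameter counts are our assumptions, chosen so that the resulting values are of the same order as those reported in Table 6.

```python
import math

def aic_bic(loglik, n_params, n_obs):
    """Information criteria for an ML-estimated model; smaller values
    indicate a better trade-off between fit and complexity."""
    aic = -2 * loglik + 2 * n_params
    bic = -2 * loglik + n_params * math.log(n_obs)
    return aic, bic

# Assumed log-likelihoods and parameter counts for two competing models:
aic6, bic6 = aic_bic(loglik=-26713.26, n_params=66, n_obs=913)
aic5, bic5 = aic_bic(loglik=-20998.87, n_params=53, n_obs=913)
print(aic5 < aic6, bic5 < bic6)  # smaller on both -> the smaller model is preferred
```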
One of the commonly used approaches in criterion-related validity research is group comparison, which usually investigates differences in instrument scores across groups of examinees (Goodwin and Leech 2003). The comparison variable should be related to the measured construct; in this case, we rely on prior studies of MOOC completion. Research indicates that learners with a high level of SRL skills perform better than learners with lower levels (Kizilcec et al. 2017; Milligan et al. 2013), and it has been demonstrated that learners who set goals to finish the course are less likely to drop out (Handoko et al. 2019).
To assess the criterion-related validity of the OSLQ, we used students’ plans to finish the course (the question ‘Did you plan to receive the verified certificate at the end of the course?’). We divided students into two groups according to their plans: the first group reported that they planned to receive a verified certificate, while the second group did not. The plan to receive a verified certificate serves as the criterion variable. We performed an independent-samples t-test to compare the mean SRL scores of these groups.
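A minimal sketch of this comparison with SciPy, using simulated subscale scores rather than the survey data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated SRL subscale scores: certificate planners score higher on average.
planned = rng.normal(loc=3.9, scale=0.8, size=500)
no_plan = rng.normal(loc=3.4, scale=0.8, size=400)

t, p = stats.ttest_ind(planned, no_plan)
print(f"t = {t:.2f}, p = {p:.2g}")
```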
Based on the previous research by Barnard et al. (2009), we tested the original six-factor hierarchical model suggested for the OSLQ. The model was specified according to the following criteria: (1) each item had a nonzero loading on the first-order factor it was designed to measure and zero loadings on the other factors; (2) the error terms associated with the items were uncorrelated; (3) the covariance between the six first-order factors was explained by a second-order factor, self-regulated learning.
Figure 1 provides the standardised path coefficients from the latent variable constructs to the items and from the higher-order construct to the latent variable constructs. The values of the fit statistics indicate an unacceptable fit between the model and the observed data: RMSEA = .08 (.08; .09), CFI = .88, and TLI = .86. None of the fit indices meets the cutoff criteria suggested by Byrne (2010) and Schreiber et al. (2006). The results therefore do not support the original six-factor hierarchical model.
The six-factor hierarchical model was then modified by removing the help-seeking subscale. Relying on the research discussed above, which suggests that help-seeking behaviour might not be common among MOOC learners, we hypothesised that a five-factor hierarchical model could fit our data better. The model was specified according to the following criteria: (1) each item had a nonzero loading on the first-order factor it was designed to measure and zero loadings on the other factors; (2) the error terms associated with the items were uncorrelated; (3) the covariance between the five first-order factors was explained by a second-order factor, self-regulated learning.
The values of the fit statistics indicate an acceptable fit between the five-factor hierarchical model and the observed data: RMSEA = .07 (.06; .07), CFI = .94, and TLI = .93. Figure 2 presents the standardised path coefficients from the latent variable constructs to the items and between the constructs. All paths in the second model are significant (p < .001), with standardised values ranging from .59 to .92 from the first-order factors to the items, and from .45 to .94 from the second-order factor to the first-order factors.
Next, we determined which model fits the data better. Table 6 provides the fit indices for each model. The AIC and BIC for the five-factor hierarchical model are smaller: 53,558.51 > 42,103.73 for the AIC and 53,876.41 > 42,359.02 for the BIC. According to these results, the second model fits the data better and gives a better approximation and interpretation of our data on SRL behaviour among MOOC learners. We therefore suggest a redefined model of the OSLQ for MOOC learners, because help-seeking skills appear not to be relevant in this environment.
The classical reliability coefficients (Cronbach’s α) range from 0.72 to 0.90 for the subscales (Table 7). The mean Cronbach’s α is 0.89 for the six-factor model and 0.88 for the five-factor model. High reliability coefficients (α > 0.80) indicate that the questionnaire can be used for both research and diagnostic purposes (Tavakol and Dennick 2011).
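Cronbach’s α is computed from the item variances and the variance of the total score. A self-contained sketch (the data here are simulated, not the survey responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Four simulated items driven by one latent trait plus noise:
rng = np.random.default_rng(7)
trait = rng.normal(size=(200, 1))
items = trait + 0.5 * rng.normal(size=(200, 4))
print(round(cronbach_alpha(items), 2))
```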
Research shows that successful MOOC learners tend to have higher scores on SRL skills (Milligan et al. 2013; Kizilcec et al. 2017). Since we did not have access to data about students’ grades, we asked them whether they planned to obtain the verified certificate for MOOC completion. Students’ answers to the question ‘Did you plan to receive the verified certificate at the end of the course?’ were used as the comparison variable. Independent-samples t-tests for each dimension were conducted to determine whether there is a difference in SRL skills between students who planned to obtain the verified certificate and those who did not (Table 8).
According to the results, the means on all six of the OSLQ’s subscales differ between the two groups of students: the differences between students who planned to obtain the verified certificate and those who did not are statistically significant at the .001 or .01 level. Students who planned to receive the certificate score higher on goal setting, environment structuring, task strategies, time management, help-seeking, and self-evaluation. The difference in subscale means ranges from 0.21 standard deviations for ‘task strategies’ to 0.39 standard deviations for ‘goal setting’. Thus, even though the ‘help-seeking’ dimension might be redundant in MOOCs, it is still associated with higher educational outcomes.
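Differences expressed in standard-deviation units correspond to a pooled-SD effect size (Cohen’s d). A sketch with hypothetical group summaries, not the values in Table 8:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference between two groups using a pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical subscale means/SDs for planners vs non-planners:
d = cohens_d(m1=3.9, s1=0.8, n1=500, m2=3.6, s2=0.8, n2=400)
print(f"d = {d:.2f}")
```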
Discussion and conclusions
The main finding of this study is that the original six-factor model of the OSLQ does not hold in the MOOC environment: the Russian version of the OSLQ has a five-factor hierarchical structure with the second-order factor SRL. The results suggest that the OSLQ can be used to measure learners’ ability to set goals, structure the environment, apply task strategies, manage time, and evaluate their progress in the online environment. However, the subscale ‘help-seeking’ does not work in the current context of MOOCs.
The poor statistics and logical incoherence of the items from the dimension ‘help-seeking’ point to one of the main challenges faced by MOOCs: limited communication between MOOC students and instructors during the learning process. Researchers demonstrate that 94% of learners do not participate in online discussions (Qiu et al. 2016), and 90% of forum activity is revisiting previous threads (Breslow et al. 2013). In addition, one of the reasons learners withdraw from MOOCs is failing to understand the content while having no one to turn to for help (Belanger and Thornton 2013). According to Mahasneh et al. (2012), more than half of students (64%) avoid help-seeking even when they do not understand the material; researchers explain this reluctance to ask for help by students’ fear of being seen as incompetent (Ryan et al. 2001).
The pandemic outbreak, which halted normal face-to-face classes, shook the whole education system. Online platforms and instructors need to be prepared to embrace this challenge by improving the quality of platforms and developing new pedagogical practices. As educators and researchers state, education is not only a question of good presentation and design: elements such as support and guidance are also essential (Shaughnessy et al. 2010). Eventually, online education will have to become a renewed product, with different instructor-learner relations and different ways of using information.
Although this study shows important results, it has some limitations. The results were derived from a sample of Russian MOOC learners. As previous studies show, self-regulated learning skills and strategies can depend on the environment (Schunk 2001; Barnard et al. 2010) and can function differently in different countries; therefore, the results may not generalise to the whole population of MOOC learners. Future research should verify whether the OSLQ works similarly in different cultural contexts. For this purpose, comprehensive evidence of the validity of the OSLQ in other cultural settings should first be gathered under strict statistical requirements (for example, a homogeneous group of respondents and an appropriate sample size). Second, Differential Item Functioning (DIF) analysis can be utilised: DIF appears when respondents with the same level of the measured characteristic have different probabilities of endorsing a given item on a multi-item scale (Zumbo 1999).
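One simple way to screen for uniform DIF is to regress each item on the matched total score plus a group indicator; a sizeable group effect at equal total scores flags the item. The sketch below is a linear-regression analogue of the logistic DIF procedure, run on simulated data with DIF built in:

```python
import numpy as np

def uniform_dif_check(item, total, group):
    """OLS regression of an item score on the total score and a group
    indicator; returns the group coefficient and its t-statistic."""
    X = np.column_stack([np.ones_like(total), total, group])
    beta, *_ = np.linalg.lstsq(X, item, rcond=None)
    resid = item - X @ beta
    sigma2 = resid @ resid / (len(item) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[2], beta[2] / np.sqrt(cov[2, 2])

# Simulated data with built-in DIF: group membership shifts the item score
# even for respondents with the same total score.
rng = np.random.default_rng(1)
total = rng.normal(3.5, 0.6, size=400)
group = rng.integers(0, 2, size=400).astype(float)
item = 0.8 * total + 0.5 * group + rng.normal(0, 0.4, size=400)
b, t_stat = uniform_dif_check(item, total, group)
print(f"group effect = {b:.2f}, t = {t_stat:.1f}")
```

A large |t| for the group coefficient would flag the item for closer inspection; an item free of DIF would show a group effect near zero.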
Further research could analyse the convergent validity of the OSLQ by taking two measures that are supposed to assess the same construct and showing that they are related: for example, the correlation between OSLQ scores and other instruments such as the Academic Self-Regulation Scale (Magno 2010) or the Motivated Strategies for Learning Questionnaire (Pintrich et al. 1993) could be investigated. Predictive validity could be assessed by comparing OSLQ scores with the degree of completion of MOOCs taken right after self-regulated learning skills are measured. Finally, investigating heterogeneous samples, not limited to one category of respondents, would show whether our results hold for other types of MOOC learners.
The datasets generated and/or analysed during the current study are not publicly available due to the restrictions imposed by the funding organisation but are available from the corresponding author on reasonable request.
Baker, R., Dee, T., Evans, B., & John, J. (2018). Bias in online classes: Evidence from a field experiment. CEPA working paper no. 18-03. Stanford Center for Education Policy Analysis. https://cepa.stanford.edu/content/bias-online-classes-evidence-field-experiment.
Barnard, L., Lan, W. Y., To, Y. M, Paton, V. O., & Lai, S. L. (2009). Measuring self-regulation in online and blended learning environments. Internet and Higher Education, 12, 1–6. https://doi.org/10.1016/j.iheduc.2008.10.005.
Barnard, L., Paton, V., & Lan, W. (2010). Profiles in self-regulated learning in the online learning environment. International Review of Research in Open and Distance Learning, 11(1), 61–80. https://doi.org/10.19173/irrodl.v11i1.769.
Belanger, Y., & Thornton, J. (2013). Bioelectricity: A quantitative approach — Duke University’s first MOOC. Technical report, Duke University, Durham, NC.
Bennett, S., Maton, K., & Kervin, L. (2008). The ‘digital natives’ debate: A critical review of the evidence. British Journal of Educational Technology, 39(5), 775–786. https://doi.org/10.1111/j.1467-8535.2007.00793.x.
Bernacki, M. L., Aguilar, A. C., & Byrnes, J. P. (2011). Self-regulated learning and technology enhanced learning environments: An opportunity-propensity analysis. In G. Dettori & D. Persico (Eds.), Fostering self-regulated learning through ICT (pp. 1–26). Hershey: IGI Global.
Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013). Studying learning in the worldwide classroom research into edX's first MOOC. Research & Practice in Assessment, 8, 13–25.
Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming. New York: Routledge.
Cazan, A. M. (2014). Self-regulated learning and academic achievement in the context of online learning environments. In The International Scientific Conference eLearning and Software for Education, 3, 90–95.
Chen, F. F., Hayes, A., Carver, C. S., Laurenceau, J. P., & Zhang, Z. (2012). Modeling general and specific variance in multifaceted constructs: A comparison of the bifactor model to other approaches. Journal of Personality, 80(1), 219–251. https://doi.org/10.1111/j.1467-6494.2011.00739.x.
Fontana, R., Milligan, C., Littlejohn, A., & Margaryan, A. (2015). Measuring self-regulated learning in the workplace. International Journal of Training and Development, 19(1), 32–52. https://doi.org/10.1111/ijtd.12046.
Fung, J. J., Yuen, M., & Yuen, A. H. (2018). Validity evidence for a Chinese version of the online self-regulated learning questionnaire with average students and mathematically talented students. Measurement and Evaluation in Counseling and Development, 51(2), 111–124. https://doi.org/10.1080/07481756.2017.1358056.
Gameel, B. G. (2017). Learner satisfaction with massive open online courses. American Journal of Distance Education, 31(2), 98–111. https://doi.org/10.1080/08923647.2017.1300462.
Goodwin, L. D., & Leech, N. L. (2003). The meaning of validity in the new standards for educational and psychological testing: Implications for measurement courses. Measurement and Evaluation in Counseling and Development, 36(3), 181–191. https://doi.org/10.1080/07481756.2003.11909741.
Handoko, E., Gronseth, S. L., McNeil, S. G., Bonk, C. J., & Robin, B. R. (2019). Goal setting and MOOC completion: A study on the role of self-regulated learning in student performance in massive open online courses. The International Review of Research in Open and Distance Learning, 20(3), 39–58. https://doi.org/10.19173/irrodl.v20i4.4270.
Healy, P. A. (2017). Georgetown’s first six MOOCs: Completion, intention, and gender achievement gaps. Undergraduate Economic Review, 14(1), 1.
International Test Commission. (2017). The ITC guidelines for translating and adapting tests (2nd ed.). Retrieved from http://www.intestcom.org/. Accessed 19 May 2020.
Khalil, H., & Ebner, M. (2014). MOOCs completion rates and possible methods to improve retention—A literature review. In World Conference on Educational Multimedia, Hypermedia and Telecommunications. Chesapeake, VA: AACE.
Kintu, M. J., & Zhu, C. (2016). Student characteristics and learning outcomes in a blended learning environment intervention in a Ugandan University. Electronic Journal of e-Learning, 14(3), 181–195.
Kizilcec, R. F., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: Analyzing learner subpopulations in massive open online courses. In Proceedings of the third international conference on learning analytics and knowledge (pp. 170–179). New York: ACM.
Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2017). Self-regulated learning strategies predict learner behavior and goal attainment in massive open online courses. Computers & Education, 104, 18–33. https://doi.org/10.1016/j.compedu.2016.10.001.
Korkmaz, O., & Kaya, S. (2012). Adapting online self-regulated learning scale into Turkish. Turkish Online Journal of Distance Education, 13(1), 52–67.
Littlejohn, A., Hood, N., Milligan, C., & Mustain, P. (2016). Learning in MOOCs: Motivations and self-regulated learning in MOOCs. The Internet and Higher Education, 29, 40–48. https://doi.org/10.1016/j.iheduc.2015.12.003.
Magno, C. (2010). Assessing academic self-regulated learning among Filipino college students: The factor structure and item fit. The International Journal of Educational and Psychological Assessment, 5, 61–76.
Mahasneh, R. A., Sowan, A. K., & Nassar, Y. H. (2012). Academic help-seeking in online and face-to-face learning environments. E-Learning and Digital Media, 9(2), 196–210. https://doi.org/10.2304/elea.2012.9.2.196.
Maldonado-Mahauad, J., Pérez-Sanagustín, M., Kizilcec, R. F., Morales, N., & Munoz-Gama, J. (2018). Mining theory-based patterns from big data: Identifying self-regulated learning strategies in massive open online courses. Computers in Human Behavior, 80, 179–196. https://doi.org/10.1016/j.chb.2017.11.011.
Martinez-Lopez, R., Yot, C., Tuovila, I., & Perera-Rodríguez, V. H. (2017). Online self-regulated learning questionnaire in a Russian MOOC. Computers in Human Behavior, 75, 966–974. https://doi.org/10.1016/j.chb.2017.06.015.
Milligan, C., & Littlejohn, A. (2014). Supporting professional learning in a massive open online course. The International Review of Research in Open and Distance Learning, 15(5), 197–213. https://doi.org/10.19173/irrodl.v15i5.1855.
Milligan, C., Littlejohn, A., & Margaryan, A. (2013). Patterns of engagement in connectivist MOOCs. Journal of Online Learning and Teaching, 9(2), 149–158.
Moore, M. G. (1989). Three types of interaction. The American Journal of Distance Education, 3(2), 1–6. https://doi.org/10.1080/08923648909526659.
Onah, D. F., & Sinclair, J. E. (2016). A multi-dimensional investigation of self-regulated learning in a blended classroom context: A case study on eLDa MOOC. In International Conference on Interactive Collaborative Learning (pp. 63–85). Springer.
Perna, L. W., Ruby, A., Boruch, R. F., Wang, N., Scull, J., Ahmad, S., & Evans, C. (2014). Moving through MOOCs: Understanding the progression of users in massive open online courses. Educational Researcher, 43(9), 421–432. https://doi.org/10.3102/0013189X14562423.
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813.
Qiu, J., Tang, J., Liu, T. X., Gong, J., Zhang, C., Zhang, Q., & Xue, Y. (2016). Modeling and predicting learning behavior in MOOCs. In Proceedings of the ninth ACM international conference on web search and data mining (pp. 93–102). ACM.
Reich, J. (2014). MOOC completion and retention in the context of student intent. Retrieved from: https://er.educause.edu/articles/2014/12/mooc-completion-and-retention-in-the-context-of-student-intent.
Reich, J., & Ruipérez-Valiente, J. A. (2019). The MOOC pivot. Science, 363(6423), 130–131. https://doi.org/10.1126/science.aav7958.
Rivard, R. (2013). Measuring the MOOC dropout rate. Retrieved from: https://www.insidehighered.com/news/2013/03/08/researchers-explore-who-taking-moocs-and-why-so-many-drop-out.
Ryan, A. M., Pintrich, P. R., & Midgley, C. (2001). Avoiding seeking help in the classroom: Who and why? Educational Psychology Review, 13(2), 93–114. https://doi.org/10.1023/A:1009013420053.
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. https://doi.org/10.3200/JOER.99.6.323-338.
Schunk, D. H. (2001). Social cognitive theory and self-regulated learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.
Schwartz, N. (2020). MOOC providers offer some free course access amid coronavirus outbreak. Retrieved from: https://www.educationdive.com/news/mooc-providers-offer-some-free-course-access-amid-coronavirus-outbreak/574027/.
Shaughnessy, M. F., Fulgham, S. M., & Kirschner, P. (2010). Interview with Paul Kirschner. Educational Technology, 50(4), 47–52.
Sunar, A., White, S., Abdullah, N., & Davis, H. (2017). How learners’ interactions sustain engagement: A MOOC case study. IEEE Transactions on Learning Technologies, 10(4), 475–487. https://doi.org/10.1109/TLT.2016.2633268.
Tabuenca, B., Kalz, M., Drachsler, H., & Specht, M. (2015). Time will tell: The role of mobile learning analytics in self-regulated learning. Computers & Education, 89, 53–74. https://doi.org/10.1016/j.compedu.2015.08.004.
Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach's alpha. International Journal of Medical Education, 2, 53–55. https://doi.org/10.5116/ijme.4dfb.8dfd.
U.S. Department of Education, Office of Special Education and Rehabilitative Services, Office of Special Education Programs. (2008). Teaching children with attention deficit hyperactivity disorder: Instructional strategies and practices. Washington, DC.
Wang, C. H., Shannon, D. M., & Ross, M. E. (2013). Students’ characteristics, self-regulated learning, technology self-efficacy, and course outcomes in online learning. Distance Education, 34(3), 302–323. https://doi.org/10.1080/01587919.2013.835779.
Yuan, L., & Bowel, S. (2013). MOOCs and open education: Implications for higher education. Retrieved from: http://publications.cetis.org.uk/wp-content/uploads/2013/03/MOOCs-and-Open-Education.pdf.
Zimmerman, B. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–17. https://doi.org/10.1207/s15326985ep2501_2.
Zumbo, B. D. (1999). A handbook on the theory and methods of differential item functioning (DIF) (pp. 1–57). Ottawa: National Defense Headquarters.
The authors express their gratitude to Saule Bekova, Research Fellow at the Centre of Sociology of Higher Education, National Research University Higher School of Economics, who coordinated the project “Modern digital educational environment”.
This work was supported by the Ministry of Science and Education of the Russian Federation within the framework of the “Modern digital educational environment” project.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Vilkova, K., Shcheglova, I. Deconstructing self-regulated learning in MOOCs: In search of help-seeking mechanisms. Educ Inf Technol (2020). https://doi.org/10.1007/s10639-020-10244-x
Keywords: Self-regulated learning; Education research