Abstract
In this chapter, I conclude this book on computer-based scaffolding in science, technology, engineering, and mathematics (STEM) education. I note the overall effect size point estimate for scaffolding (g = 0.46) and compare it to other effect size estimates in the literature. I summarize the wide variation in the contexts in which, and the learner populations with whom, scaffolding is used. I then note the characteristics along which the magnitude of scaffolding’s impact does not vary (contingency, generic versus context-specific support, and intended learning outcome) and those along which it does (the problem-centered model with which scaffolding is used, grade level, and learner characteristics). Finally, I note areas in which more research is needed: motivation scaffolding, scaffolding for students with learning disabilities, and scaffolding in the context of project-based and design-based learning.
6.1 Overall Implications
Despite the attempt by Kirschner, Sweller, and Clark (2006) to frame problem-centered instructional approaches as failures due to their purported lack of instructional guidance, this book has shown that problem-centered instruction paired with computer-based scaffolding is quite effective in promoting strong cognitive outcomes. Scaffolding led to effects that were significantly greater than zero and practically important across the concept, principles, and application assessment levels (Belland, Walker, Kim, & Lefler, In Press; Sugrue, 1995). As strength across such a wide range of assessment levels was not found in meta-analyses of problem-centered instructional approaches by themselves (e.g., Albanese & Mitchell, 1993; Gijbels, Dochy, Van den Bossche, & Segers, 2005; Walker & Leary, 2009), one can conclude that it is the instructional support of computer-based scaffolding that leads to the strong outcomes.
Scaffolding used in the context of problem-centered instruction led to an average effect size of g = 0.46 on cognitive outcomes. This is in line with results from prior meta-analyses, which indicated overall effect sizes of g = 0.53 (Belland, Walker, Olsen, & Leary, 2015) and g = 0.44 (Belland, Walker, Kim, & Lefler, 2014) for computer-based scaffolding in science, technology, engineering, and mathematics (STEM) education. It is below the effect size estimate for step-based intelligent tutoring systems (ES = 0.76) found in a recent review (VanLehn, 2011), but this is to be expected as our review covered a much wider variety of scaffolding treatments. Briefly, computer-based scaffolding has a substantial impact on cognitive outcomes. This is consistent with prior research (Alfieri, Brooks, Aldrich, & Tenenbaum, 2011; Belland et al., 2014; Belland, Walker, et al., 2015; Dochy, Segers, Van den Bossche, & Gijbels, 2003; Gijbels et al., 2005; Hmelo-Silver, Duncan, & Chinn, 2007; Kuhn, 2007; Schmidt, van der Molen, te Winkel, & Wijnen, 2009; Strobel & van Barneveld, 2009; Swanson & Lussier, 2001; Walker & Leary, 2009) and also reflects well on the considerable investment that has been made developing and studying scaffolding.
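For readers less familiar with the metric, Hedges’ g is a standardized mean difference between a treatment and a control group, divided by their pooled standard deviation and multiplied by a small-sample bias correction. The sketch below illustrates the computation; the group means, standard deviations, and sample sizes are purely hypothetical and are not drawn from any study in the meta-analysis.

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g for a treatment group (m1, s1, n1) vs. a control group (m2, s2, n2)."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                  # Cohen's d (uncorrected)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
    return d * correction

# Hypothetical posttest scores: scaffolded group vs. unscaffolded control
g = hedges_g(78, 10, 30, 73, 10, 30)
print(round(g, 2))  # ≈ 0.49, close to the overall estimate reported above
```

With equal standard deviations, the raw mean difference of half a standard deviation yields d = 0.50, which the correction shrinks slightly toward zero for these modest sample sizes.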
Although the intended learning outcomes of computer-based scaffolding include both content-learning and the development of higher-order thinking skills, it is worthwhile to compare its average effect size with that of a wider range of instructional interventions designed to enhance critical thinking skills, and educational technology interventions as a whole. Computer-based scaffolding’s effect (g = 0.46) is greater than the average effect size of educational technology interventions designed to support direct instruction (ES = 0.31) found in a synthesis of meta-analyses of educational technology interventions conducted over the course of 40 years (see Fig. 6.1; Tamim, Bernard, Borokhovski, Abrami, & Schmid, 2011). It is also higher than the effect size estimates of interventions designed to increase critical thinking abilities: ES = 0.195 (Niu, Behar-Horenstein, & Garvan, 2013) and ES = 0.341 (Abrami et al., 2008). It is also higher than the average effect size of educational technology applications designed for mathematics education (ES = 0.13; Cheung & Slavin, 2013) and that of educational technology applications designed for reading instruction (ES = 0.16; Cheung & Slavin, 2012) found in recent reviews. Furthermore, the average effect size for computer-based scaffolding is higher than the median effect size among meta-analyses of interventions in psychological research (g = 0.324; Cafri, Kromrey, & Brannick, 2010). Briefly, the magnitude of the effect of computer-based scaffolding on cognitive outcomes is substantial when compared to instructional interventions that seek to influence similar outcomes, and also compared to other educational technology interventions and interventions in psychological research.
Computer-based scaffolding includes a wide variety of interventions. These range from scaffolding embedded in intelligent tutoring systems, which contain all material to be encountered by students, fade scaffolding based on a comparison of student performance with a model of an idealized student, and allow students to add scaffolding by clicking a hint button (Koedinger & Corbett, 2006; Means & Gott, 1988), to tools used when investigating problems in the outside world, which often do not involve fading or adding (Pea, 2004; Reiser, 2004). This large variation in scaffolding can be traced to the different theoretical frameworks (i.e., activity theory (Leont’ev, 1974; Luria, 1976; Vygotsky, 1978), Adaptive Character of Thought—Rational (ACT-R; Anderson, 1983; Anderson, Matessa, & Lebiere, 1997), and knowledge integration (Linn, 2000; Linn, Clark, & Slotta, 2003)) that were integrated into the relatively atheoretical initial conceptualization of scaffolding (Wood & Wood, 1996). Each of these theoretical frameworks has different views on the nature of learning and the goal of instruction. Still, the characteristics on which scaffolding informed by these different theoretical frameworks varies (contingency, generic versus context-specific support, and intended learning outcome) did not explain any significant differences in cognitive outcomes. This suggests that the effect of scaffolding on cognitive learning outcomes is robust to different intended learning outcomes and to the choice of whether to embed content knowledge in scaffolding, and is largely robust to the presence or absence of scaffolding customization as well as to customization bases.
6.2 How the Meta-Analysis Responds to Persistent Debates in the Scaffolding and Problem-Centered Instruction Literature
This book presents some interesting answers to questions regarding scaffolding customization, the role of context-specific information in scaffolding, and whether scaffolding should be geared toward promoting enhanced content learning or higher-order thinking abilities. I certainly do not consider such questions to be answered definitively, as there is much to be learned when considering these findings alongside findings of empirical studies that were not eligible for inclusion in the meta-analysis. This may be accomplished through the use of meta-synthesis (Bondas & Hall, 2007; Finfgeld, 2003; Thorne, 2004) and other synthesis efforts. Such further work can help address these questions and help scaffolding developers and researchers identify the most effective scaffolding strategies.
6.2.1 Scaffold Customization
Scaffolding scholars from the various scaffolding theoretical traditions have long posited scaffolding customization as a necessary attribute of scaffolding (Collins, Brown, & Newman, 1989; Pea, 2004; Puntambekar & Hübscher, 2005). This was clearly an important part of the original scaffolding definition; scaffolding customization unfolded as teachers dynamically assessed students’ current abilities and adjusted the support that was given accordingly. Scholars from the intelligent tutoring systems tradition have long called for the use of fading and adding (Aleven, Stahl, Schworm, Fischer, & Wallace, 2003; Koedinger & Aleven, 2007), while scholars from the knowledge integration and activity theory traditions have called for the use of fading (Collins et al., 1989; McNeill, Lizotte, Krajcik, & Marx, 2006; Pea, 2004; Puntambekar & Hübscher, 2005). Indeed, some scholars suggested that interventions that do not include fading cannot be called scaffolding (Pea, 2004; Puntambekar & Hübscher, 2005). The count of outcomes in which scaffolding was faded or added versus neither added nor faded indicated that the majority of outcomes were associated with no fading or adding (64.9%), which is consistent with prior research (Lin et al., 2012; Pea, 2004; Puntambekar & Hübscher, 2005). But the meta-analysis suggests that scaffold customization does not influence cognitive outcomes. Further research is needed to fully understand the role of scaffold customization in promoting learning.
Cognitive outcomes are only one way to characterize the success (or lack thereof) of an instructional intervention or feature. Other ways include attitudinal and affective outcomes and the capacity of the intervention to foster transfer, neither of which was the focus of the underlying meta-analysis of this book. Indeed, one of the arguments in favor of fading holds that providing scaffolding support when it is not needed can undermine motivation, thereby decreasing learning and performance (Dillenbourg, 2002). Motivation is a very important influence on learning (Belland, Kim, & Hannafin, 2013; Fredricks, Blumenfeld, & Paris, 2004; Wigfield & Eccles, 2000), and so investigating the influence of scaffolding customization (or lack thereof) on motivation, and consequently on achievement, is important and warrants future research.
One can also examine the extent to which scaffolding leads to transfer, including students’ preparation for future learning (Bransford & Schwartz, 1999) and their ability to recognize similarities between the learning context and new contexts in which the learning can be applied (Lobato, 2003). Transfer is clearly an important goal of problem-centered instruction and forms one of the key pillars in the rationale for such approaches (Hmelo-Silver, 2004). Does scaffolding customization influence transfer? This is an empirical question that warrants future research.
6.2.2 Problem-Centered Instruction and Content Learning
One of the persistent criticisms of problem-centered instructional models is that they do not do a good job of promoting concept-level learning (Kirschner et al., 2006). The thinking goes that problem-based learning does a better job than lecture at promoting learning at the principles and application levels but does not do as well at promoting concept-level learning (Albanese & Mitchell, 1993; Berkson, 1993). This is borne out in most meta-analyses of problem-based learning that break learning down by assessment level (Albanese & Mitchell, 1993; Gijbels et al., 2005; Kalaian, Mullan, & Kasim, 1999; Vernon & Blake, 1993; Walker & Leary, 2009), and has been found to be consistent outside of medical education (Walker & Leary, 2009). One exception to this trend is that problem-based learning tends to lead to stronger long-term concept learning than lecture (Dochy et al., 2003; Strobel & van Barneveld, 2009).
One review found mixed results on inquiry-based learning’s influence on concept learning, finding that student concept learning was predicted by the extent to which students needed to think actively and draw conclusions from data, rather than by the simple use of inquiry-based learning (Minner, Levy, & Century, 2010). Another review indicated that when inquiry-based learning aims at promoting epistemic and conceptual learning, effect sizes tend to be quite low (ES = 0.19) as compared to studies that focused squarely on epistemic learning goals (ES = 0.75) or on procedural, epistemic, and social goals (ES = 0.72; Furtak, Seidel, Iverson, & Briggs, 2012).
In this meta-analysis, scaffolding used in the context of problem-centered instructional models led to average effect sizes at the concept, principles, and application levels of g = 0.40, g = 0.51, and g = 0.44, respectively. These are all substantial effect sizes and mean that scaffolding leads to strong learning outcomes across the three assessment levels (Sugrue, 1995). The findings suggest that by employing computer-based scaffolding along with problem-centered instructional models, one can erase the former liability of problem-centered instructional models—poor concept learning. This makes sense when one considers that inquiry-based learning led to strong effect sizes on content learning when students needed to engage in active thinking (Minner et al., 2010). Scaffolding promotes active thinking on the part of students, and often encourages them to draw conclusions from data (Belland, 2014; Quintana et al., 2004; Reiser, 2004).
6.2.3 Context Specificity
Much work on scaffolding in science has focused on context-specific scaffolding, based on the views that (a) scientific problem-solving is highly context specific (Abd-El-Khalick, 2012; McNeill & Krajcik, 2009; Perkins & Salomon, 1989) and (b) any problem-solving strategy that involves any domain-specific knowledge is itself domain specific (Smith, 2002). Furthermore, some argue that one does not need to teach generic skills, on the premise that individuals will simply pick up the generic skills they need through everyday life (Tricot & Sweller, 2013). That the majority of computer-based scaffolding is context specific was confirmed by the fact that 82% of the outcomes included in the meta-analysis were associated with context-specific scaffolding. Still, the arguments above against scaffolding generic processes appear tenuous, and one would be better served by looking at the empirical evidence to decide whether generic or context-specific scaffolding is more effective.
Much evidence indicates that scientific problem-solving in fact incorporates a mix of domain-specific and generic processes (Klahr & Simon, 1999; Molnár, Greiff, & Csapó, 2013; Perkins & Salomon, 1989; Schunn & Anderson, 1999). For example, evaluating sources can involve domain-specific knowledge, but the underlying strategy can be considered generic (Smith, 2002). There is not a large amount of empirical work addressing the relative effectiveness of generic and context-specific scaffolding. But we addressed it in the meta-analysis, finding no differences in cognitive outcomes between generic and context-specific scaffolding. Therefore, one may envision the need for a mix of generic and context-specific scaffolding that allows the strengths of each scaffolding type to complement each other (Belland, Gu, Armbrust, & Cook, 2013).
6.2.4 Higher-Order Thinking Skills Versus Content Knowledge
Scaffolding has been used to promote the development of higher-order thinking skills (Belland, Glazewski, & Richardson, 2011; Belland, Gu, Armbrust, & Cook, 2015; Kim & Hannafin, 2011) and enhanced content knowledge (Chang & Linn, 2013; Davis & Linn, 2000), two seemingly disparate instructional goals. These differences in instructional goals can be linked to differences in the theoretical bases to which scaffolding is tied. These differences in theoretical bases lead to real differences in scaffolding strategies, such as the use of adding and fading (Koedinger & Aleven, 2007) versus fading alone (Pea, 2004; Puntambekar & Hübscher, 2005), differences in intended learning outcomes, and differences in contexts of use. Such a disparity in intended learning outcome may lead one to think that these are qualitatively different interventions. Yet the definition of scaffolding, which holds that scaffolding needs to extend and enhance student abilities as they engage in authentic problem-solving, was carefully applied, and the meta-analysis indicated that the two scaffolding types led to effect sizes that were statistically the same.
6.2.5 Scaffolding Strategy
Scaffolding can incorporate a variety of approaches according to what processes it aims to support in students, including conceptual, strategic, metacognitive, and motivational scaffolding (Belland, Kim, et al., 2013; Hannafin, Land, & Oliver, 1999). Designers of computer-based scaffolding often chose to support either motivation or cognition (Belland, Kim, et al., 2013), and the effectiveness of metacognitive scaffolding has often been questioned (Belland, Glazewski, & Richardson, 2008; Oliver & Hannafin, 2000). But the meta-analysis indicated that there were no differences in cognitive student outcomes on the basis of scaffolding strategy. Certainly, further research is needed to ascertain if the integration of support for motivation and cognition in the same scaffold leads to stronger learning outcomes than when such support is separated.
6.2.6 Summary
Briefly, decisions about whether to (a) include context-specific content or not, (b) target higher-order thinking abilities or content knowledge, and (c) fade, add, or fade and add scaffolding and on what basis can be made without fear of adversely impacting learning outcomes. Rather, such decisions can be made in the context of learning goals and what is known about the target learner population. And further research is needed to determine if these conclusions apply to education areas other than STEM.
6.3 Other Interesting Findings
6.3.1 Scaffolding’s Effectiveness in Different STEM Disciplines
It was interesting that computer-based scaffolding was equally effective, statistically, in science, technology, engineering, and mathematics. This suggests that scaffolding is a highly effective intervention that is appropriate for use with a wide range of authentic problems across STEM. Clearly, addressing authentic problems is a crucial skill throughout STEM. It would be unwise to think that students will automatically have the skills to do so, or that if they learn declarative content, they will figure out how to apply the content to authentic problems. Furthermore, more primary research is needed on scaffolding in engineering and mathematics education in order to obtain a more precise estimate of the effect of scaffolding used in those contexts. Certainly, computer-based scaffolding would seem to fit well with the types of goals that instructors often have in mathematics and engineering education—to use the tools of the respective disciplines to model and solve problems, both through conceptual solutions and the design of products (Brophy, Klein, Portsmore, & Rogers, 2008; Carr, Bennett, & Strobel, 2012; Lesh & Harel, 2003; Schoenfeld, 1985).
6.3.2 Scaffolding’s Effectiveness by Grade Level
Next, it was interesting that scaffolding has come to be used at many different educational levels, and the largest effect sizes were among graduate and college learners. This is indeed a large expansion of an instructional method originally proposed to describe how adults could help toddlers learn to construct pyramids with wooden blocks. It also brings to light an important consideration: the distance between a more capable other and the learner in graduate education is much less than in preschool. There is an expectation that preschool students think about problems in qualitatively different ways than do adults (Inhelder & Piaget, 1955), and so the metaphor of scaffolding in which a more capable other extends and enhances student cognition makes intuitive sense. But the hope is that graduate students gradually begin to think about pertinent problems in the same general manner as their professors. In this way, it may be difficult to apply the scaffolding metaphor in an intuitive manner to graduate education. Further research is needed to explore the role of scaffolding in graduate education and how it differs from scaffolding used in the context of other education levels.
6.4 Directions for Future Research
This book also suggests directions for future research. In particular, more research is needed on motivation scaffolding and on scaffolding in the context of design-based learning and project-based learning. With the exception of design-based learning, these areas were associated with particularly large effect size point estimates, but one could not have great confidence in the estimates due to small sample sizes. For design-based learning, the point estimate was low relative to other contexts of use.
Supporting motivation through scaffolding has often been an afterthought when one desires to enhance cognitive skills (Belland, Kim, et al., 2013; Rienties et al., 2012), and accordingly only one article on motivation scaffolding met the inclusion criteria, one of which was that the study had to measure cognitive outcomes. But its outcomes had very large effect sizes. Furthermore, theory suggests that scaffolds that support motivational and cognitive aspects of student work are likely to be more effective than scaffolds that focus solely on cognitive factors (Belland, Kim, et al., 2013). This may indicate that (a) if more scaffolding is designed to enhance motivation alongside cognitive outcomes, one may find very strong effects, and (b) researchers would be advised to measure cognitive outcomes resulting from the use of existing motivation scaffolds (Brophy, 1999).
In brief, all of these areas would seem to benefit from more primary research, both to improve the precision of estimates of scaffolding’s effect on cognitive outcomes and to potentially learn more about a promising way to help students develop the skills they need to succeed in authentic instructional approaches (Belland, 2014; Hmelo-Silver et al., 2007) and the twenty-first century workforce (Carnevale & Desrochers, 2003; Gu & Belland, 2015).
Finally, it is important to investigate the relative impact of scaffolding characteristics and contexts of use on cognitive outcomes in non-STEM areas (Brush & Saye, 2001; Proctor, Dalton, & Grisham, 2007). Learning outcomes in these areas are clearly important, and enhancing them would help students be better prepared not only for careers in STEM but also for the twenty-first century economy in general (Gu & Belland, 2015).
Instructional scaffolding is an effective intervention that can help students perform half a standard deviation higher than they otherwise would have (Belland et al., In Press, 2014; Belland, Walker, et al., 2015; VanLehn, 2011). Scaffolding led to effects that were statistically greater than zero across education levels ranging from primary to adult. The effect size estimate for middle-level learners was lower than that for adult students but still compared favorably to similar instructional interventions. Scaffolding also led to effect size estimates that were statistically significantly greater than zero across a range of learner populations, from underrepresented and underperforming to low-income, traditional, and high-performing students. However, underperforming students had a lower effect size estimate than traditional students. Scaffolding also had consistently positive effects across the instructional models with which it was used. Furthermore, scaffolding led to positive effect size estimates across science, technology, engineering, and mathematics education. Scaffolding’s sizable impact on cognitive outcomes was largely consistent across assessment levels, with the caveat that effect sizes were higher when learning was assessed at the principles level than at the concept or application level.
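To make "half a standard deviation" concrete: if score distributions are assumed to be roughly normal, an overall effect of g = 0.46 places the average scaffolded student at about the 68th percentile of the unscaffolded comparison group. This interpretive conversion (not part of the meta-analysis itself) can be checked with a few lines of standard-library code:

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution, computed via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Percentile of the comparison-group distribution reached by the
# average scaffolded student, for the overall estimate g = 0.46
print(round(normal_cdf(0.46) * 100))  # ≈ 68
```

This kind of translation (sometimes called Cohen's U3) assumes normality and equal variances, so it should be read as a rough aid to interpretation rather than a precise claim.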
Scaffolding had a positive effect size estimate across customization types (i.e., fading, adding, fading/adding, or none), customization bases (i.e., performance-based, self-selected, and none), and whether or not context-specific information was embedded in the scaffolding. There were no significant differences among these moderators. Furthermore, the effect size estimate was consistently positive across scaffolding intervention types (i.e., conceptual, metacognitive, strategic, and motivation), and there were no significant differences in this categorization.
6.5 Conclusion
Computer-based scaffolding is a highly effective intervention that leads to strong effect sizes that are statistically significantly greater than zero across contexts of use, intended learning outcomes, and scaffolding characteristics (Belland et al., In Press). Scaffolding is particularly well positioned to help students succeed in the problem-centered instructional approaches encouraged by the Next Generation Science Standards and Common Core Standards (Achieve, 2013; Krajcik, Codere, Dahsah, Bayer, & Mun, 2014; McLaughlin & Overturf, 2012; National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010; National Research Council, 2012). It can do this by extending students’ abilities in the following areas: argumentation (Belland, 2010; Cho & Jonassen, 2002), modeling (Buckland & Chinn, 2010; Fretz et al., 2002), problem-solving (Ge & Land, 2003; Raes, Schellens, De Wever, & Vanderhoven, 2012), and forming coherent mental models to describe natural phenomena (Clark & Linn, 2013; Linn, 2000). As such, computer-based scaffolding is a timely intervention that raises the likelihood that problem-centered models will be successful. Research outlined in this book can contribute to an understanding of the scaffolding goals, strategies, and contexts of use that are associated with the strongest cognitive learning outcomes.
Results indicate differences in effect sizes based on several characteristics. But in most of these cases, effect sizes for the levels of a characteristic associated with lower estimates were still substantial and significantly greater than zero. For example, scaffolding had the highest effect sizes when learning was assessed at the principles level, but effect sizes were statistically greater than zero and of substantial magnitude across the concept, principles, and application levels.
Results also help scaffolding researchers learn what scaffolding characteristics do not lead to differences in effect sizes—scaffolding customization, generic or context-specific nature of support, scaffolding function (e.g., conceptual and strategic), and whether scaffolding was designed to enhance content learning or higher-order skills.
The material covered in this book can be parlayed into stronger scaffolding designs. Further research should contribute to a greater understanding of the conditions under which, and the strategies with which, scaffolding leads to strong learning outcomes. Future research should also investigate how to extend scaffolding’s reach to benefit underrepresented groups in STEM, a very important goal (Ceci, Williams, & Barnett, 2009; Syed, Azmitia, & Cooper, 2011; Thoman, Smith, Brown, Chase, & Lee, 2013). This could be pursued through a combination of strategies: examining the differences between scaffolds that work well among underrepresented groups and those that are not as effective; examining the literature on designing effective instructional supports for members of underrepresented groups (Cuevas, Fiore, & Oser, 2002; Marra, Peterson, & Britsch, 2008); examining the literature on universal design for learning (Rao, Ok, & Bryant, 2014; Scott, McGuire, & Shaw, 2003); and examining whether there are differences in how students from underrepresented groups use scaffolds that could explain lower effectiveness (Belland & Drake, 2013).
References
Abd-El-Khalick, F. (2012). Examining the sources for our understandings about science: Enduring conflations and critical issues in research on nature of science in science education. International Journal of Science Education, 34(3), 353–374. http://doi.org/10.1080/09500693.2011.629013.
Abrami, P. C., Bernard, R. M., Borokhovski, E., Wade, A., Surkes, M. A., Tamim, R., & Zhang, D. (2008). Instructional interventions affecting critical thinking skills and dispositions: A stage 1 meta-analysis. Review of Educational Research, 78(4), 1102–1134. http://doi.org/10.3102/0034654308326084.
Achieve. (2013). Next generation science standards. http://www.nextgenscience.org/next-generation-science-standards. Accessed 8 Aug 2013.
Albanese, M. A., & Mitchell, S. (1993). Problem-based learning—A review of literature on its outcomes and implementation issues. Academic Medicine, 68(1), 52–81.
Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking and help design in interactive learning environments. Review of Educational Research, 73(3), 277–320. http://doi.org/10.3102/00346543073003277.
Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2011). Does discovery-based instruction enhance learning? Journal of Educational Psychology, 103(1), 1–18. http://doi.org/10.1037/a0021017.
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA, USA: Harvard University Press.
Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A theory of higher level cognition and its relation to visual attention. Human-Computer Interaction, 12(4), 439–462. http://doi.org/10.1207/s15327051hci1204_5.
Belland, B. R. (2010). Portraits of middle school students constructing evidence-based arguments during problem-based learning: The impact of computer-based scaffolds. Educational Technology Research and Development, 58(3), 285–309. http://doi.org/10.1007/s11423-009-9139-4.
Belland, B. R. (2014). Scaffolding: Definition, current debates, and future directions. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (4th ed., pp. 505–518). New York: Springer.
Belland, B. R., & Drake, J. (2013). Toward a framework on how affordances and motives can drive different uses of computer-based scaffolds: Theory, evidence, and design implications. Educational Technology Research & Development, 61, 903–925. http://doi.org/10.1007/s11423-013-9313-6.
Belland, B. R., Glazewski, K. D., & Richardson, J. C. (2008). A scaffolding framework to support the construction of evidence-based arguments among middle school students. Educational Technology Research and Development, 56(4), 401–422. http://doi.org/10.1007/s11423-007-9074-1.
Belland, B. R., Glazewski, K. D., & Richardson, J. C. (2011). Problem-based learning and argumentation: Testing a scaffolding framework to support middle school students’ creation of evidence-based arguments. Instructional Science, 39(5), 667–694. http://doi.org/10.1007/s11251-010-9148-z.
Belland, B. R., Gu, J., Armbrust, S., & Cook, B. (2013). Using generic and context-specific scaffolding to support authentic science inquiry. In Proceedings of the IADIS International Conference on Cognition and Exploratory Learning in Digital Age (CELDA 2013) (pp. 185–192). Fort Worth, TX, USA: IADIS.
Belland, B. R., Kim, C., & Hannafin, M. (2013). A framework for designing scaffolds that improve motivation and cognition. Educational Psychologist, 48(4), 243–270. http://doi.org/10.1080/00461520.2013.838920.
Belland, B. R., Walker, A., Kim, N., & Lefler, M. (2014). A preliminary meta-analysis on the influence of scaffolding characteristics and study and assessment quality on cognitive outcomes in STEM education. Presented at the 2014 Annual Meeting of the Cognitive Science Society, Québec City, Canada.
Belland, B. R., Gu, J., Armbrust, S., & Cook, B. (2015). Scaffolding argumentation about water quality: A mixed method study in a rural middle school. Educational Technology Research & Development, 63(3), 325–353. http://doi.org/10.1007/s11423-015-9373-x.
Belland, B. R., Walker, A., Olsen, M. W., & Leary, H. (2015). A pilot meta-analysis of computer-based scaffolding in STEM education. Educational Technology and Society, 18(1), 183–197.
Belland, B. R., Walker, A. E., Kim, N., & Lefler, M. (In Press). Synthesizing results from empirical research on computer-based scaffolding in STEM education: A meta-analysis. Review of Educational Research.
Berkson, L. (1993). Problem-based learning: Have the expectations been met? Academic Medicine, 68(10), S79–S88.
Bondas, T., & Hall, E. O. C. (2007). Challenges in approaching metasynthesis research. Qualitative Health Research, 17(1), 113–121. http://doi.org/10.1177/1049732306295879.
Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24, 61–100. http://doi.org/10.2307/1167267.
Brophy, J. E. (1999). Toward a model of the value aspects of motivation in education: Developing appreciation for particular learning domains and activities. Educational Psychologist, 34(2), 75–85. http://doi.org/10.1207/s15326985ep3402_1.
Brophy, S., Klein, S., Portsmore, M., & Rogers, C. (2008). Advancing engineering education in P-12 classrooms. Journal of Engineering Education, 97(3), 369–387. http://doi.org/10.1002/j.2168-9830.2008.tb00985.x.
Brush, T., & Saye, J. (2001). The use of embedded scaffolds with hypermedia-supported student-centered learning. Journal of Educational Multimedia and Hypermedia, 10(4), 333–356.
Buckland, L. A., & Chinn, C. A. (2010). Model-evidence link diagrams: A scaffold for model-based reasoning. In Proceedings of the 9th International Conference of the Learning Sciences—Volume 2 (pp. 449–450). Chicago: International Society of the Learning Sciences. http://dl.acm.org/citation.cfm?id=1854509.1854741.
Cafri, G., Kromrey, J. D., & Brannick, M. T. (2010). A meta-meta-analysis: Empirical review of statistical power, type I error rates, effect sizes, and model selection of meta-analyses published in psychology. Multivariate Behavioral Research, 45(2), 239–270. http://doi.org/10.1080/00273171003680187.
Carnevale, A. P., & Desrochers, D. M. (2003). Preparing students for the knowledge economy: What school counselors need to know. Professional School Counseling, 6(4), 228–236.
Carr, R. L., Bennett, L. D., & Strobel, J. (2012). Engineering in the K-12 STEM standards of the 50 U.S. states: An analysis of presence and extent. Journal of Engineering Education, 101(3), 539–564. http://doi.org/10.1002/j.2168-9830.2012.tb00061.x.
Ceci, S. J., Williams, W. M., & Barnett, S. M. (2009). Women’s underrepresentation in science: Sociocultural and biological considerations. Psychological Bulletin, 135(2), 218–261. http://doi.org/10.1037/a0014412.
Chang, H.-Y., & Linn, M. C. (2013). Scaffolding learning from molecular visualizations. Journal of Research in Science Teaching, 50(7), 858–886. http://doi.org/10.1002/tea.21089.
Cheung, A. C. K., & Slavin, R. E. (2012). How features of educational technology applications affect student reading outcomes: A meta-analysis. Educational Research Review, 7(3), 198–215. http://doi.org/10.1016/j.edurev.2012.05.002.
Cheung, A. C. K., & Slavin, R. E. (2013). The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis. Educational Research Review, 9, 88–113. http://doi.org/10.1016/j.edurev.2013.01.001.
Cho, K., & Jonassen, D. H. (2002). The effects of argumentation scaffolds on argumentation and problem-solving. Educational Technology Research and Development, 50(3), 5–22. http://doi.org/10.1007/BF02505022.
Clark, D. B., & Linn, M. C. (2013). The knowledge integration perspective: Connections across research and education. In S. Vosniadou (Ed.), International handbook of research on conceptual change (pp. 520–538). New York: Routledge.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ, USA: Lawrence Erlbaum Associates.
Cuevas, H. M., Fiore, S. M., & Oser, R. L. (2002). Scaffolding cognitive and metacognitive processes in low verbal ability learners: Use of diagrams in computer-based training environments. Instructional Science, 30(6), 433–464. http://doi.org/10.1023/A:1020516301541.
Davis, E. A., & Linn, M. C. (2000). Scaffolding students’ knowledge integration: Prompts for reflection in KIE. International Journal of Science Education, 22, 819–837. http://doi.org/10.1080/095006900412293.
Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL? (pp. 61–91). Heerlen, NE: Open Universiteit Nederland.
Dochy, F., Segers, M., Van den Bossche, P., & Gijbels, D. (2003). Effects of problem-based learning: A meta-analysis. Learning and Instruction, 13(5), 533–568. http://doi.org/10.1016/S0959-4752(02)00025-7.
Finfgeld, D. L. (2003). Metasynthesis: The state of the art—so far. Qualitative Health Research, 13(7), 893–904. http://doi.org/10.1177/1049732303253462.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109. http://doi.org/10.3102/00346543074001059.
Fretz, E. B., Wu, H.-K., Zhang, B., Davis, E. A., Krajcik, J. S., & Soloway, E. (2002). An investigation of software scaffolds supporting modeling practices. Research in Science Education, 32(4), 567–589. http://doi.org/10.1023/A:1022400817926.
Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research, 82(3), 300–329. http://doi.org/10.3102/0034654312457206.
Ge, X., & Land, S. M. (2003). Scaffolding students’ problem-solving processes in an ill-structured task using question prompts and peer interactions. Educational Technology Research and Development, 51(1), 21–38. http://doi.org/10.1007/BF02504515.
Gijbels, D., Dochy, F., Van den Bossche, P., & Segers, M. (2005). Effects of problem-based learning: A meta-analysis from the angle of assessment. Review of Educational Research, 75(1), 27–61. http://doi.org/10.3102/00346543075001027.
Gu, J., & Belland, B. R. (2015). Preparing students with 21st century skills: Integrating scientific knowledge, skills, and epistemic beliefs in middle school science. In X. Ge, D. Ifenthaler, & J. M. Spector (Eds.), Full STEAM ahead—Emerging technologies for STEAM (Vol. 2). New York: Springer.
Hannafin, M., Land, S., & Oliver, K. (1999). Open-ended learning environments: Foundations, methods, and models. In C. M. Reigeluth (Ed.), Instructional design theories and models: Volume II: A new paradigm of instructional theory (pp. 115–140). Mahwah, NJ, USA: Lawrence Erlbaum Associates.
Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235–266. http://doi.org/10.1023/B:EDPR.0000034022.16470.f3.
Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107. http://doi.org/10.1080/00461520701263368.
Inhelder, B., & Piaget, J. (1955). De la logique de l’enfant à la logique de l’adolescent: Essai sur la construction des structures opératoires formelles [The growth of logical thinking from childhood to adolescence: An essay on the construction of formal operational structures]. Paris: Presses Universitaires de France.
Kalaian, H. A., Mullan, P. B., & Kasim, R. M. (1999). What can studies of problem-based learning tell us? Synthesizing and modeling PBL effects on National Board of Medical Examination performance: Hierarchical linear modeling meta-analytic approach. Advances in Health Sciences Education: Theory and Practice, 4(3), 209–221. http://doi.org/10.1023/A:1009871001258.
Kim, M., & Hannafin, M. (2011). Scaffolding 6th graders’ problem solving in technology-enhanced science classrooms: A qualitative case study. Instructional Science, 39(3), 255–282. http://doi.org/10.1007/s11251-010-9127-4.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. http://doi.org/10.1207/s15326985ep4102_1.
Klahr, D., & Simon, H. A. (1999). Studies of scientific discovery: Complementary approaches and convergent findings. Psychological Bulletin, 125(5), 524–543. http://doi.org/10.1037/0033-2909.125.5.524.
Koedinger, K. R., & Aleven, V. (2007). Exploring the assistance dilemma in experiments with cognitive tutors. Educational Psychology Review, 19(3), 239–264. http://doi.org/10.1007/s10648-007-9049-0.
Koedinger, K. R., & Corbett, A. (2006). Cognitive tutors: Technology bringing learning sciences to the classroom. In K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 61–78). Cambridge, UK: Cambridge University Press.
Krajcik, J., Codere, S., Dahsah, C., Bayer, R., & Mun, K. (2014). Planning instruction to meet the intent of the next generation science standards. Journal of Science Teacher Education, 25(2), 157–175. http://doi.org/10.1007/s10972-014-9383-2.
Kuhn, D. (2007). Is direct instruction an answer to the right question? Educational Psychologist, 42(2), 109–113. http://doi.org/10.1080/00461520701263376.
Leont’ev, A. N. (1974). The problem of activity in psychology. Soviet Psychology, 13(2), 4–33. http://doi.org/10.2753/RPO1061-040513024.
Lesh, R., & Harel, G. (2003). Problem solving, modeling, and local conceptual development. Mathematical Thinking and Learning, 5(2–3), 157–189. http://doi.org/10.1080/10986065.2003.9679998.
Lin, T.-C., Hsu, Y.-S., Lin, S.-S., Changlai, M.-L., Yang, K.-Y., & Lai, T.-L. (2012). A review of empirical evidence on scaffolding for science education. International Journal of Science and Mathematics Education, 10(2), 437–455. http://doi.org/10.1007/s10763-011-9322-z.
Linn, M. C. (2000). Designing the knowledge integration environment. International Journal of Science Education, 22(8), 781–796. http://doi.org/10.1080/095006900412275.
Linn, M. C., Clark, D., & Slotta, J. D. (2003). WISE design for knowledge integration. Science Education, 87(4), 517–538. http://doi.org/10.1002/sce.10086.
Lobato, J. (2003). How design experiments can inform a rethinking of transfer and vice versa. Educational Researcher, 32(1), 17–20. http://doi.org/10.3102/0013189X032001017.
Luria, A. R. (1976). Cognitive development: Its cultural and social foundations. (M. Cole, Ed., M. Lopez-Morillas & L. Solotaroff, Trans.). Cambridge, MA, USA: Harvard University Press.
Marra, R. M., Peterson, K., & Britsch, B. (2008). Collaboration as a means to building capacity: Results and future directions of the National Girls Collaborative Project. Journal of Women and Minorities in Science and Engineering, 14(2), 119–140. http://doi.org/10.1615/JWomenMinorScienEng.v14.i2.10.
McLaughlin, M., & Overturf, B. J. (2012). The common core: Insights into the K-5 standards. The Reading Teacher, 66(2), 153–164. http://doi.org/10.1002/TRTR.01115.
McNeill, K. L., & Krajcik, J. (2009). Synergy between teacher practices and curricular scaffolds to support students in using domain-specific and domain-general knowledge in writing arguments to explain phenomena. Journal of the Learning Sciences, 18(3), 416–460. http://doi.org/10.1080/10508400903013488.
McNeill, K. L., Lizotte, D. J., Krajcik, J., & Marx, R. W. (2006). Supporting students’ construction of scientific explanations by fading scaffolds in instructional materials. Journal of the Learning Sciences, 15(2), 153–191. http://doi.org/10.1207/s15327809jls1502_1.
Means, B., & Gott, S. P. (1988). Cognitive task analysis as a basis for tutor development: Articulating abstract knowledge representations. In J. Psotka, L. D. Massey, & S. A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned (pp. 35–58). Hillsdale, NJ, USA: Lawrence Erlbaum Associates.
Minner, D. D., Levy, A. J., & Century, J. (2010). Inquiry-based science instruction: What is it and does it matter? Results from a research synthesis years 1984 to 2002. Journal of Research in Science Teaching, 47(4), 474–496. http://doi.org/10.1002/tea.20347.
Molnár, G., Greiff, S., & Csapó, B. (2013). Inductive reasoning, domain specific and complex problem solving: Relations and development. Thinking Skills and Creativity, 9, 35–45. http://doi.org/10.1016/j.tsc.2013.03.002.
National Governors Association Center for Best Practices, & Council of Chief State School Officers. (2010). Common core state standards. http://www.corestandards.org/the-standards.
National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC, USA: National Academies Press. http://www.nap.edu/catalog/13165/a-framework-for-k-12-science-education-practices-crosscutting-concepts.
Niu, L., Behar-Horenstein, L. S., & Garvan, C. W. (2013). Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educational Research Review, 9, 114–128. http://doi.org/10.1016/j.edurev.2012.12.002.
Oliver, K., & Hannafin, M. J. (2000). Student management of web-based hypermedia resources during open-ended problem solving. Journal of Educational Research, 94, 75–92. http://doi.org/10.1080/00220670009598746.
Pea, R. D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. Journal of the Learning Sciences, 13(3), 423–451. http://doi.org/10.1207/s15327809jls1303_6.
Perkins, D. N., & Salomon, G. (1989). Are cognitive skills context-bound? Educational Researcher, 18(1), 16–25. http://doi.org/10.3102/0013189X018001016.
Proctor, C. P., Dalton, B., & Grisham, D. L. (2007). Scaffolding English language learners and struggling readers in a universal literacy environment with embedded strategy instruction and vocabulary support. Journal of Literacy Research, 39(1), 71–93. http://doi.org/10.1080/10862960709336758.
Puntambekar, S., & Hübscher, R. (2005). Tools for scaffolding students in a complex learning environment: What have we gained and what have we missed? Educational Psychologist, 40, 1–12. http://doi.org/10.1207/s15326985ep4001_1.
Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., Kyza, E., Edelson, D., & Soloway, E. (2004). A scaffolding design framework for software to support science inquiry. Journal of the Learning Sciences, 13(3), 337–386. http://doi.org/10.1207/s15327809jls1303_4.
Raes, A., Schellens, T., De Wever, B., & Vanderhoven, E. (2012). Scaffolding information problem solving in web-based collaborative inquiry learning. Computers & Education, 59(1), 82–94. http://doi.org/10.1016/j.compedu.2011.11.010.
Rao, K., Ok, M. W., & Bryant, B. R. (2014). A review of research on universal design educational models. Remedial and Special Education, 35(3), 153–166. http://doi.org/10.1177/0741932513518980.
Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring and problematizing student work. Journal of the Learning Sciences, 13(3), 273–304. http://doi.org/10.1207/s15327809jls1303_2.
Rienties, B., Giesbers, B., Tempelaar, D., Lygo-Baker, S., Segers, M., & Gijselaers, W. (2012). The role of scaffolding and motivation in CSCL. Computers & Education, 59(3), 893–906. http://doi.org/10.1016/j.compedu.2012.04.010.
Schmidt, H. G., van der Molen, H. T., te Winkel, W. W. R., & Wijnen, W. H. F. W. (2009). Constructivist, problem-based learning does work: A meta-analysis of curricular comparisons involving a single medical school. Educational Psychologist, 44(4), 227–249. http://doi.org/10.1080/00461520903213592.
Schoenfeld, A. H. (1985). Mathematical problem solving. Orlando, FL, USA: Academic Press.
Schunn, C. D., & Anderson, J. R. (1999). The generality/specificity of expertise in scientific reasoning. Cognitive Science, 23(3), 337–370. http://doi.org/10.1016/S0364-0213(99)00006-3.
Scott, S. S., McGuire, J. M., & Shaw, S. F. (2003). Universal design for instruction: A new paradigm for adult instruction in postsecondary education. Remedial and Special Education, 24(6), 369–379. http://doi.org/10.1177/07419325030240060801.
Smith, G. (2002). Are there domain-specific thinking skills? Journal of Philosophy of Education, 36(2), 207–227. http://doi.org/10.1111/1467-9752.00270.
Strobel, J., & van Barneveld, A. (2009). When is PBL more effective? A meta-synthesis of meta-analyses comparing PBL to conventional classrooms. Interdisciplinary Journal of Problem-Based Learning, 3(1), 44–58. http://doi.org/10.7771/1541-5015.1046.
Sugrue, B. (1995). A theory-based framework for assessing domain-specific problem-solving ability. Educational Measurement: Issues and Practice, 14(3), 29–35. http://doi.org/10.1111/j.1745-3992.1995.tb00865.x.
Swanson, H. L., & Lussier, C. M. (2001). A selective synthesis of the experimental literature on dynamic assessment. Review of Educational Research, 71(2), 321–363. http://doi.org/10.3102/00346543071002321.
Syed, M., Azmitia, M., & Cooper, C. R. (2011). Identity and academic success among underrepresented ethnic minorities: An interdisciplinary review and integration. Journal of Social Issues, 67(3), 442–468. http://doi.org/10.1111/j.1540-4560.2011.01709.x.
Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4–28. http://doi.org/10.3102/0034654310393361.
Thoman, D. B., Smith, J. L., Brown, E. R., Chase, J., & Lee, J. Y. K. (2013). Beyond performance: A motivational experiences model of stereotype threat. Educational Psychology Review, 25(2), 211–243. http://doi.org/10.1007/s10648-013-9219-1.
Thorne, S. (2004). Qualitative metasynthesis: Reflections on methodological orientation and ideological agenda. Qualitative Health Research, 14(10), 1342–1365. http://doi.org/10.1177/1049732304269888.
Tricot, A., & Sweller, J. (2013). Domain-specific knowledge and why teaching generic skills does not work. Educational Psychology Review, 26(2), 265–283. http://doi.org/10.1007/s10648-013-9243-1.
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. http://doi.org/10.1080/00461520.2011.611369.
Vernon, D. T. A., & Blake, R. L. (1993). Does problem-based learning work? A meta-analysis of evaluative research. Academic Medicine, 68, 550–563.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA, USA: Harvard University Press.
Walker, A., & Leary, H. (2009). A problem based learning meta analysis: Differences across problem types, implementation types, disciplines, and assessment levels. Interdisciplinary Journal of Problem-Based Learning, 3(1), 12–43. http://doi.org/10.7771/1541-5015.1061.
Wigfield, A., & Eccles, J. S. (2000). Expectancy–value theory of achievement motivation. Contemporary Educational Psychology, 25(1), 68–81. http://doi.org/10.1006/ceps.1999.1015.
Wood, D., & Wood, H. (1996). Vygotsky, tutoring and learning. Oxford Review of Education, 22(1), 5–16. http://doi.org/10.1080/0305498960220101.
Rights and permissions
Open Access This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the work’s Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work’s Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.
Copyright information
© 2017 The Author(s)
Belland, B.R. (2017). Conclusion. In: Instructional Scaffolding in STEM Education. Springer, Cham. https://doi.org/10.1007/978-3-319-02565-0_6
Print ISBN: 978-3-319-02564-3
Online ISBN: 978-3-319-02565-0