Cognitive load theory (Sweller 1988, 2010; Sweller et al. 1998) is concerned with the learning of complex cognitive tasks, in which learners may be overwhelmed by the number of interactive information elements that need to be processed simultaneously before meaningful learning can commence. The theory focuses on the instructional control of the excessively high load that complex tasks impose on learners’ capacity-limited working memory. Its central tenet is that instruction should be designed so that it presents an optimal level of complexity (i.e., intrinsic load), reduces the working memory load resulting from processes that do not contribute to learning (i.e., ineffective or extraneous load), and optimizes as far as possible the load resulting from processes that foster learning (i.e., germane load).

Worked examples are an instructional format that—compared to conventional problem solving—reduces the ineffective load imposed by the use of weak problem-solving strategies and fosters learning by allowing students to devote their available capacity to studying the solution procedure. An impressive body of research over the last few decades has shown that, for novice learners, worked examples facilitate learning and transfer more effectively than conventional problem solving, and are often more efficient in terms of the time or mental effort they require (e.g., Cooper and Sweller 1987; Paas 1992; Paas and Van Merriënboer 1994; Sweller and Cooper 1985; Van Gog et al. 2006). The advantage of worked examples over problem solving has become known as the ‘worked example effect’ (for reviews, see Atkinson et al. 2000; Sweller et al. 1998).

Two of the contributions to this special issue focus on worked examples. Salden et al. (2010) review a number of studies on the effects of integrating worked examples in cognitive tutoring systems. The authors discuss the “assistance dilemma” (Koedinger and Aleven 2007), which concerns the extent to which learners should be assisted in problem solving, for instance by the use of worked examples. Learning by studying worked examples provides a very high degree of instructional guidance, whereas learning by solving conventional problems involves little or no instructional guidance. Koedinger and Aleven suggested that the very limited guidance during conventional problem solving results in a ‘weak’ control condition; tutored problem solving usually provides additional assistance to students. The studies reviewed by Salden et al. show, however, that worked examples have beneficial effects on learning (in terms of performance, time, or both) even when compared to tutored problem solving in which instructional guidance is available on demand, thereby indicating that the worked example effect is “not an artefact of lousy control conditions” (Schwonke et al. 2009, p. 258). Salden et al. discuss the findings from these studies in terms of the cognitive load imposed by worked examples and tutored problem solving.

The question of how much instructional guidance or assistance should be provided is also addressed by Wittwer and Renkl (2010), though they focus on the amount of guidance provided within worked examples. Wittwer and Renkl conducted a meta-analysis of the effects of providing instructional explanations in worked examples. They found that adding instructional explanations to worked examples had a significant, but small, positive effect on learning, was more helpful for acquiring conceptual than procedural knowledge, and was not necessarily more effective than prompting students to provide self-explanations.

Whereas worked examples provide learners with a written, worked-out solution procedure to study, in animated or video-based modeling examples the solution procedures are demonstrated to learners by a human or animated model (Van Gog and Rummel 2010). Animated examples are increasingly being used in computer-based learning environments (Wouters et al. 2008). However, information in animations or videos is very often transient. Research has shown that information transience may pose a serious challenge to learning from animations that show, for instance, natural, mechanical, or biological processes, or problem-solving procedures. Information may be missed completely if it is not attended to at the right moment, and it needs to be kept in mind while new incoming information is processed simultaneously (Ayres and Paas 2007). For animations showing human movement procedures, this problem is reduced or eliminated (Höffler and Leutner 2007; Van Gog et al. 2009b), presumably because processing such animations is facilitated by the mirror-neuron system (Van Gog et al. 2009b). However, for other types of animations to be effective for learning, the negative effects on cognitive load caused by transience should be counteracted. Some effective means of doing so are cueing, to guide attention to the right place at the right time (De Koning et al. 2009), or segmenting the animation, which “slows the pace of presentation, thereby enabling the learner to carry out essential processing” (Mayer 2005, p. 170).

Spanjers et al. (2010) review the literature on the segmentation effect and note that studies have thus far always incorporated pauses between segments. As a consequence, they point out, the cause of the positive effect of segmentation is not entirely clear: it may be due to the time the pause provides for carrying out essential processing, to the signal the pause gives that a segment has ended, or to both. They go on to discuss each of these explanations in detail, drawing on event segmentation theory (Zacks et al. 2007) and on the time-based resource sharing model of working memory (Barrouillet and Camos 2007). This model could potentially prove useful for further specifying and updating cognitive load theory, as it might be able to explain the underlying mechanisms of other cognitive load effects (e.g., split attention, modality) in more detail.

Hypotheses about cognitive load manipulations can be tested using mental effort ratings and measures of learning, but these are ‘offline’ measures. As a consequence, they yield only indirect evidence, because they do not capture cognitive load at the relevant specific points in time. ‘Online’ process measures of cognitive load can indicate cognitive load at specific times, allowing hypotheses to be tested in more detail (Van Gog et al. 2009a). Given rapid technological advances, new methods for online, continuous cognitive load measurement are coming within educational researchers’ reach.

Antonenko et al. (2010) describe the possibilities offered by electroencephalography (EEG) for providing a continuous and objective measure of cognitive load. They describe which EEG components can be used to assess cognitive load, and review studies on learning from hypertext and multimedia materials in which EEG was applied to measure cognitive load. For example, the fine temporal resolution of EEG allowed Antonenko and Niederhauser (2010) to detect differences in cognitive load at the moment participants were accessing hyperlinks either with or without ‘leads’ providing content previews at particular nodes.

Cognitive load theory has continued to develop over many years as new data and theoretical concepts have become available. The papers in this issue indicate that the pace of change is currently accelerating, providing the theory with considerable depth and breadth.