Study context and participants
The context of the study was a massive open online course (MOOC) entitled “Sustainability in Everyday Life,”Footnote 2 offered by Chalmers University of Technology between Aug 29, 2016, and Oct 16, 2016, on the EdX platform. The course was not part of any university program, required no particular prior knowledge, and was open and free of charge to anyone with internet access; a diploma was issued only upon completion. This MOOC was chosen for the study because of its relevant course content and the possibility of reaching a large number of respondents.
The sustainability MOOC consisted of five modules, or themes: globalization, climate, food, energy, and chemicals. Performance on the different kinds of SF tasks was assessed during the climate module, directly after a general introductory video on climate change that did not address the knowledge tested by the SF tasks. A question assessing climate policy support was included in the pre-course survey, i.e., before the students were introduced to any course content. To motivate task completion, the SF tasks awarded points that counted toward the course examination regardless of performance.
Of the 3540 participants enrolled in the course, 300 started the climate change module where the SF tasks were placed. Of these, 214 participated in the study by completing all of the SF tasks. A total of 49 countries were represented in the sample, with most participants from the EU/EEA (58), the USA (25), India (11), and Mexico (9); see the supplementary material for the full list. The sample included 119 females and 77 males (18 participants did not disclose their gender), and the participants’ average age was 38 years. Of the 92% who stated their highest attained educational level, 81% had a bachelor’s degree or higher. Admittedly, the high average education level, together with the fact that the participants had opted to take a course in sustainability, implies that our participants do not constitute a representative sample of the general public (see the supplementary material for more information on the course context and participants).
Study design
In this section, the overall design of the study is described along with the design of the tasks; in the next section we explain—by drawing on a typology of knowledge—how tasks were designed to assess different types of knowledge. Table 1 depicts the overall design of the study, summarizing the different tasks (all tasks were completed online) and the order in which they were completed—the five steps of the study design.
Table 1 An overview of the study design, describing the tasks’ order and format, the types of knowledge assessed, and the number of participants that completed each task

Prior to the SF tasks, the participants were given a question aiming to measure stated preferences with respect to climate policy (T0). Here, the participants were asked which one of the following statements came closest to their personal view:
1. Society should not take any steps to reduce emissions of greenhouse gases (such as CO2).
2. Society should reduce emissions of greenhouse gases in the future, in response to climate impacts as they actually occur.
3. Society should take moderate actions to reduce emissions of greenhouse gases today, to reduce future climate impacts.
4. Society should take strong action to reduce emissions of greenhouse gases today, to reduce future climate impacts.
5. I do not know/I have not formed an opinion.
The alternatives were formulated to reflect attitudes of “wait and see” (2) or “go slow” (3), as discussed by Sterman (2008).
In the first SF task (T1), participants completed a task, hereafter referred to as the main SF task, which was designed to be similar to the task used by Sterman and Booth Sweeney (2007).Footnote 3 The main SF task consists of a short introductory text, graphs of the annual historic emissions and uptake of CO2, a graph of a scenario with a stabilized amount of CO2 in the atmosphere, and a multiple-choice question (see Fig. 1). Participants were asked to choose, among four alternative graphs, the one depicting emissions and uptake trajectories consistent with the scenario for CO2 stabilization. The correct answer is alternative 3 (marked with a green symbol).
Although the main SF task (see Fig. 1) was designed to be similar to the task used by Sterman and Booth Sweeney (2007), our version contained less superfluous information, in both text and graphs, to avoid cognitive overload. However, we added more elaborate information about the CO2 uptake, which was given the same attention as the emissions. For the first period of the graphs (i.e., 1900–2015), the CO2 emissions and uptake values were produced using a simple climate model (Sterner and Johansson 2017), which simulates the carbon cycle response. The model was driven by widely used historic emissions data (Meinshausen et al. 2011), giving the graphs a realistic appearance.
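To make the stock-and-flow logic behind these graphs concrete, the following minimal sketch (in Python) integrates a stock from given inflow and outflow series. The flow values are invented for illustration, and this is not the climate model used to produce the course graphs.

```python
# Minimal sketch of the accumulation (mass balance) principle behind the SF tasks.
# Illustrative only: the flow values are made up, and this is not the climate
# model of Sterner and Johansson (2017) used to produce the course graphs.

def simulate_stock(inflow, outflow, initial_stock=850.0):
    """Return the stock trajectory given yearly inflow and outflow sequences."""
    stock = initial_stock
    trajectory = []
    for e, u in zip(inflow, outflow):
        stock += e - u  # the stock changes by inflow minus outflow
        trajectory.append(stock)
    return trajectory

# While inflow exceeds outflow the stock keeps rising, even though emissions
# are falling; once inflow equals outflow (from the fourth year onward), the
# stock stops changing.
emissions = [10.0, 9.0, 8.0, 7.0, 7.0, 7.0]
uptake = [6.0, 6.5, 7.0, 7.0, 7.0, 7.0]
print(simulate_stock(emissions, uptake))
```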
No feedback on task performance was provided to the participants at any point during the set of tasks. In the second SF task (T2), participants were randomly assigned to complete one of three alternative tasks, T2A–C (see Table 1). In contrast to the main SF task, these tasks were designed to direct the participants’ attention towards the principles of accumulation. This was done by explicitly asking questions about (T2A–B), or describing (T2C), the relationship that must hold between the flows into and out of a stock for the stock to stabilize at a certain level. As a consequence, and as we argue in the next section, these tasks differ from the main SF task in terms of their knowledge demands, that is, in terms of the type of knowledge they assess. The first task (T2A) uses the carbon cycle as context (see Fig. 2), while the second (T2B) uses a bathtub as context (see Fig. 3). These two tasks are central to our hypothesis (stated in the introduction), as they allow us to investigate whether participants perform better on stock stabilization tasks that explicitly ask about the relationship between the flows into and out of a stock (T2A–B) than on the kind of task used in previous studies (T1) (Dutt and Gonzalez 2012a; Guy et al. 2013; Newell et al. 2016; Sterman and Booth Sweeney 2007). The third task (T2C), which does not involve a question, uses a bathtub analogy to explain atmospheric CO2 accumulation in a simple way (see the figure in the supplementary material); in T2C, the respondents were only asked to confirm that they had studied the analogy. This task, in contrast to T2A–B, presented the participants with the knowledge needed to solve the main SF task.
Thereafter, the participants were asked to complete the main SF task again (T3) (see Table 1 and Fig. 1). The logic behind this was that the alternative tasks, T2A–C, would help participants by pointing to the knowledge needed for solving the main SF task, thus allowing us to investigate whether these three tasks could serve as educational interventions that improve performance on the main SF task.
In addition to testing people’s performance on SF tasks with different knowledge demands, we aim to unpack public understanding of CO2 accumulation by exploring people’s ways of reasoning when solving SF tasks. We did this in task T4 by asking participants to provide a short written explanation of how they reasoned when choosing to keep or change their answer in the second attempt at the main SF task (T3). By combining data on how people answer SF tasks with data on how they reason while doing so, we aim to study the mental representations used by the participants when answering the main SF task. Mental representations are similar to mental models, that is, “personal, internal representations of external reality that people use to interact with the world around them” (Jones et al. 2011), but the term is used here to emphasize that these representations are not assumed to be as stable or static as mental models are sometimes taken to be.
Task design and knowledge demands
As noted above, the tasks—the main SF task (T1/T3) and the alternative tasks (T2A–B)—were designed to assess different types of knowledge. While knowledge can be classified in many ways (Alexander et al. 1991), we draw on a typology described by (among others) Biggs (2003), comprising three types of knowledge:
1. Declarative knowledge, which refers to “knowing about things [such as facts, concepts, and principles], or knowing what” (p. 41)
2. Procedural knowledge, which refers to “knowing how to do things, such as carrying out procedures or enacting skills” (p. 42)Footnote 4
3. Conditional knowledge, which refers to “knowing when to do these things [...] under what conditions one should do this as opposed to that” (p. 42)
These types of knowledge are “characterized by the function they fulfil in the performance of a target task” (de Jong and Ferguson-Hessler 1996, p. 106). To put it differently, we are interested in knowledge-in-use (ibid. p. 110).Footnote 5 Moreover, while “it is certainly possible to know the what of a thing without knowing the how or when of it” (Alexander et al. 1991, p. 323), successful problem solving requires the use of all three of these types of knowledge (Turns and Van Meter 2011). With these theoretical deliberations in mind, we now turn to an epistemological demand analysis (de Jong and Ferguson-Hessler 1996)—i.e., an analysis of the knowledge demands—of our SF tasks.
Tasks T2A (climate context) and T2B (bathtub context) were designed to assess declarative and procedural knowledge of accumulation. That is, in these tasks, participants first have to recall what the principles of accumulation (i.e., principles of mass balance) say—thus demonstrating declarative knowledge. Next, they have to figure out how to apply these principles to arrive at the relationship between the emissions/inflow and uptake/outflow for the amount of CO2 or water to stabilize at a certain level—thus demonstrating procedural knowledge.Footnote 6 The difference between T2A and T2B is mainly the familiarity of the context, where the more familiar context of a bathtub may make it easier to draw on knowledge that is relevant for solving the problem.
In the main SF task (T1/T3), on the other hand, participants not only have to apply the principles of accumulation—thus demonstrating declarative and procedural knowledge (as in T2A–B)—but also have to realize that this is what the task requires them to do—thus demonstrating conditional knowledge. Note that the main SF task does not direct the participants’ attention towards the principles of accumulation; that is, it does not explicitly ask about the relationship between the emissions and uptake for the amount of CO2 to stabilize. As such, one can argue that the main SF task (T1/T3) poses higher demands on knowledge, compared with tasks T2A–B.
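Stated formally, and using notation introduced here only for illustration, the mass-balance principle that both T2A–B and the main SF task rest on can be written as follows:

```latex
% Mass balance for the atmospheric CO2 stock M(t), with emissions E(t) as
% inflow and uptake U(t) as outflow (notation introduced here for illustration)
\[
  \frac{dM}{dt} = E(t) - U(t)
\]
% The stock stabilizes (dM/dt = 0) exactly when E(t) = U(t); as long as
% E(t) > U(t), the stock keeps rising even if emissions are declining.
```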
Data analysis
In addition to descriptive statistics, a chi-square test of homogeneity was used to determine whether the rate of success differed significantly between any pair of groups on the same task, or between any pair of tasks for the same group.
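As an illustration of this kind of comparison, the sketch below (Python, using SciPy) runs a chi-square test on a hypothetical 2×2 table of correct/incorrect counts for two groups; the counts are invented for illustration and are not the study’s data.

```python
# Hedged illustration of a chi-square test of homogeneity on a 2x2 table of
# correct/incorrect counts for two groups. The counts below are invented and
# are NOT the study's data.
from scipy.stats import chi2_contingency

#                correct  incorrect
observed = [[40, 30],   # group A (e.g., participants assigned one task variant)
            [55, 20]]   # group B (e.g., participants assigned another variant)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```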
An inductive thematic analysis (Braun and Clarke 2006) was used to analyze the participants’ written answers to the open-ended question, “Briefly explain how you reasoned when choosing to keep or change your answer.” In line with this kind of qualitative analysis, a set of themes was identified after coding the data and sorting and sifting the codes in an iterative way. (For a more detailed account of the analysis, see the supplementary material.) These themes provided a deeper understanding of the ways of reasoning being used when answering the main SF task and made it possible to relate the performance on the different SF tasks to different ways of reasoning.