Simulation in children’s conscious recursive reasoning
Abstract
When do children acquire the ability to understand recursion—that is, repeated loops of actions, as in cookery recipes or computer programs? Hitherto, studies have focused either on unconscious recursions in language and vision or on the difficulty of conscious recursions—even for adults—when learning to program. In contrast, we examined 10- to 11-year-old fifth-graders’ ability to deduce the consequences of loops of actions in informal algorithms and to create such algorithms for themselves. In our experiments, the children tackled problems requiring the rearrangement of cars on a toy railway with a single track and a siding—an environment that in principle allows for the execution of any algorithm—that is, it has the power of a universal Turing machine. The children were not allowed to move the cars, so each problem’s solution called for them to envision the movements of cars on the track. We describe a theory of recursive thinking, which is based on kinematic simulations and which we have implemented in a computer program embodying mental models of the cars and track. Experiment 1 tested children’s ability to deduce rearrangements of the cars in a train from descriptions of algorithms containing a single loop of actions. Experiment 2 assessed children’s spontaneous creation of similar sorts of algorithms. The results showed that fifth-grade children with no training in computer programming have systematic abilities both to deduce the consequences of informal recursive algorithms and to create them.
Keywords
Recursion · Informal algorithms · Deduction · Abduction · Kinematic simulations

In computer science, any process that contains a loop of actions is recursive (Enderton, 2010). Recursion is also commonplace in daily life—from cookery recipes to laying place settings on a table. It lies at the core of computation: a loop of actions is repeated either for a given number of times or while a given condition continues to hold—though loops that do not terminate are the bane of programmers. But what are the origins of recursion? Most people who have thought about this question have assumed that it depends on innate mental machinery (e.g., Hauser, Chomsky, & Fitch, 2002). This assumption is hard to test, but it does raise a more tractable question: When does recursion first appear as children develop?
Recursive rules are part of grammar, and 7-year-old children can already generate recursive sentences (Berwick, Pietroski, Yankama, & Chomsky, 2011; Miller, Kessel, & Flavell, 1970; Roeper, 2009). Likewise, 10-year-olds can discriminate between diagrams that are the products of recursive processes (fractals) and those that are not (Martins, Laaha, Freiberger, Choi, & Fitch, 2014). The application of recursion in these skills is unconscious, but it is exercised in a deliberate and conscious way in writing computer programs. However, not much research has examined whether children can cope with recursion outside calculation or programming. One obstacle is that recursion is often construed narrowly, as a specialized operation in computer science, in which functions call themselves, or as a sort of reasoning that is self-referential (e.g., Cherubini & Johnson-Laird, 2004). From this perspective, children and adults seldom make recursive inferences. Indeed, this narrow conception of recursion is more relevant to the niceties of logic, computability, and programming, where a function that calls itself is elegant.
Previous studies have examined how children trained in computer programming understand recursion (see Chan Mow, 2008; Mayer, 2013; Sleeman, 1986, for reviews). For example, children’s recursive abilities have been examined in programming languages such as LOGO (Papert, 1980) and LEGO (Resnick, 1994), and 10-year-olds have been shown to have difficulty learning the concept of recursion (e.g., Dicheva & Close, 1996), whereas 11-year-olds have difficulty thinking about how recursive programs work (e.g., Kurland & Pea, 1985). However, programming depends on much more than a grasp of recursion: It calls for knowledge of a formal programming language.
Any recursive function is equivalent to a loop of operations, and such loops are of two sorts. One sort is specified beforehand to be carried out for a given number of repetitions (a for loop), and the other, which can compute functions beyond the scope of for loops, is specified to repeat while a particular condition holds (a while loop; see, e.g., Enderton, 2010; for an introduction, see Johnson-Laird, 1983, chap. 1). This broader notion of recursion as a loop of operations clarifies why it is commonplace in everyday life—for example, “take two pills a day for five days,” “scrub while the stain still shows,” or “beat until the cream holds a peak.” Recursive reasoning therefore concerns the ability to reason about the repetition of actions. So, in attempting to answer our question about when a conscious grasp of recursion first develops in human life, our main assumption is that this is not a matter of understanding calculation or computer programming. It does not call for specialized training in formal languages and symbols, but instead depends on grasping the broader conception of a repeated loop of actions. We therefore simply need participants who can make kinematic simulations of actions, and fifth-grade children can do so (e.g., Caeyenberghts, Wilson, van Roon, Swinnen, & Smits-Engelsman, 2009; Skoura, Vinter, & Papaxanthis, 2009). We need participants who can plan rearrangements, as in the Tower of Hanoi problem, and fifth-grade children can do this as well (e.g., Aamodt-Leeper, Creswell, McGurk, & Skuse, 2001; Keen, 2011). Finally, we need participants who can solve problems using means–ends analysis, and once again, fifth-grade children can do so (e.g., Kuhn, 2013). We do not claim that younger children cannot cope with recursion in a conscious way, but we do claim that fifth-graders appear to be the best population from which to draw a sample that can carry out such recursions at a level better than chance.
Our studies call for three sorts of task:
1. Problem solving: The participants have to solve a rearrangement problem by moving the cars from their given order into a required new order, using a siding on the track where necessary.
2. Deduction: They have to deduce a new order of cars from a description of an algorithm that makes a rearrangement of a given order.
3. Abduction: They have to formulate their own informal algorithm for making a rearrangement. This process of creating an algorithm is a sort of inductive reasoning, but one that is known as “abduction,” because it is more akin to an explanation of how to make a rearrangement than to a generalization from the rearrangement.
In an earlier study, we showed that naïve adults—that is, those who knew nothing about programming or its cognate disciplines—can carry out all three sorts of task (Khemlani, Mackiewicz, Bucciarelli, & Johnson-Laird, 2013). The evidence corroborated their use of kinematic mental simulations. Likewise, in a previous study of fifth-grade children, we examined two of the three tasks. We showed that children can solve problems of rearranging five cars, and that they can abduce informal algorithms for rearranging trains of six cars (Bucciarelli, Mackiewicz, Khemlani, & Johnson-Laird, 2016). We also showed that gestures helped them abduce algorithms when they were not allowed to move the cars.
The present investigation was designed to answer two new questions that earlier studies had never addressed: Could fifth-grade children make deductions from algorithms, and could they abduce recursive algorithms for trains of an indefinite length? In Experiment 1, we therefore examined children’s ability to deduce the consequences of algorithms presented to them in written form; some of these algorithms were for rearranging trains of five cars, and some of them were recursive, containing a loop of operations appropriate for trains of any length. Experiment 2 examined children’s ability to abduce their own informal algorithms for making rearrangements; some of their algorithms had to rearrange trains of six cars, but some of them had to rearrange trains of an indefinite length—that is, correct algorithms had to be recursive and to contain a loop of moves.
In the rest of this introduction, we describe the railway environment and a theory of recursive thinking based on kinematic mental models. Next, we report the two experiments, one on deduction, and one on abduction. We conclude with a general discussion of the implications of their results for alternative theories of recursive reasoning and for the pedagogy of programming.
A domain of recursive problems
The problems call for the rearrangement of the cars of a train on a toy railway with a left track, a right track, and a siding. Three rules constrain the moves:
1. Cars can move only along the tracks: One car cannot jump over another. So, when one car moves, it also moves any car in front of it.
2. Only three sorts of move are allowed: Cars can move from the left track to the right track (R), from the left track to the siding (S), and from the siding back to the left track (L). They cannot move from the right track back to the left track, or from the siding straight to the right track.
3. The trains must be rearranged in as few moves as possible, so when it is necessary to move more than one car, the cars should move together.
The siding allows cars to be stored for a while so that other cars can move unimpeded from the left to the right track. The siding is therefore akin to a stack-like memory. But so, too, is the left track, because cars can shuttle between the two in intricate dances, before they move to the right track.
Children and adults have no difficulty understanding the environment and its rules, and in solving problems that call for rearranging cars in a train (Bucciarelli et al., 2016; Khemlani, Goodwin, & Johnson-Laird, 2015; Khemlani et al., 2013). One potential worry is that the environment is idiosyncratic and not representative of recursive domains. However, given the ability to add cars to the track or remove them, and to have the cars denote zeroes and ones, the system is equivalent to a universal Turing machine, because both the left track and the siding are stacks, from a computational standpoint (Hopcroft & Ullman, 1979). In theory, through these simple additions and the possibility of extending the length of each of the three parts of the track as required to accommodate any number of cars, the railway can carry out any computation.
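To make the environment concrete, the following sketch (in Python, written purely as an illustration for this article and not part of our experimental materials) represents the three parts of the track as strings of car labels. The class name Railway and its method names S, R, and L are assumptions of the example; the move semantics follow the three rules above.

class Railway:
    """Left track, siding, and right track, each held as a string of car labels.
    The car nearest the junction is the last character of `left`, the first
    character of `siding`, and the first (most recent) character of `right`."""

    def __init__(self, train):
        self.left, self.siding, self.right = train, "", ""

    def S(self, n):  # move the n front cars of the left track onto the siding
        self.left, self.siding = self.left[:-n], self.left[-n:] + self.siding

    def R(self, n):  # move the n front cars of the left track onto the right track
        self.left, self.right = self.left[:-n], self.left[-n:] + self.right

    def L(self, n):  # move the n nearest cars on the siding back to the left track
        self.siding, self.left = self.siding[n:], self.left + self.siding[:n]

    def __repr__(self):  # e.g., "A[BCD]" for left = A, siding = BCD, right track empty
        return f"{self.left}[{self.siding}]{self.right}"

For instance:

track = Railway("ABCD")
track.S(3)      # A[BCD]: three cars move together from the left track to the siding
track.R(1)      # [BCD]A: car A moves to the right track
track.L(1)      # B[CD]A: car B returns from the siding to the left track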
The rearrangement we described earlier is a permutation of the original train, with the order of its cars reversed. Permutations have an interesting property seldom mentioned in texts on the topic (cf. Bona, 2012): a particular permutation—and there is a countable infinity of them—can apply to any number of entities. For example, a reversal can apply to trains of any length. Hence, an algorithm for reversals needs to work for any number of cars and is bound to call for at least one recursive loop of operations. That is why we used the railway environment in our studies.
A theory of the abduction of algorithms
An infinite number of algorithms can compute the same function, such as a reversal (Enderton, 2010). The process of formulating an algorithm is therefore akin to the abduction of an explanation: It goes beyond the given information, which needs only to state the inputs and outputs of a function. To abduce an algorithm that solves any instance of a rearrangement, such as a reversal of cars, three steps are necessary.
The first step is to solve instances of the problem, and a partial means–ends analysis suffices. Consider the reversal of the train ABCD into DCBA, where the cars on the siding are shown in square brackets between those on the left track and those on the right track, and each move is denoted by its type and the number of cars it shifts. The analysis works backward from the rightmost car of the goal, which is A. To free it, the three cars in front of it move to the siding:
A[BCD] (S 3)
and A then moves to the right track:
[BCD]A (R 1)
that is, a move of one car to the right track. Because car A has now been solved, the goal can be updated to DCB. It is easy to solve its rightmost car. We move B from the siding,
B[CD]A (L 1)
over to the right track:
[CD]BA (R 1)
and the remaining cars, C and D, are solved in the same way.
This partial means–ends analysis can solve any rearrangement, but to guarantee a minimal solution—one with the fewest possible moves—takes some exploration in certain cases. The process could be carried out in actual moves on the railway track, but children do have some ability to simulate moves if they are prohibited from touching the actual cars (Bucciarelli et al., 2016). Instead of performing physical moves, children construct mental models of which cars are where on the railway track. We invite readers to imagine how they would rearrange ABCD so that D is at the back of the train: DBCA. It is not difficult. As you may notice, mental models are iconic, in that their structure corresponds to what they represent (Johnson-Laird, 2006, chap. 2). So, they represent the spatial arrangement of the track in a spatial model and simulate the movements of the cars on the track in a kinematic sequence of models. But mental simulations are also costly: Each move either sets up a new mental model or updates an existing one, so a simulation depends on the processing capacity of working memory.
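The sketch below (again illustrative Python, assuming the hypothetical Railway class above; the name solve is likewise an assumption of the example) implements one greedy version of this partial means–ends analysis: it delivers the cars of the goal one at a time, starting with the goal's rightmost car, and yields correct, though not always minimal, solutions.

def solve(track, goal):
    """Deliver the cars of `goal` to the right track, rightmost car first.
    Returns the list of moves; correct, but not guaranteed to be minimal."""
    moves = []
    def do(move, n):
        getattr(track, move)(n)      # apply S, R, or L to the track
        moves.append((move, n))
    for car in reversed(goal):       # work backward from the rightmost car of the goal
        if car in track.siding:      # pull it (and any cars in front of it) off the siding
            do("L", track.siding.index(car) + 1)
        blockers = len(track.left) - 1 - track.left.index(car)
        if blockers:                 # park the cars in front of it on the siding
            do("S", blockers)
        do("R", 1)                   # send the target car to the right track
    return moves

For the reversal of ABCD into DCBA, solve(Railway("ABCD"), "DCBA") returns the eight moves (S 3)(R 1)(L 1)(R 1)(L 1)(R 1)(L 1)(R 1), which happen to be minimal; for some other rearrangements, consecutive moves of the same sort would have to be merged to reach a minimal solution.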

The second step is to abduce the algorithm from such solutions. For example, the minimal solutions for the reversals of trains of four cars and of five cars are, respectively:
(S 3)(R 1)(L 1)(R 1)(L 1)(R 1)(L 1)(R 1)
(S 4)(R 1)(L 1)(R 1)(L 1)(R 1)(L 1)(R 1)(L 1)(R 1)
To abduce the algorithm, one needs to detect a loop in these sequences, as well as any moves that occur before or after it. In one minimal solution of the reversal, there is an initial move of (S n – 1), where n is the number of cars in the train, then a loop of two moves: (R 1)(L 1), and finally a move of (R 1). The general specification of a for loop calls for the solution of two simultaneous linear equations, which seems beyond the competence of fifth-grade children. A simpler solution (albeit one that has more computational power) is to simulate the solution and to determine the situation that causes a while loop to continue. For a reversal, the loop continues as long as there is at least one car on the siding. Other sorts of problem have different loops with different while conditions, but they can be determined from simulations of their solutions.
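One mechanical way to detect such a loop is to scan a solution for the longest block of moves that repeats consecutively, starting from blocks of half the sequence's length and working downward. The sketch below (illustrative Python; the function name find_loop is an assumption of the example) implements that search:

def find_loop(moves):
    """Find the longest block of moves that repeats consecutively.
    Returns (start, block_length, repetitions), or None if there is no loop.
    The search starts from blocks of half the sequence length and works down."""
    n = len(moves)
    for length in range(n // 2, 0, -1):
        for start in range(n - 2 * length + 1):
            block = moves[start:start + length]
            reps = 1
            while moves[start + reps * length : start + (reps + 1) * length] == block:
                reps += 1
            if reps >= 2:
                return start, length, reps
    return None

Applied to the eight moves that reverse a four-car train, find_loop returns (1, 2, 3): a loop of two moves, (R 1)(L 1), repeated three times, preceded by (S 3) and followed by (R 1).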
The third step is to test the algorithm—a step that programmers neglect at their peril—to assess whether it does what it is supposed to do. This step calls for deduction. It simulates the effect of the algorithm on a train of a new length, in order to deduce the consequences of the algorithm and check that the algorithm halts with the required rearrangement on the right track.
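In the same illustrative notation, the test amounts to running the abduced procedure on a longer train and comparing the outcome with the required rearrangement (the function names below are, again, assumptions of the sketch):

def reversal(track):              # the minimal reversal described above
    track.S(len(track.left) - 1)  # (S n - 1): all but one car to the siding
    while track.siding:           # loop of (R 1)(L 1) while cars remain on the siding
        track.R(1)
        track.L(1)
    track.R(1)                    # final (R 1)

def halts_with(algorithm, train, required):
    """Run an abduced algorithm on a train and check that it halts with the
    required rearrangement on the right track and nothing left elsewhere."""
    track = Railway(train)
    algorithm(track)
    return (track.left, track.siding, track.right) == ("", "", required)

halts_with(reversal, "ABCDEF", "FEDCBA")   # True: the algorithm passes the test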
A computer program, mAbducer, that the fourth author wrote carries out all three of these steps for any rearrangement that calls for a finite number of moves or a single recursive loop. It is an automatic programmer for rearrangement problems (see Khemlani et al., 2013), and its source code is available at http://mentalmodels.princeton.edu/models/. This automatic programmer generates algorithms, using a for loop and a while loop, that solve the problems, and it also describes them in both a programming language, Lisp, and informal English. It provided minimal correct algorithms as a basis for the problems in our experiments. The kinematic model that it uses to simulate moves on the track is schematic, and we have already illustrated it in the moves for the reversal above.
In summary, the theory and its computer implementation rest on three assumptions that derive from the theory of mental models—henceforth, the model theory, for short. First, simulations depend on iconic models. They are iconic in that their structure corresponds to the structure of the world (Johnson-Laird, 1983). Second, they are kinematic, in that they unfold in time in the same sequence as the required moves for a problem—that is, they use time to represent time (Hegarty, 2004; Schaeken, Johnson-Laird, & d’Ydewalle, 1996). Third, they are schematic, and therefore more parsimonious than visual images, though they may underlie such images. Hence, they yield faster inferences than do images (Knauff, Fangmeier, Ruff, & Johnson-Laird, 2003). A model can therefore represent what is common to many possibilities that differ in their details.
The theory makes three principal predictions. Fifth-grade children should be able to deduce the consequences of algorithms containing loops, and to abduce such algorithms, with better-than-chance accuracy (Prediction 1). They should make more accurate deductions and abductions for algorithms without loops than for those with loops, because the latter impose an additional load on working memory (Prediction 2). Because simulations depend on the processing capacity of working memory, children should differ in ability (Prediction 3).
Experiment 1: Children’s deductions from algorithms
The experiment examined children's ability to deduce the consequences of three sorts of algorithm for rearranging the cars in a train:
1. a reversal of the order of the cars in a train, so the train AEIOU would become UOIEA;
2. a parity sort, in which all the cars in even-numbered positions would be moved in front of all the cars in odd-numbered positions, so that the train AEIOU would become EOAIU;
3. a center palindrome, in which a train would be rearranged by pairing its two outer cars, then the next pair of outermost cars, and so on, until only the center car would be left, which would be put at the end of the train—so the train AEIOU would become AUEOI (see the Materials for the algorithms).
Method
Participants
The participants were 30 fifth-grade children (16 females and 14 males; mean age 10 years 3 months) attending a primary school in Turin, Italy. The Ethical Committee of the University of Turin approved the experiment, and the children took part in the study after their parents had given their informed consent.
Design
The participants deduced the consequences of three sorts of algorithm (reversal, parity sort, and center palindrome), described in one version as a finite list of actions—that is, without a loop—and in another version with a while loop. The six problems were presented in a different random order to each participant, with the constraint that the two versions of a problem never followed one after the other.
Materials
1. The reversal algorithm, which reverses the order of the cars:  AEIOU[] 
Move one less than the number of cars to the siding.  A[EIOU] 
While there are more than zero cars on the siding,  
move one car to the right track,  [EIOU]A 
move one car to the left track.  E[IOU]A 
Three further iterations of the while loop yield:  U[]OIEA 
Move one car to the right track.  []UOIEA 
2. The parity-sort algorithm puts all the cars in even-numbered positions in front of all the cars in odd-numbered positions:  AEIOU[] 
While there are more than two cars on the left track,  
move one car to the right track,  AEIO[]U 
move one car to the siding.  AEI[O]U 
A further repetition of this loop yields:  A[EO]IU 
Move one car to the right track.  [EO]AIU 
Move two cars to the left track.  EO[]AIU 
Move two cars to the right track.  []EOAIU 
3. The center palindrome algorithm transforms a train by pairing the two outer cars, then the next pair of outermost cars, and so on, until only the center car is left and it is put at the end of the train:  AEIOU[] 
Move two cars from the left track to the siding.  AEI[OU] 
Move one car to the right track.  AE[OU]I 
While there are more than zero cars on the left track,  
move one car to the left track,  AEO[U]I 
move two cars to the right track.  A[U]EOI 
A further repetition of the loop yields the solution:  []AUEOI 
To avoid the calculation required in the initial moves, which are irrelevant to the grasp of a loop, we reformulated them as above so that the loop would apply only to trains of five cars.
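For concreteness, the two algorithms with fixed initial moves translate directly into the illustrative Python notation sketched in the introduction (the reversal was sketched there); as just noted, their loops apply only to trains of five cars. The function names are assumptions of the example.

def parity_sort(track):          # AEIOU[] -> []EOAIU, five cars only
    while len(track.left) > 2:   # while there are more than two cars on the left track
        track.R(1)               # move one car to the right track
        track.S(1)               # move one car to the siding
    track.R(1)                   # move one car to the right track
    track.L(2)                   # move two cars to the left track
    track.R(2)                   # move two cars to the right track

def center_palindrome(track):    # AEIOU[] -> []AUEOI, five cars only
    track.S(2)                   # move two cars from the left track to the siding
    track.R(1)                   # move one car to the right track
    while track.left:            # while there are more than zero cars on the left track
        track.L(1)               # move one car to the left track
        track.R(2)               # move two cars to the right track

t = Railway("AEIOU"); parity_sort(t); print(t)          # []EOAIU
t = Railway("AEIOU"); center_palindrome(t); print(t)    # []AUEOI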
Procedure
The participants were tested one at a time in a quiet room and in the sole presence of the experimenter. They learned the rules for moving cars, and they were told that they had to read the description of a series of moves and to work out the effect of these moves on the final order of the cars in the train on the right side of the track. They read the description of the algorithm, which remained in view throughout the trial. We video-recorded the experimental sessions, and later transcribed them.
Results and discussion
The data from five of the 30 children were excluded from the analysis because either the children moved cars when solving a problem or a technical error occurred. The statistical analyses were performed on the remaining 25 participants. The analyses assessed whether the group of children as a whole was able to solve the problems at a level better than chance, whether they were more accurate with the problems without loops than with the problems with loops, and whether they differed in ability.
Table 1. Numbers of children (N = 25) in Experiment 1 who made correct deductions of the rearrangements of cars according to three sorts of algorithm, either without loops or with loops

Sort of Algorithm  Without Loop  With Loop
Reversal  20  13
Parity sort  11  5
Center palindrome  9  8
The children were more accurate in deducing the consequences of algorithms without loops (53% correct) than of algorithms with loops (35% correct; Wilcoxon test, z = 2.12, p < .02, Cliff’s δ = .30; Prediction 2). An analysis of the individual problems showed that only the reversal yielded a reliable difference in difficulty: It was easier in the algorithm without a loop than in the algorithm with a loop (Wilcoxon test: z = 2.33, p < .02, Cliff’s δ = .28). The six problems differed in difficulty [Cochran’s Q test: χ²(5) = 28.31, p < .001]. It may be that reversals are easy because they repeat a loop of two moves of single cars three times, so the children can grasp the loop better than they can the loops in the other algorithms, which repeat only twice. But any definitive explanation would call for a much larger sample of different rearrangements, of which, in principle, a countable infinity exist.
The children themselves differed in their ability to make accurate deductions from algorithms [Friedman nonparametric analysis of variance: χ²(5) = 28.31, p < .0001; Prediction 3]. Three children made no correct deductions, and two children made correct deductions on every problem. There was no reliable difference in accuracy between the sexes: Boys were 50% correct, and girls were 38% correct (Mann–Whitney test: z = .99, p = .32, Cliff’s δ = .23).
Experiment 1 corroborated the three predictions of the model theory. The children as a group deduced the consequences of each sort of algorithm much better than chance, they were more accurate for algorithms without loops than for algorithms with loops, and they differed in ability. We therefore devised Experiment 2 to find out whether children from the same population could themselves abduce algorithms containing loops of operations.
Experiment 2: Children’s abduction of algorithms
Method
Participants
The participants were 35 fifth-grade children (16 females and 19 males; mean age 11 years) attending three primary schools in Turin, Italy. The Ethics Committee of the University of Turin approved the study, and the children took part after their parents had given informed consent.
Design
Table 2. Initial and final states of the ten problems in Experiment 2

Names of the Problems  Initial States of the Two Versions  Final States of the Two Versions
1. Swap adjacent pairs  FEDCBA ■FEDCBA  EFCDAB ■EFCDAB
2. Reversals  FEDCBA ■FEDCBA  ABCDEF ABCDEF■
3. Parity sort  FEDCBA ■FEDCBA  FDBECA ■FDB■ECA
4. Back-to palindrome  AABBCC AABBCC■  ABCCBA ABC■CBA
5. Two-loop palindrome  CCBBAA ■CCBBAA  ABCCBA ABC■CBA
Materials and procedure
The five sorts of problem, each in the two versions, are illustrated in Table 2. We used white cars labeled with letters (A, B, C, D, E, F), a cardboard tunnel, and photographs of the required rearrangements of the cars. The participants were tested one by one in a quiet room with only the experimenter present. They carried out an initial training in which they learned the rules for moving the cars and how to describe the moves using only the number of cars in a move, without referring to the cars by letter. They were told that the cars on the left track had to be rearranged into the order shown in the photograph behind the right track. They were also told that some trains had an unknown number of cars, so they would have to describe a method of rearranging a train of any length—that is, the tunnel hid many cars, and “we do not know how many.”
The key instructions began with these sentences for six cars: “Try to tell me in words, without moving the cars, how you would form this train [in the picture]. Remember not to use the names of the cars, but tell me how many cars move from one track to another.” Once the child had created an algorithm, the experimenter introduced the tunnel and reminded the child that it hid an unknown number of cars, which were part of the train that the child could see. The experimenter then constructed these trains of indefinite length in front of the child, who understood that the tunnel hid an unknown number of cars. The instructions for these recursive problems were: “Now, because we do not know how many cars there are in this train, we need rules that summarize the moves to form the train in the picture. The rules must be as short as possible: you must use the smallest number of words.” We video-recorded the experimental sessions and later transcribed the children’s algorithms.
Results and discussion
Coding of algorithms and of loops

Two independent judges coded the children's algorithms for accuracy and for the sorts of loop, if any, that they contained. We distinguished three sorts of loop:
While loops specify the termination condition in advance—for example, “and so on until the cars are finished.”
For loops specify the number of iterations in advance, though they might do so using a quantifier such as “all,” to refer to the unknown number of cars in a train—for example: “. . . we do like that for all the cars we can’t see,” “one by one take the cars and lead them back [to the left track] and then to the goal.”
Proto-loops specify neither the termination condition nor the number of iterations, but indicate that the same move will be repeated—for example, “and so on,” “and we go always like that,” and “we move the car from the side to the left then to the goal, and also the last one.”
The two independent judges agreed in their coding of the algorithms on 92% of trials (Cohen’s κ = .84, p < .0001). They also agreed on 97% of trials about the occurrence of no loops, proto-loops, for loops, and while loops in the algorithms (Cohen’s κ = .94, p < .0001). They resolved the discrepancies in both codings prior to the statistical analyses. Because the children often used quantifiers, such as “all the cars,” the while and for loops differed less in their informal versions than they do in formal programs: the children described for loops without explicit numbers of required repetitions.
Table 3. Translation from Italian of two children’s algorithms in Experiment 2 for swapping adjacent pairs of cars in a train of indefinite length, and their transcriptions into mAbducer’s notation
Move  Descriptions and Gestures  Transcription of the Move 

Participant 8’s transcript for an algorithm with a while loop that swaps adjacent pairs to rearrange ■FEDCBA into ■EFCDAB  
1  “One to the siding . . .” (draws in the air a trajectory from the left track to the siding)  ■ FEDCB[A] 
2  “. . . the other to the goal.” (draws in the air a trajectory from the left track to the right track)  ■ FEDC[A]B 
3  “One on the siding goes back then to the goal . . .” (draws in the air a trajectory from the siding to the left track and then to the right track)  ■ FEDCA[]B ■ FEDC[]AB 
4  “. . . and so on until all the cars are finished.” (moves one hand in front of the other in a continuous movement in a wheellike movement) The description is of a while loop, because it indicates moves applied to many cars and states the termination condition.  ■ FED[C]AB ■ FE[C]DAB ■ FEC[]DAB ■ FE[]CDAB ■ F[E]CDAB ■ [E]FCDAB ■ E[]FCDAB ■ []EFCDAB 
Participant 10’s transcript for an algorithm with a for loop that swaps adjacent pairs  
1  “One should always put a car to the siding . . .” (P10 made no gestures)  ■ FEDCB[A] 
2  “. . . and one to the goal . . .”  ■ FEDC[A]B 
3  “. . . then the one on the siding goes back . . . to the goal.”  ■ FEDCA[]B ■ FEDC[]AB 
4  “One to the siding . . .”  ■ FED[C]AB 
5  “. . . and the other to the goal.”  ■ FE[C]DAB 
6  “One back and then to the goal . . .”  ■ FEC[]DAB ■ FE[]CDAB 
7  “. . . and swap them, and do that for all the (cars of the) train.” The assertion is a for loop, because it indicates moves applied to many cars, specifying all of them in advance.  ■ F[E]CDAB ■ [E]FCDAB ■ E[]FCDAB ■ []EFCDAB 
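Both children describe the same loop of four one-car moves. In the illustrative Python notation used earlier (and assuming that the tunnel hides an even number of cars), the while-loop version that Participant 8 states, and a for-loop version akin to Participant 10's, can be sketched as follows; the function names are assumptions of the example.

def swap_adjacent_pairs(track):  # e.g., FEDCBA -> EFCDAB, for any even number of cars
    while track.left:            # "... and so on until the cars are finished"
        track.S(1)               # "One to the siding ..."
        track.R(1)               # "... the other to the goal."
        track.L(1)               # "One on the siding goes back ..."
        track.R(1)               # "... then to the goal."

def swap_adjacent_pairs_for(track, n_pairs):  # for-loop version: the number of pairs is given
    for _ in range(n_pairs):
        track.S(1); track.R(1); track.L(1); track.R(1)

Run on Railway("FEDCBA"), either version halts with the right track reading EFCDAB, the required rearrangement.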
Statistical analysis
The statistical analyses were designed to assess whether the group of children as a whole was able to formulate algorithms at a level better than chance, whether they were more accurate in their algorithms for trains of six cars than for trains of indefinite length, and whether they differed in ability.
Table 4. Numbers of children (N = 35) in Experiment 2 who made correct abductions of five sorts of algorithm for trains of six cars and for trains of indefinite length

Sorts of Algorithm  Trains of Six Cars  Trains of Indefinite Length
Swap adjacent pairs  27  19
Reversal  25  4
Parity sort  22  2
Back-to palindrome  22  0
Two-loop palindrome  19  1
The children were more accurate in abducing algorithms for trains of six cars (66% correct) than for trains of indefinite length (15% correct; Wilcoxon test: z = 5.16, p < .0001, Cliff’s δ = .88; Prediction 2). The same result occurred for each of the five pairs of problems (in Wilcoxon tests, z ranged from 2.8 to 4.7, p ranged from < .005 to < .0001, and Cliff’s δ ranged from .23 to .63). The children used loops in 10% of their algorithms for trains of six cars, and in 67% of their algorithms for trains of indefinite length, whether the algorithms were right or wrong (Wilcoxon test: z = 4.02, p < .0001, Cliff’s δ = .51). The ten problems differed in difficulty [Cochran’s Q test: χ²(9) = 133.36, p < .001]. The algorithm for swapping adjacent pairs was easy, for both six cars and indefinite numbers of cars, perhaps because it is a single loop of one-car moves that is repeated three times for a six-car problem. Likewise, the loop for reversals, as we mentioned before, is also simple. In contrast, the palindrome is the most difficult, if only because its algorithm uses two separate loops. The Appendix has descriptions of all five recursive algorithms. It also shows that their Kolmogorov complexity—the number of symbols required to describe them in the formal language of the mAbducer notation—predicts their rank order of difficulty for the children (see Khemlani et al., 2013, for the similar success of this metric for adult participants).
The children differed in their ability to abduce the algorithms [Friedman nonparametric analysis of variance: χ²(9) = 133.36, p < .0001; Prediction 3]. One child abduced no correct algorithms, whereas the most accurate children abduced five correct algorithms. The difference in accuracy between the sexes was not reliable: Boys were 37% correct, girls were 44% correct (Mann–Whitney test: z = 1.03, p = .30, Cliff’s δ = .20).
The most striking result was that 22 out of the 35 children formulated at least one correct recursive algorithm, which contained a loop of operations. This result shows that a sample of fifth-grade children performed reliably better than chance at the task. To the best of our knowledge, no previous study has obtained such a result. The results also corroborated our earlier finding that children could abduce algorithms for trains of six cars (Bucciarelli et al., 2016). In sum, the results corroborated the three predictions of the model theory. The children as a group abduced algorithms at a level better than chance, they were more accurate for algorithms without loops (for trains of six cars) than for algorithms with loops (for trains of indefinite length), and they differed in ability.
General discussion
In Experiment 1, fifth-grade children with no training in programming deduced the consequences of algorithms at a level better than chance (Prediction 1), including recursive algorithms such as this one for reversing the order of the cars in a train:
Move one less than the number of cars to the siding.
While there is at least one car on the siding,
move one car from the left track to the right track,
move one car from the siding to the left track.
Move one car to the right track.
Even though they were not allowed to move the actual cars on the track, they could imagine the effects of this recursive algorithm. It was easier for them to deduce the consequences of algorithms that were lists of actions than of those that were recursive and contained a loop of moves, such as the preceding example (Prediction 2). The three algorithms differed in difficulty, and the one above was the easiest, perhaps because it has a loop of two moves of single cars that is repeated more often than are the loops in the other two algorithms. The repetition of simple moves could help children deduce the moves’ consequences, but other factors may be in play, such as the load on working memory. The space of possible rearrangements is boundless, and without results from a larger sample of algorithms, it is impossible to draw definite conclusions. Congruent with the role of working memory, however, the children in Experiment 1 differed in their ability to deduce the consequences of algorithms (Prediction 3).
Fifth-grade children can also abduce their own informal algorithms containing loops of moves. The sample as a whole in Experiment 2 was able to do so at a level better than chance (Prediction 1). This result contrasts with earlier findings that, in computer programming, fifth-graders have difficulty coping with recursion (e.g., Dicheva & Close, 1996; Kurland & Pea, 1985). Yet, like deduction, the abductive task was easier for them when a list of actions sufficed for trains of six cars than when it called for a loop of actions on trains of indefinite length (Prediction 2). The five algorithms differed in difficulty; the Appendix presents them and shows that their Kolmogorov complexity predicts their difficulty (as it had for the adult participants in Khemlani et al., 2013). Again, the difference in the children’s abilities suggests that the load on working memory affects performance.
What does it take for you to abduce a recursive algorithm? Our studies corroborated the model theory described earlier in the article. It postulates that you need three interrelated abilities. First, you have to be able to solve the problems that the algorithm is going to solve. In the railway environment, you can do so using the partial means–ends analysis. You can work backward from each car at the head of the goal. If you cannot carry out the actual moves on the track—and the participants in the Bucciarelli et al. (2016) study were not allowed to—then you have to imagine them. So, you carry out a kinematic simulation of the solution. But solutions of rearrangements are not enough for an abduction.

Second, you have to detect a loop in the sequence of moves. The simulation of the reversal of a train of four cars, for example, yields the sequence:
(S 3)(R 1)(L 1)(R 1)(L 1)(R 1)(L 1)(R 1)
You have to notice that it opens with a move of all but one of the cars to the siding:
(S (n – 1))
where n is the number of cars in the train, that this move is followed by a move of one car to the right track:
(R 1)
and that the rest of the sequence is a loop of two moves:
((L 1)(R 1)) while there is at least one car on the siding.
There is an alternative minimal algorithm, which we described earlier (see also the Appendix).
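In the illustrative Python notation used earlier, this decomposition—which differs from the Experiment 1 algorithm only in where the boundaries of the loop fall—becomes a procedure for reversing a train of any length (the name reverse is an assumption of the sketch):

def reverse(track):
    n = len(track.left)
    track.S(n - 1)           # (S (n - 1)): all but one car to the siding
    track.R(1)               # (R 1): the remaining car to the right track
    while track.siding:      # ((L 1)(R 1)) while there is at least one car on the siding
        track.L(1)
        track.R(1)

For ABCD it produces exactly the sequence of eight moves shown above, and the third step, testing, amounts to running it on a train of another length and checking the outcome.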
The third and final step is to test your algorithm. You deduce its consequences for a train of a new length. You carry out a mental simulation of it on a longer train. You apply each of its moves on the train, and you check that the end result matches the required reversal. If it does, then the algorithm is complete, assuming that you can describe it (cf. Table 3).
Does any alternative theory of the abduction of algorithms give a different account of representation and process? Cognitive scientists have pursued many accounts of mental representations. Some have claimed that they are not required for intelligent behavior (e.g., Brooks, 1991). Others have argued against a causal role for visual images, and posited instead “mentalese”—that is, a language of thought made up of strings of symbols (e.g., Pylyshyn, 2003). In fact, it was impossible to formulate algorithms in our study without using mental representations. One representation that may be optimal for formulating an algorithm is a kinematic model. It is iconic, in that it uses time to represent time and spatial relations to represent spatial relations in the world (see also Hegarty, 2004). Such mental models can underlie visual images, or they may be as abstract as the notation in the present article—a notation that the mAbducer program uses. Indeed, a brain-imaging study has shown that people can reason from models without transforming them into visual images, which in fact impede reasoning (Knauff et al., 2003). Not all reasoning has to depend on iconic models: People who are taught logic can also learn to use formal rules of inference. Likewise, the model theory relies on a representation of meaning that is not iconic, and it is from this representation that it constructs models (Khemlani & Johnson-Laird, 2013). Theorists could argue that all representations rest on such a mentalese, which in turn rests on nerve impulses—just as mAbducer’s representations rest on machine language, which in turn rests on an electronic binary code. In both cases, however, the abduction of algorithms demands a higher level of representation, one that humans can envisage and manipulate consciously.
A representation in mentalese would have to describe the railway domain in logical expressions, such as:
∃!x∃!y((car x) ∧ (train y) ∧ (at-front-of x y) ∧ (on left-track y))
(There is a car, x, and a train, y, such that x is at the front of y and y is on the left track.)
Such expressions are not iconic: their structure does not correspond to the structure of the track.
Hence, logical systems for spatial reasoning tend to make the wrong predictions of difficulty (see, e.g., Jahn, Knauff, & Johnson-Laird, 2007; Schaeken, Girotto, & Johnson-Laird, 1998). In contrast, iconic diagrams improve both the accuracy and speed of reasoning in comparison with non-iconic verbal premises (Bauer & Johnson-Laird, 1993). It therefore seems that mental simulations should be based on iconic models rather than on logical expressions. The crux of abduction is to discover repeated sequences of operations. The mAbducer program finds them using a recursive process that starts with loops of half the length of the sequence of moves in a solution, and works its way downward through shorter lengths. Human reasoners must also search for loops, but they are fallible, and some patterns are too difficult for them to detect (Khemlani et al., 2013).
Mathematicians, programmers, and cognitive scientists reason about recursion. Many psychological studies have investigated novice programmers trying to formulate algorithms in a programming language (e.g., Kurland & Pea, 1985). Other studies have used arithmetic. For example, teenagers are better at calculating an arithmetical function when it is expressed as an iterative loop of operations than as a function that calls itself (Anzai & Uesato, 1982). However, those who start with iterative calculations do better than those who start with the self-referential calculations. Likewise, the experience of informal algorithms in the railway setting could help budding programmers to master recursive functions. It might even provide a transparent environment in which to teach programming. No valid test exists for predicting the programming ability of individuals who know nothing about it. The children in our studies differed in their skill in abducing algorithms for rearrangements, and so the task could predict their ability to program.
Recursion underlies languages (e.g., Hauser et al., 2002), counting and arithmetic (e.g., Enderton, 2010), theory of mind (e.g., Corballis, 2011), and the recognition of visual patterns (e.g., Martins, Mursic, Oh, & Fitch, 2015). One controversy concerns whether all these cases of recursion are rooted in language (Hauser et al., 2002) or instead in several cognitive domains (Jackendoff & Pinker, 2005). In a study of an agrammatical patient, Zimmerer and Varley (2010) showed that recursion was absent in the patient’s grammar, but present in other domains, such as arithmetic. As far as we know, no one has established a double dissociation between language and recursion. But in many recursive domains, such as the formulation of grammatical sentences or inferences from the theory of mind, the underlying recursive principles are unconscious. In the present studies, however, children were conscious of explicit loops of actions from which they had to make deductions, and they attempted to create such loops in abducing informal programs. Both tasks can be carried out using the symbols of an artificial language, such as the notation for mAbducer or LOGO, and without any overt use of natural language. They could therefore provide a test bed to examine the recursive abilities of individuals bereft of language, and even of members of other species.
In conclusion, our study of fifth-graders’ grasp of recursion revealed that they can deduce the consequences of some algorithms, and that they can abduce the loops of moves required for others. They are more accurate with algorithms that are lists than with algorithms that include loops. Simulations appear to be crucial: They unfold in time in a sequence of kinematic models, which have to be held in working memory, and processing capacity is therefore critical. This may account for the differences in ability from one child to another. Recursion is an unconscious foundation for many human skills, from perception to speech. It is a conscious component of logic and programming. The informal mastery of our participants in tasks for which they had no explicit preparation suggests that its roots are part of human competence. This ability may be founded on the simulation of sequences of events, and in turn on the ability to make such simulations the objects of conscious thought.
Author note
The data for this article are archived in a database to be found at https://osf.io/jg2fy. The research was funded in part by the Polish National Science Centre [Grant 2014/14/M/HS6/00916] (to R.M.). We are grateful to Matthew Traxler and the anonymous reviewers for their helpful advice and criticisms of earlier drafts.
References
Aamodt-Leeper, G., Creswell, C., McGurk, R., & Skuse, D. H. (2001). Individual differences in cognitive planning on the Tower of Hanoi task: Neuropsychological maturity or measurement error? Journal of Child Psychology and Psychiatry, 42, 551–556.
Anzai, Y., & Uesato, Y. (1982). Learning recursive procedures by middle-school children. In Proceedings of the Fourth Annual Conference of the Cognitive Science Society (pp. 100–102). Hillsdale: Erlbaum.
Bauer, M. I., & Johnson-Laird, P. N. (1993). How diagrams can improve reasoning. Psychological Science, 4, 372–378. https://doi.org/10.1111/j.1467-9280.1993.tb00584.x
Berwick, R. C., Pietroski, P., Yankama, B., & Chomsky, N. (2011). Poverty of the stimulus revisited. Cognitive Science, 35, 1207–1242.
Bona, M. (2012). Combinatorics of permutations (2nd ed.). Boca Raton: Taylor & Francis.
Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–160.
Bucciarelli, M., Mackiewicz, R., Khemlani, S. S., & Johnson-Laird, P. N. (2016). Children’s creation of algorithms: Simulations and gestures. Journal of Cognitive Psychology, 28, 297–318.
Caeyenberghts, K., Wilson, P. H., van Roon, D., Swinnen, S. P., & Smits-Engelsman, B. C. M. (2009). Increasing convergence between imagined and executed movement across development: Evidence for the emergence of movement representations. Developmental Science, 12, 474–483.
Chan Mow, I. (2008). Issues and difficulties in teaching novice computer programming. In M. Iskander (Ed.), Innovative techniques in instruction technology, e-learning, e-assessment (pp. 199–204). New York: Springer.
Cherubini, P., & Johnson-Laird, P. N. (2004). Does everyone love everyone? The psychology of iterative reasoning. Thinking & Reasoning, 10, 31–53.
Corballis, M. C. (2011). The recursive mind: The origins of human language, thought, and civilization. Princeton: Princeton University Press.
Dicheva, D., & Close, J. (1996). Mental models of recursion. Journal of Educational Computing Research, 14, 1–23.
Enderton, H. B. (2010). Computability theory: An introduction to recursion theory. San Diego: Academic Press.
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298, 1569–1579. https://doi.org/10.1126/science.298.5598.1569
Hegarty, M. (2004). Mechanical reasoning by mental simulation. Trends in Cognitive Sciences, 8, 280–285.
Hopcroft, J. E., & Ullman, J. D. (1979). Introduction to automata theory, languages, and computation (1st ed.). Boston: Addison-Wesley.
Jackendoff, R., & Pinker, S. (2005). The nature of the language faculty and its implications for evolution of language (Reply to Fitch, Hauser, and Chomsky). Cognition, 97, 211–225.
Jahn, G., Knauff, M., & Johnson-Laird, P. N. (2007). Preferred mental models in reasoning about spatial relations. Memory & Cognition, 35, 2075–2086.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge: Harvard University Press.
Johnson-Laird, P. N. (2006). How we reason. Oxford: Oxford University Press.
Keen, R. (2011). The development of problem solving in young children: A critical cognitive skill. Annual Review of Psychology, 62, 1–21.
Khemlani, S., Goodwin, G. P., & Johnson-Laird, P. N. (2015). Causal relations from kinematic simulations. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1075–1080). Austin: Cognitive Science Society.
Khemlani, S. S., & Johnson-Laird, P. N. (2013). The processes of inference. Argument & Computation, 4, 4–20.
Khemlani, S. S., Mackiewicz, R., Bucciarelli, M., & Johnson-Laird, P. N. (2013). Kinematic mental simulations in abduction and deduction. Proceedings of the National Academy of Sciences, 110, 16766–16771.
Knauff, M., Fangmeier, T., Ruff, C. C., & Johnson-Laird, P. N. (2003). Reasoning, models, and images: Behavioral measures and cortical activity. Journal of Cognitive Neuroscience, 15, 559–573.
Kuhn, D. (2013). Reasoning. In P. D. Zelazo (Ed.), The Oxford handbook of developmental psychology (pp. 744–764). Oxford: Oxford University Press.
Kurland, D. M., & Pea, R. D. (1985). Children’s mental models of recursive Logo programs. Journal of Educational Computing Research, 1, 235–243.
Li, M., & Vitányi, P. (1997). An introduction to Kolmogorov complexity and its applications (2nd ed.). New York: Springer.
Martins, M., Mursic, Z., Oh, J., & Fitch, W. T. (2015). Representing visual recursion does not require verbal or motor resources. Cognitive Psychology, 77, 20–41.
Martins, M. D., Laaha, S., Freiberger, E. M., Choi, S., & Fitch, W. T. (2014). How children perceive fractals: Hierarchical self-similarity and cognitive development. Cognition, 133, 10–24.
Mayer, R. E. (2013). Teaching and learning computer programming: Multiple research perspectives. London: Routledge.
Miller, P. H., Kessel, F. S., & Flavell, J. H. (1970). Thinking about people thinking about people thinking about . . . : A study of social cognitive development. Child Development, 41, 613–623.
Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. New York: Oxford University Press.
Papert, S. (1980). Mindstorms. New York: Basic Books.
Pylyshyn, Z. (2003). Return of the mental image: Are there really pictures in the brain? Trends in Cognitive Sciences, 7, 113–118. https://doi.org/10.1016/S1364-6613(03)00003-2
Resnick, M. (1994). Turtles, termites, and traffic jams: Explorations in massively parallel microworlds. Cambridge: MIT Press.
Rips, L. J. (1994). The psychology of proof. Cambridge: MIT Press.
Roeper, T. (2009). The minimalist microscope: How and where interface principles guide acquisition. In J. Chandlee, M. Franchini, S. Lord, & G. M. Rheiner (Eds.), Proceedings of the 33rd Annual Boston University Conference on Language Development (pp. 24–48). Medford: Cascadilla Press.
Schaeken, W. S., Girotto, V., & Johnson-Laird, P. N. (1998). The effect of an irrelevant premise on temporal and spatial reasoning. Kognitionswissenschaft, 7, 27–32.
Schaeken, W. S., Johnson-Laird, P. N., & d’Ydewalle, G. (1996). Mental models and temporal reasoning. Cognition, 60, 205–234.
Shanahan, M. (2016). The frame problem. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/frame-problem/
Skoura, X., Vinter, A., & Papaxanthis, C. (2009). Mentally simulated motor actions in children. Developmental Neuropsychology, 34, 356–367.
Sleeman, D. (1986). The challenges of teaching computer programming. Communications of the ACM, 29, 840–841.
Zimmerer, V., & Varley, R. A. (2010). Recursion in severe agrammatism. In H. van der Hulst (Ed.), Recursion and human language (pp. 393–405). Berlin: De Gruyter.