Attention, Perception, & Psychophysics, Volume 79, Issue 3, pp 989–999

Syntax response–space biases for hands, not feet

Abstract

A number of studies have shown a relationship between comprehending transitive sentences and spatial processing (e.g., Chatterjee, Trends in Cognitive Sciences, 5(2), 55–61, 2001), in which there is an advantage for responding to images that depict the agent of an action to the left of the patient. Boiteau and Almor (Cognitive Science, 2016) demonstrated that a similar effect is found for pure linguistic information, such that after reading a sentence, identifying a word that had appeared earlier as the agent is faster on the left than on the right, but only for left-hand responses. In this study, we examined the role of lateralized manual motor processes in this effect and found that such spatial effects occur even when only the responses, but not the stimuli, have a spatial dimension. In support of the specific role of manual motor processes, we found a response-space effect with manual but not with pedal responses. Our results support an effector-specific (as opposed to an effector-general) hypothesis: Manual responses showed spatial effects compatible with those in previous research, whereas pedal responses did not. This is consistent with theoretical and empirical work arguing that the hands are generally involved with, and perhaps more sensitive to, linguistic information.

Keywords

Laterality · Transitive verbs · Syntax · Spatial cognition · Motor effectors

A growing number of empirical studies (Bergen, Lindsay, Matlock, & Narayanan, 2010; Chatterjee, Southwood, & Basilico, 1999; Maass & Russo, 2003; Richardson, Spivey, Barsalou, & McRae, 2003; Stanfield & Zwaan, 2001) and theoretical accounts (Chatterjee, 2001; Levinson, 1996; Talmy, 2000) have shown or argued that language and spatial processing interact a great deal. A number of these studies involved processing explicitly spatial language, whereas others have shown that transitive sentences (i.e., sentences in which an agent directs action toward a patient) involve either explicit or implicit construction of a spatial frame that consists of the agent on the left side of space and the patient on the right. Initial evidence for this effect was established through picture recognition and drawing tasks (Chatterjee et al., 1999). Later evidence showed that this effect results from a combination of the orthographic direction of one’s native language (where the effect is reversed in Arabic speakers) and a general advantage for responding to images with left-to-right moving action that is independent of language (Maass & Russo, 2003). Recently, Boiteau and Almor (2016) showed that a left-side advantage for agents (in an American-English-speaking population) occurs in purely verbal tasks and is therefore not restricted to cases that involve pictorial representation of linguistic descriptions. In the present article, we ask whether this effect is part of a larger phenomenon characterizing lateral biases in nonspatial cognition or whether this effect is specific to language. Also, we ask whether this effect is tied to a specific motor modality or if it is a more general, amodal, spatial representation. 
At the same time, this investigation will provide some insight into the utility of a syntax–space connection—for example, it could be part of a primitive neural system that manual co-speech gestures tap into—and show that the heart of the issue may lie in bilateral linguistic processing.

Boiteau and Almor (2016) had right-handed participants read transitive sentences (e.g., Mary pushed Bill on the playground.) presented one word at a time at the center of a computer monitor and then respond to a probe word (e.g., the agent or patient of the sentence they just read) appearing on the left or right side of the screen. Across several different experiments, the authors found that agents were responded to faster than patients on the left side of the screen, and patients faster than agents on the right side, whereas other words in the sentence showed no spatial bias, an effect that they dubbed the “syntax–space effect.” Interestingly, this effect only occurred with left-hand responses, and did not occur at all when participants responded vocally. This effect could be, in part, related to findings regarding language lateralization in the brain. Although language processing has been shown to be left-lateralized, it is not entirely so (Bozic, Fonteneau, Su, & Marslen-Wilson, 2015). Thus, it is possible that successful comprehension of a transitive sentence involves both left- and right-lateralized regions. The right-hemispheric regions involved in linguistic processing may be more prone to influence from nonlinguistic spatial processes (e.g., from right superior parietal regions), which would be more apparent when the left hand (and right hemisphere) is used to make some language-related judgment (Zwaan & Yaxley, 2003). This explanation may help us understand why a syntax–space effect is generally only found on the left hand and may provide evidence as to the syntactic competence of the right hemisphere. Along these lines, it is uncertain whether the feet, which are also lateralized but less involved linguistically (see below), and in particular the left foot, would show any kind of syntax–space effect if it were responding to word probes.

Thus, two important questions remain when considering earlier findings. First, would the syntax–space effect be present when only the responses are lateralized and not the test probes? Without demonstrating this, one could argue that the various syntactic-spatial interactions that have been found are effects produced by manipulating the spatial dimensions of the stimuli. Second, is this effect only found through manual responding or does any lateralized motor effector produce similar results? Answering this question will help us understand why such an effect might exist. As we will show below, the hands and feet differ in very important ways in their relation to language and other forms of cognition. In considering differences between hand and foot responses we will entertain two competing hypotheses: effector general (hands and feet evoking the same spatial effect) and effector specific (hands, not feet, showing a spatial effect).

First, we know that the hands are central to linguistic processing and development. For example, some researchers have proposed connections between praxis and language (e.g., Kobayashi & Ugawa, 2013; Papagno, Della Sala, & Basso, 1993). Granito, Scorolli, and Borghi (2015) found a benefit for responding to concrete words with the hands and abstract relations with the mouth. Davoli, Du, Montana, Garverick, and Abrams (2010) showed that semantic judgments made toward stimuli near the hands are degraded. Manual gesture is an essential part of language, to the point that it may facilitate (Jamalian & Tversky, 2012; Krauss, 1998) or interfere with (Gunter, Weinbrenner, & Holle, 2015; Rauscher, Krauss, & Chen, 1996) language processing in a variety of capacities. Indeed, the use of deictic pointing aids language acquisition (Kalagher & Yu, 2006), as well as the tracking of referents in a discourse (So, Kita, & Goldin-Meadow, 2009). Spontaneous gestures accompanying speech reflect the prosody of the speaker, indicating that speech and manual gesture co-occur naturally (Guellaï, Langus, & Nespor, 2014). Additionally, in the course of human development, a stage of gestural babbling precedes vocal babbling, and simple gestures may be used to communicate preverbally (Petitto & Marentette, 1991; Volterra & Iverson, 1995). Sign languages are rich linguistic systems, nearly indistinguishable neuroanatomically from spoken languages (Emmorey et al., 2003). The hands are also an important tool for writing and typing, critical survival skills in a literate society (Corballis, 2011). Greenfield (1991) showed that the development of syntax mirrors the ability to construct hierarchically organized patterns with the hands. Finally, from an evolutionary perspective, some have suggested that the faculty of language evolved out of a gestural protolanguage (Rizzolatti & Arbib, 1998).
Given these wide-ranging interactions, we might also expect to find a privileged connection between the hands and syntactic processes.

Regarding the hands and feet in psycholinguistic research, there are important differences in terms of processing, although previous studies have mostly looked at embodied semantic processing. For example, Buccino et al. (2005) found that hand- and foot-related verbs differentially affect motor evoked potentials measured in the hands and feet. Scorolli and Borghi (2007) found that processing sentences describing hand and foot actions facilitates manual and pedal responses, respectively. Hauk, Johnsrude, and Pulvermüller (2004) demonstrated that action verbs relating to the face, legs, and arms activate the same neural regions involved in performing actions with these parts of the body. Another study showed that damage to both hand and leg motor areas can result in linguistic processing difficulties associated with action verbs (Neininger & Pulvermüller, 2003). These studies at least suggest that manual and pedal responding could have differential effects with regard to semantic processing in a way that depends on the content of the sentence, but they do not make any specific predictions about whether the feet would produce a syntax–space effect.

Finally, outside of language research, most studies have found only minimal differences between responding with the hands and feet. Two effects of interest here are the Simon effect (Simon & Rudell, 1967) and the spatial-numerical association of response codes (SNARC: the association of small numbers with the left and large numbers with the right; Dehaene, Bossini, & Giraux, 1993). The Simon effect occurs when a stimulus and response share some task-irrelevant dimension (i.e., location), thereby facilitating responding. Although much of the Simon-effect literature has investigated this effect through the use of manual responses, it is generally accepted that it is not the effector that matters but the spatial positioning of the response (Hommel, 2011; Simon, 1969; Wallace, 1971). Nicoletti and Umiltà (1985) found that lateralized stimuli elicited Simon effects with manual and pedal responses, with the latter about 20 ms slower than the former overall. Similarly, the majority of studies investigating the SNARC effect have involved responding with the hands (e.g., Calabria & Rossetti, 2005; Dehaene, Bossini, & Giraux, 1993; Fischer, 2003; Wood, Nuerk, & Willmes, 2006), usually in response to centrally presented numbers (unlike in the syntax–space tasks, in which probes were presented to the left or the right). However, similar to the Simon effect, the SNARC is found whether responding with the hands or the feet (Schwarz & Müller, 2006).
Presumably (though not necessarily), related effects could also be found using the feet in other phenomena in which space has been found to be related to nonspatial dimensions: for example, stimulus duration (i.e., shorter–left, longer–right, an effect known as the spatial–temporal association of response codes, or STEARC; Ishihara, Keller, Rossetti, & Prinz, 2008; Vallesi, Binns, & Shallice, 2008); nonnumerical regular sequences such as the alphabet (i.e., earlier–left, later–right; Gevers, Reynvoet, & Fias, 2003); the order of stimuli presented in working memory tasks (van Dijck & Fias, 2011); and the linguistic markedness of stimuli, in which more and less linguistically marked stimuli are associated with "left" and "right," respectively (i.e., the MARC effect; Nuerk, Iversen, & Willmes, 2004; Roettger & Domahs, 2015). The MARC effect may be of particular interest to the present investigation, given that it is a linguistic effect that seems to be superficially related to the SNARC effect.

In summary, transitive verbs have been shown to interact with spatial processing, seemingly generating a left-to-right frame of action (Chatterjee, 2001), which is partly but not entirely related to orthographic direction (Maass & Russo, 2003). Moreover, this effect is absent with vocal responses, appears to be more sensitive to left-hand responses, and only occurs for the arguments of the verb (i.e., agent and patient) and not other words in the sentence (Boiteau & Almor, 2016). Finally, there are some striking differences between the syntax–space and SNARC effects. If the syntax–space effect were simply a linguistic instantiation of the SNARC, we would expect both vocal responses and other words in the sentence to show spatial interactivity. Testing other modes of response is then a crucial way of distinguishing these two superficially related effects. If the syntax–space effect were to behave like the SNARC, with both hands and feet producing similar results, it would suggest that the spatial frame accessed during language processing is either text-based and epiphenomenal, a curiosity arising out of the visuospatial medium of reading, or a general left-versus-right motor processing effect. However, if the SNARC and syntax–space effects were not to behave in the same ways—that is, if only the hands produced a syntax–space effect—it would suggest that when the hands are involved in a linguistic task, they are facilitated or inhibited by the spatial frames arising from language comprehension. Thus, the purpose of this study was first to test whether the syntax–space effect observed previously would be found when the responses (but not the stimuli) have a lateral spatial code (Exp. 1; henceforth, E1). If the syntax–space effect stems from a similar mechanism as the SNARC (e.g., a mental number or text line), then we would expect to observe a spatial response effect for agents and patients.
However, we also might expect the same result regardless of whether or not the SNARC and syntax–space effects are related, since Boiteau and Almor (2016) also observed interactions between probe words and response hand that occurred independently of the stimulus space (however, Boiteau and Almor never presented probes in a central position, making this result difficult to interpret). Thus, we hypothesized that a spatial–linguistic interaction would occur, similar to the one observed by Boiteau and Almor, except for responses rather than stimuli. Second, in light of the differences we have reviewed between the SNARC and syntax–space effects (and despite the similarities between them), we predicted that we would find evidence supporting the effector-specific hypothesis, in which only the hands, and not the feet, would show a spatial effect. Finding such an interaction would help differentiate the syntax–space and SNARC effects and would provide evidence that the spatial frames generated by syntax are tied to (and perhaps result from) manual processing, reinforcing the view that the syntax of language, and not just its meaning, is grounded in spatial and motor processes.

Experiments 1 and 2

The main purpose of these experiments was to test whether we would observe a syntax–space effect when responses, but not stimuli, have a spatial dimension (E1 and E2) and whether hands (E1) and feet (E2) elicit the same type of effect. Participants read transitive sentences in the active and passive voice and generated responses with their hands or feet using left- and right-lateralized buttons and pedals. Boiteau and Almor (2016) found that other words in the sentence besides the agent and patient did not show a spatial bias and that passivizing a sentence (i.e., Bill was pushed by Mary on the playground.) eliminated the spatial effect. This may be because the increased syntactic complexity of passive over active sentences leads to an increase in the activation of left hemispheric language processing regions (Mack, Meltzer-Asscher, Barbieri, & Thompson, 2013). We included this condition in E1 and E2 to replicate Boiteau and Almor's (2016) Experiment 3 as closely as possible, minus the spatial dimension of the probe words, and in addition to test whether this active–passive difference is still present when only responses are lateralized. Therefore, in the present experiments we predicted that a spatial effect would be found for the hands but not for the feet, in the form of a four-way interaction between effector type, grammatical voice, probe word, and response lateralization, characterized by a three-way interaction for the hands but not for the feet. Finally, since Boiteau and Almor found an advantage for negative responses (i.e., to probe words not appearing in the previous sentence) on the left hand over the right hand—an instantiation of the MARC effect, in which more linguistically marked stimuli are responded to faster on the left than on the right (Nuerk et al., 2004)—we predicted a similar finding here.
No study has previously investigated the impact of effector type on the MARC effect; however, given that the MARC effect is argued to arise from a match in linguistic markedness between stimulus and response, we predicted that effector type should not matter and that both hands and feet would show the left-side/negative-response bias observed previously. Although the MARC effect is not the main focus of this article, finding or not finding one in these two experiments would add valuable insight into the nature of that effect. For example, if it were found to operate on both manual and pedal responses, we could safely conclude that it is a matter of linguistic markedness. However, finding it only with the hands might suggest that this linguistic-markedness effect is tied to manual co-speech gestures, in which the two hands may be used to stress contrasting alternatives.

Method

Participants

Ninety-one participants (74 female, 17 male; mean age = 19.11 years) took part in E1, and 84 participants (66 female, 18 male; mean age = 19.62 years) took part in E2 for extra credit in a psychology class. Although all participants reported being right-handed, we acquired detailed hand (Edinburgh Inventory; Oldfield, 1971) and foot (Waterloo Footedness Questionnaire; Elias, Bryden, & Bulman-Fleming, 1998) dominance data from only 83 (68 female, 15 male) of the E1 participants and 82 (65 female, 17 male) of the E2 participants. Thus, only the data from these participants were used in the analyses below. The average scores on the Edinburgh Inventory were .77 for E1 and .87 for E2. For the foot dominance data, the majority of participants were right-footed (E1, M = .55; E2, M = .70). All participants were native speakers of English with normal or corrected-to-normal visual acuity (self-reported). Participants gave their informed consent to participate in this research under the guidelines of the University of South Carolina Institutional Review Board.

E1 procedure

In this experiment, participants read sentences, judged probe words, and answered comprehension questions. The room was darkened, and participants placed their heads in a chinrest, centering their field of vision on the middle of the computer monitor. The experiment was run in E-Prime 2.0. Each participant ran through 160 experimental trials, divided into two blocks of 80 trials. Ten practice trials preceded each block. Participants could not proceed into the experimental block until they had achieved 65% accuracy on these practice trials. During the practice blocks, participants received feedback for their responses, but not during the experimental blocks. Figure 1 shows a sample trial. Each trial began with a white fixation cross presented in the center of a black screen for 1 s. Then a transitive sentence appeared one word at a time (500 ms/word) in the center of the screen in white, size-18 Courier New font. Transitive sentences were in either the active or the passive voice, with 80 active and 80 passive sentences per participant. No sentences were repeated within participants, but across participants the sentences alternated voice. The item order was randomized separately for each participant. The sentences consisted of proper-named agents and patients engaged in a variety of activities (e.g., kissing, chasing). Each sentence included either two male characters, two female characters, a female agent and male patient, or a male agent and female patient, with equal numbers of gender combination types across items. The lengths of agent and patient names were equated (agent: M = 5.37, range: 3–9; patient: M = 5.35, range: 3–10), t(159) = 0.29, p = .77. All of the items used can be found in the Appendix of Boiteau and Almor (2016). Following the transitive sentence, a second fixation cross appeared, this one presented for 1,500 ms.
Next, a probe word was presented in the center of the screen either until the participant responded or until 4 s had passed (whichever came first). The probe was one of the following: a present probe, one of the words that had appeared in the sentence the participant had just read, or a false probe, a proper name that was not used in any of the actual items, with equal numbers of present and false probes shown per block and session. Among the present probes, half were the agent of the sentence and half the patient. Participants were instructed to respond "yes" or "no" with either a left or a right button press on a Psychology Software Tools (PST) Serial Response Box, using the index fingers of their respective hands, to judge whether or not the probe word had appeared in the sentence they had just read. The only difference between the two 80-item blocks was the yes/no button mapping, with the order of left–yes/right–no and left–no/right–yes mappings being counterbalanced across participants. Following the probe word, a third fixation was presented (1,500 ms), and then a yes/no comprehension question, worded in the active or passive voice, was presented for 4,500 ms. The purpose of these questions was to ensure that participants were reading to understand and not just treating the sentences as lists of words. Finally, a brief rest (randomly ranging from 2,000 to 4,000 ms) followed this comprehension question, during which a fixation cross remained on the screen. We recorded accuracy and response times (RTs) for probe and comprehension question responses, but focused our analyses on log-transformed RTs.
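The length-matching check reported above amounts to a paired t-test on agent and patient name lengths across items. A minimal sketch, using hypothetical character counts rather than the authors' actual 160 items (which appear in the Appendix of Boiteau and Almor, 2016):

```python
# Paired t-test on per-item name lengths; the values below are
# illustrative stand-ins, not the real stimuli.
from scipy.stats import ttest_rel

agent_lengths = [5, 6, 4, 5, 7, 5, 6, 4]
patient_lengths = [5, 5, 6, 4, 6, 6, 5, 5]

# A nonsignificant result indicates the two argument positions
# are matched on length.
t, p = ttest_rel(agent_lengths, patient_lengths)
```

With these illustrative values the mean difference is zero; the paper reports t(159) = 0.29, p = .77 for the actual items.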
Fig. 1

Schematic of sample trials from all experiments

E2 procedure

The procedure was identical to that of E1, except that participants responded by pressing a left or right PST foot pedal connected to the Serial Response Box with their left or right foot, with the foot used for "yes" and "no" responses varying by block, as before.

Thus, in the analyses of these experiments (which are reported together below) the factors of interest were Voice (active, passive), Word (agent, patient), Lateralization of Effector (left, right), and Effector Type (hand, foot).

Results

Main analysis

Following the procedures from Boiteau and Almor (2016), we removed all trials with incorrect probe responses (2.91%), those with incorrect responses to the comprehension questions (15.98%), and those with extreme RTs (1.62%; defined as more than two standard deviations from the mean)—a total of 20.52% of trials.
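The exclusion pipeline can be sketched as follows; this is a minimal illustration assuming a hypothetical list-of-dicts trial format (the field names are ours, not from the authors' scripts):

```python
# Sketch of the trial-exclusion steps described above: drop incorrect
# probe responses, drop incorrect comprehension responses, trim RTs
# beyond two standard deviations of the mean, then log-transform.
import math

def exclude_trials(trials):
    # 1. Drop trials with incorrect probe responses.
    kept = [t for t in trials if t["probe_correct"]]
    # 2. Drop trials with incorrect comprehension-question responses.
    kept = [t for t in kept if t["comp_correct"]]
    # 3. Drop extreme RTs: more than two standard deviations from the mean.
    rts = [t["rt"] for t in kept]
    mean = sum(rts) / len(rts)
    sd = math.sqrt(sum((r - mean) ** 2 for r in rts) / (len(rts) - 1))
    kept = [t for t in kept if abs(t["rt"] - mean) <= 2 * sd]
    # Analyses were then run on log-transformed RTs.
    for t in kept:
        t["log_rt"] = math.log(t["rt"])
    return kept
```

Note that the trimming bounds are computed after the accuracy-based exclusions, so an extreme RT on an incorrect trial does not distort the mean and standard deviation.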

The data analyses were conducted in the lme4 package (Bates, Maechler, & Bolker, 2011) in R (version 3.1.1; R Development Core Team, 2012), using mixed-effects modeling with the factors Voice (active vs. passive), Word (agent vs. patient), Lateralization (left vs. right), and Effector (hand vs. foot) as fixed effects, as well as all possible interactions between the four factors, and participants and items as random effects (Baayen, Davidson, & Bates, 2008; Barr, Levy, Scheepers, & Tily, 2013). This round of analyses focused only on correct "yes" responses. We compared the full model to a model with the four-way interaction removed and found a significant difference, χ 2(1) = 4.09, p = .043, generally in line with our hypothesis. We next explored the exact statistical basis of this interaction. The full model results are reported in Table 1 and shown in Fig. 2.
Table 1

Fixed effects for the max model

β     Condition                          Est.     Std. Error   t         p
β0    (Intercept)                        6.813    0.024        286.74    <.001***
β1    Passive                            0.06     0.015        3.988     <.001***
β2    Patient                            0.061    0.015        3.994     <.001***
β3    Right                              –0.123   0.015        –7.967    <.001***
β4    Hand                               –0.159   0.033        –4.771    <.001***
β5    Passive × Patient                  –0.141   0.021        –6.63     <.001***
β6    Passive × Right                    0.013    0.021        0.603     .546
β7    Patient × Right                    –0.015   0.022        –0.669    .504
β8    Passive × Hand                     0.029    0.021        1.335     .182
β9    Patient × Hand                     0.031    0.021        1.446     .148
β10   Right × Hand                       0.079    0.021        3.726     <.001***
β11   Passive × Patient × Right          0.002    0.03         0.064     .949
β12   Passive × Patient × Hand           –0.066   0.03         –2.203    .028*
β13   Passive × Right × Hand             –0.031   0.03         –1.018    .309
β14   Patient × Right × Hand             –0.036   0.03         –1.215    .224
β15   Passive × Patient × Right × Hand   0.084    0.042        1.988     .047*

The baseline level for the Word factor is agent, for Voice is active, for Lateralization is left, and for Effector is foot. * p < .05, *** p < .001

Fig. 2

Graphical results of Experiments 1 and 2, showing the interaction between effector, voice, lateralization, and word. Bars represent SEs

To better understand the four-way interaction, we divided the datasets into E1 and E2 groups and tested the three-way interactions between lateralization, voice, and word separately. For E1 (the hand group) the interaction was significant, χ 2(1) = 9.34, p = .002, whereas for E2 (the foot group) it was not, χ 2(1) < 1, p = .99. This supports our main hypothesis that the syntax–space effect would be found for manual and not pedal responses.
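Each of these model comparisons (removing the highest-order interaction and comparing against the larger model) rests on a likelihood-ratio test, as computed by lme4's anova() in R. A minimal sketch of the underlying calculation, using made-up log-likelihoods rather than fitted models:

```python
# Likelihood-ratio test for two nested models. Under the null, twice the
# log-likelihood difference is chi-square distributed with df equal to
# the number of parameters dropped (here, 1 interaction term).
from scipy.stats import chi2

def lr_test(ll_reduced, ll_full, df_diff=1):
    """Compare nested models via a likelihood-ratio test."""
    stat = 2.0 * (ll_full - ll_reduced)  # test statistic
    p = chi2.sf(stat, df_diff)           # upper-tail probability
    return stat, p

# Hypothetical log-likelihoods chosen so the statistic matches the form
# of the chi^2(1) = 4.09 comparison reported for the four-way interaction:
stat, p = lr_test(ll_reduced=-1234.5, ll_full=-1232.455)
```

The log-likelihoods here are illustrative; in the reported analyses they came from lme4 model fits.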

Hand group

Next, we explored the three-way interaction in the E1 group by dividing the data according to the Voice factor into active and passive datasets and testing the interaction between lateralization and word in each, following the procedure described above (removing the highest-order interaction and comparing against the full model). We found a significant interaction for the active set, χ 2(1) = 6.28, p = .01, whereas the interaction for the passive set was only marginal, χ 2(1) = 3, p = .08. This finding is similar to that reported in Experiment 3 of Boiteau and Almor (2016), in which the spatial effect was reduced in the passive voice but present in the active voice.
  • Active: The data of the hand group responding to items in the active voice were analyzed further by creating left- and right-response groups and measuring the main effect of word separately in each. For the left group, we observed a highly significant effect of word, with responses being faster to agents than to patients, χ 2(1) = 27.73, p < .001, whereas for the right group the difference was only marginally significant, χ 2(1) = 2.94, p = .09.

  • Passive: The data of the hand group responding to passive-voice items were analyzed to test for the main effects of lateralization and word, since the interaction was not significant. Both word, χ 2(1) = 89.29, p < .001, and lateralization, χ 2(1) = 13.55, p < .001, were significant, with faster responses to the patients (who were in focus in the passive sentences) and with the right hand.

Foot group

Returning to the foot-response group, since we did not find a three-way interaction between lateralization, voice, and word, we tested for lower-order effects, again removing each interaction in turn and comparing it with the next-largest model. No interaction emerged between probe word and lateralization, χ 2(1) = 0.49, p = .498, nor was there an interaction between voice and lateralization, χ 2(1) = 0.76, p = .38. However, the interaction between word and voice was significant, χ 2(1) = 79.53, p < .001. We also found a main effect of lateralization, with right-foot responses being faster than left-foot responses, χ 2(1) = 193.73, p < .001.

The Voice × Word interaction was characterized by faster agent than patient responses for the active voice, χ 2(1) = 22.9, p < .001, and faster patient than agent responses for the passive voice, χ 2(1) = 55.62, p < .001. This is in essence a replication of Ferreira (2003) with pedal responses.

False probes

Finally, we analyzed responses to the false probes (“no” responses) separately, to test whether the factors Voice, Lateralization, and Effector and all possible interactions between these three variables affected probe words unassociated with any linguistic context. The three-way interaction was not significant, χ 2(1) = 0.63, p = .43 (see Table 2 for the max model), nor was the Voice × Lateralization interaction, χ 2(1) = 0.11, p = .74, nor the Voice × Effector interaction, χ 2(1) = 0.15, p = .7. However, the interaction between effector and lateralization was significant, χ 2(1) = 5.79, p = .02 (see Fig. 3). Finally, the main effect of voice was also not significant, χ 2(1) = 0.01, p = .91.
Table 2

Fixed effects for the max model of false-probe responses

β     Condition                Est.     Std. Error   t         p
β0    (Intercept)              6.773    0.022        302.068   <.001***
β1    Passive                  0.0001   0.01         0.088     .93
β2    Right                    –0.002   0.01         –0.152    .879
β3    Hand                     –0.142   0.031        –4.564    <.001***
β4    Passive × Right          –0.005   0.013        –0.334    .739
β5    Passive × Hand           –0.004   0.013        –0.278    .781
β6    Right × Hand             0.015    0.013        1.156     .248
β7    Passive × Right × Hand   0.015    0.019        0.793     .428

The baseline level for the Voice factor is active, for Lateralization is left, and for Effector is foot. *** p < .001

Fig. 3

Graphical results of Experiments 1 and 2, showing the interaction between effector and lateralization for false-probe responses. Colored bars represent SEs

To get a better understanding of the interaction between lateralization and effector, we divided the datasets into E1 (hand) and E2 (foot) groups, and for each group tested the main effect of lateralization. For the hand group, the effect was significant, χ 2(1) = 6.43, p = .011, with faster responses for the left than for the right hand (i.e., a MARC effect; Nuerk et al., 2004). However, the effect was not significant for the foot group, χ 2(1) < 1, p = .95 (i.e., the absence of a MARC effect).

Discussion

In this article, we investigated the basis of reports that transitive actions and sentences activate a spatial representation in which agents are positioned to the left of patients (Boiteau & Almor, 2016; Chatterjee, 2001; Chatterjee et al., 1999; Maass & Russo, 2003). For the first time, we showed that such a spatial effect can be found when the stimuli do not possess any spatial dimension of their own. This in itself is an important finding, showing that the syntax–space effect is not task-contingent, arising solely from manipulating the position of a word or figure in an image. Moreover, we seem to be tapping into an abstract spatial representation that is tied to manual processing, as opposed to a low-level (perhaps textual) representation. We also replicated findings from Boiteau and Almor (2016), showing that the spatial effect is absent in the passive voice as opposed to the active, which has consistently shown a left-hand (or left-side) agent advantage. Interestingly, we showed that this syntax–space effect is largely dependent on the use of the hands, and that using another lateralized motor effector (i.e., the feet) does not produce a spatial effect. We also found that the MARC effect (in which negative responses are faster on the left side) is present only with manual responses, and not with the feet. Finally, we found that for foot responses, although no interaction with effector lateralization occurred, there was an interaction between probe word and grammatical voice, replicating the findings of Ferreira (2003), who found agents to be more accessible in the mental model following active sentences, and patients to be more accessible following passives.

The primary finding from these experiments (i.e., a syntax–space effect for the hands but not the feet) further distinguishes this effect from a simple linguistic instantiation of the SNARC effect and from other interactions between magnitude and space (Bueti & Walsh, 2009; Hubbard, Piazza, Pinel, & Dehaene, 2005). Our findings (in conjunction with those of Boiteau & Almor, 2016) show that the syntax–space effect arises during left-hand responses. This raises the possibility that the syntax–space effect results from neural laterality, wherein the right hemisphere may be more sensitive to spatial information than the left (Zwaan & Yaxley, 2003), and furthermore, that these representations are more easily activated by manual responses than by other lateralized motor effectors. However, this right-hemispheric hypothesis must remain speculative, since we did not use a method (such as neuroimaging or a divided-visual-field paradigm) that could confirm it. Note also that this explanation is somewhat at odds with the left-to-right advantage (dubbed the Chatterjee effect) found in processing scenes, which has been attributed to the left hemisphere's specialization for processing motion trajectories that move from left to right (Chatterjee, 2001). This discrepancy suggests either that the right-hemisphere-processing explanation is incorrect or that the syntax–space and Chatterjee effects are distinct phenomena.

One possible objection to our interpretation is that, judging from Fig. 2, our results could merely reflect a floor effect on the right hand that masks what would otherwise be a larger difference in RTs between responses to agent and patient probes. Although the results of the present study cannot rule out this objection, the results from our earlier research challenge its viability: in certain experiments, Boiteau and Almor (2016) found the left hand responding faster than the right, so right-hand responses in tasks similar to the ones we used here do not generally show a floor effect. One way to address this concern more directly would be to test participants with different degrees of handedness; although we did include this variable in our analyses (and found no effect of participants' Edinburgh inventory scores), we tested only right-handed participants. Testing a left-handed (and left-footed) population might yield a different pattern of results, or would at least go some way toward addressing this critique. However, separating participants on this variable might introduce other confounds. For example, Annett (1992) found superior performance of left-handed males over right-handed males on the Rey–Osterrieth complex figure test, which taps into visuospatial and motor processing. Given our interest in spatial processing, this baseline difference in visuospatial and motor processing could be problematic. More generally, according to one theory of handedness (i.e., the right shift theory; Annett, 2002), right-hemispheric (and, thus, left-hand) performance weakens in right-handers during ontogenetic development, whereas no corresponding weakening of left-hemispheric performance occurs in left-handers.

Finally, our finding that a spatial effect exists for actives but not passives suggests several different, though not necessarily mutually exclusive, explanations: (1) Participants generate surface-structure (in which word order is important) and deep-structure (in which word order is irrelevant) spatial schemata with agents and patients to the left or right, which align in the case of active sentences and mismatch in the case of passives. (2) Syntactic roles (subject vs. object), not semantic roles (agent vs. patient), carry the most weight in generating a spatial schema. (3) Processing passive sentences is more left-lateralized neurally than is processing of active sentences (Mack et al., 2013). (4) The greater cognitive demands of passive sentences prevent a spatial schema from forming, suggesting that such schema formation is not a necessary aspect of linguistic comprehension. Future studies will have to tease apart these different possibilities.

It is also important to note that in our results we found another example of how language and hand are connected: a MARC effect (which so far has only been demonstrated with manual responses; Nuerk et al., 2004; Roettger & Domahs, 2015) in the hand group, but not in the foot group. This is in stark contrast with Schwarz and Müller (2006), who found equally large SNARC effects on the hands and feet. One possible explanation for this discrepancy could be tied to manual gesture, in which the hands may be employed to provide a visual aid to the audience (and perhaps even a spatial aid to the speaker) and to demarcate contrasting alternatives such as good and bad, right and wrong, or even and odd. Indeed, people tend to gesture even when no audience is present to read the visual cues (e.g., while talking on the phone). The MARC effect may thus arise out of, or at least be reinforced by, the use of the hands in making such gestures. Because the foot serves no such supportive role in language, associations with space might then be purely manual-centric. This same argument might also be applied to the syntax–space effect. Pointing gestures and indications of motion trajectory are used by signers and speakers alike to facilitate production and comprehension (Emmorey, 1999; Kalagher & Yu, 2006; Liddell, 2003; McNeill & Pedelty, 1995; So et al., 2009). When a discourse contains multiple entities (such as an agent and a patient) that need to be stored for subsequent reference, regions of the brain involved in spatial processing may be recruited to maintain the discourse model (Almor, Smith, Bonhila, Fridriksson, & Rorden, 2007; Boiteau, Bowers, Nair, & Almor, 2014). When accessing such entities, manual motor effectors would in turn be primed. From this perspective, the spatial representation generated in right parietal cortex might actually serve a communicative function, priming manual gesture and thus creating a visuospatial approximation of a verbal description. 
Along similar lines, as one of the anonymous reviewers of our manuscript pointed out, the sentences we used were presented only in the past tense. Other researchers have found that the temporal reference frame influences the types of gestures a speaker will produce (e.g., Núñez, Cooperrider, Doan, & Wassmann, 2012). Thus, it is possible that the temporal frame of our stimuli incidentally created a shift in spatial focus, which in turn produced the results we saw, with reductions in left-hand performance to the patient probes. This would also be a fruitful area for future research, helping us better understand the nature of the spatial reference frame produced by comprehending transitive actions.

In conclusion, we found evidence for a spatial interaction with transitive sentences that occurs only for the hands, not the feet. Moreover, the feet seem especially indifferent to spatial–linguistic interactions (i.e., no MARC effect). This is likely related to the specific dual role that the hands play in manipulating objects in space and gesturing during speech. On the other hand, pedal responses do show sensitivity to well-documented effects such as the accessibility of agents and patients following active and passive sentences (Ferreira, 2003). This could reflect the existence of an amodal representation of the sentence in memory that includes semantic roles, and thus affects processing independently of the modality of the response. The discrepancy between these findings may give us a clue as to how various language–space–hand interactions may arise. Specifically, these interactions may be related to the larger difference in neuroanatomical space separating the foot and mouth versus that separating the hand and mouth (Hauk, Johnsrude, & Pulvermüller, 2004), in addition to the many other hand–language connections that we mentioned in the introduction: the link between praxis and language (e.g., Kobayashi & Ugawa, 2013), the facilitation (Krauss, 1998) or interference (Rauscher, Krauss, & Chen, 1996) found between gesture and language, and the manual actions of writing and texting (Corballis, 2011). These examples in conjunction with the findings from our study demonstrate that, in contrast to the more general, and likely amodal, representations associated with language comprehension, right-lateralized manual representations and processes may be specifically tied to the syntactic representation of the sentence. 
Although the relative preservation of language abilities following right-hemisphere damage shows that these processes are not critical for successful language processing, our findings nevertheless suggest that such processes could play a role in many unimpaired instances of language processing.

Footnotes

  1. A score of 1 indicates a strong right hand or foot preference, and –1, a strong left preference.

  2. We log-transformed the data because the RTs had a high positive skew, G1 = 1.35.

  3. The advantage of mixed-effects modeling over traditional analysis of variance (ANOVA) is that a single analysis can include both participants and items as crossed random effects, whereas ANOVA requires two separate analyses, one by participants and one by items.

  4. The exact model was as follows: log(RT) ~ Lateralization × Word × Voice × Effector + (1 | Participant) + (1 | Item). Attempting to include random slopes by participants and items resulted in models that failed to converge.

  5. We also tested whether hand or foot dominance modulated the four-way interaction, but found no significant interactions. These analyses are available upon request from the first author.

  6. One might object that we have actually provided evidence for a temporal interaction that incidentally activates a spatial representation, similar to the STEARC effect mentioned in the introduction. Note, however, that Boiteau and Almor (2016) found that position in the sentence is not the main driver of this effect, since the final word in the sentence (which should have a strong rightward bias) showed no interaction with space. By the same temporal–spatial logic, passive sentences should have elicited a stronger STEARC effect than actives, because the temporal gap between the patient and agent is larger in passives, but they did not.
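Footnotes 2–4 describe the analysis pipeline: log-transforming positively skewed RTs and fitting a mixed-effects model with crossed random intercepts for participants and items. The original analysis used R's lme4; the sketch below is a Python approximation on simulated data (statsmodels expresses crossed random intercepts through variance components). All variable names, sample sizes, and effect magnitudes here are illustrative assumptions, not values from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate a simplified design: one two-level factor ("lat") crossed with
# hypothetical participants and items, with lognormal (positively skewed) RTs.
rows = []
for p in range(16):                       # hypothetical participants
    p_int = rng.normal(0, 0.10)           # by-participant random intercept
    for i in range(8):                    # hypothetical items
        i_int = rng.normal(0, 0.05)       # by-item random intercept
        for lat in (0, 1):                # 0 = left, 1 = right (illustrative)
            mu = 6.5 + 0.05 * lat + p_int + i_int
            rows.append({"participant": f"p{p}", "item": f"i{i}",
                         "lat": lat,
                         "rt": np.exp(mu + rng.normal(0, 0.10))})
df = pd.DataFrame(rows)

# Log-transform to reduce the positive skew of raw RTs (cf. footnote 2).
df["log_rt"] = np.log(df["rt"])

# Crossed random intercepts for participants and items (cf. footnotes 3-4),
# expressed as variance components within a single all-encompassing group.
df["all"] = 1
vc = {"participant": "0 + C(participant)", "item": "0 + C(item)"}
fit = smf.mixedlm("log_rt ~ lat", df, groups="all", vc_formula=vc).fit()
print(fit.params["lat"])  # fixed-effect estimate for the simulated factor
```

A fuller translation of footnote 4's model would replace `lat` with the four-way Lateralization × Word × Voice × Effector term; lme4's `(1 | Participant) + (1 | Item)` corresponds to the two variance components above.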

Notes

Author note

This work was partially supported by Grant Nos. NIH R21AG030445 and NSF BCS0822617. We thank the following members of the Language and Cognition Lab for their help with running participants: Christiana Keinath, Chris Mitropolous, Jay Lichtenburg, Sydney Darrow, and Dustin Smith.

References

  1. Almor, A., Smith, D. V., Bonhila, L., Fridriksson, J., & Rorden, C. (2007). What is in a name? Spatial brain circuits are used to track discourse references. NeuroReport, 18, 1215–1219. doi: 10.1097/WNR.0b013e32810f2e11
  2. Annett, M. (1992). Spatial ability in subgroups of left- and right-handers. British Journal of Psychology, 83, 495–515.
  3. Annett, M. (2002). Handedness and brain asymmetry: The right shift theory. New York, NY: Taylor & Francis.
  4. Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390–412. doi: 10.1016/j.jml.2007.12.005
  5. Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68, 255–278. doi: 10.1016/j.jml.2012.11.001
  6. Bates, D., Maechler, M., & Bolker, B. (2011). lme4: Linear mixed-effects models using S4 classes (R package version 0.999999-0). Retrieved from http://CRAN.R-project.org/package=lme4
  7. Bergen, B. K., Lindsay, S., Matlock, T., & Narayanan, S. (2010). Spatial and linguistic aspects of visual imagery comprehension. Cognitive Science, 31, 733–764.
  8. Boiteau, T. W., & Almor, A. (2016). Transitivity, space, and hand: The spatial grounding of syntax. Cognitive Science. Advance online publication. doi: 10.1111/cogs.12355
  9. Boiteau, T. W., Bowers, E., Nair, V., & Almor, A. (2014). The neural representation of plural discourse entities. Brain and Language, 137, 130–141.
  10. Bozic, M., Fonteneau, E., Su, L., & Marslen-Wilson, W. D. (2015). Grammatical analysis as a distributed neurobiological function. Human Brain Mapping, 36, 1190–1201. doi: 10.1002/hbm.22696
  11. Buccino, G., Riggio, L., Melli, G., Binkofski, F., Gallese, V., & Rizzolatti, G. (2005). Listening to action-related sentences modulates the activity of the motor system: A combined TMS and behavioral study. Cognitive Brain Research, 24, 355–363.
  12. Bueti, D., & Walsh, V. (2009). The parietal cortex and the representation of time, space, number and other magnitudes. Philosophical Transactions of the Royal Society B, 364, 1831–1840. doi: 10.1098/rstb.2009.0028
  13. Calabria, M., & Rossetti, Y. (2005). Interference between number processing and line bisection: A methodology. Neuropsychologia, 43, 779–783. doi: 10.1016/j.neuropsychologia.2004.06.027
  14. Chatterjee, A. (2001). Language and space: Some interactions. Trends in Cognitive Sciences, 5, 55–61.
  15. Chatterjee, A., Southwood, M. H., & Basilico, D. (1999). Verbs, events, and spatial representations. Neuropsychologia, 37, 395–402. doi: 10.1016/S0028-3932(98)00108-0
  16. Corballis, M. C. (2011). The recursive mind: The origins of human language, thought, and civilization. Princeton, NJ: Princeton University Press.
  17. Davoli, C. C., Du, F., Montana, J., Garverick, S., & Abrams, R. A. (2010). When meaning matters, look but don’t touch: The effects of posture on reading. Memory & Cognition, 38, 555–562. doi: 10.3758/MC.38.5.555
  18. Dehaene, S., Bossini, S., & Giraux, P. (1993). The mental representation of parity and number magnitude. Journal of Experimental Psychology: General, 122, 371–396. doi: 10.1037/0096-3445.122.3.371
  19. Elias, L. J., Bryden, M. P., & Bulman-Fleming, M. B. (1998). Footedness is a better predictor than is handedness of emotional lateralization. Neuropsychologia, 36, 37–43.
  20. Emmorey, K. (1999). The confluence of space and language in signed languages. In P. Bloom (Ed.), Language and space. Cambridge, MA: MIT Press.
  21. Emmorey, K., Grabowski, T., McCullough, S., Damasio, H., Ponto, L. L. B., Hichwa, R. D., & Bellugi, U. (2003). Neural systems underlying lexical retrieval for sign language. Neuropsychologia, 41, 85–95.
  22. Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47, 146–203.
  23. Fischer, M. H. (2003). Spatial representations in number processing—Evidence from a pointing task. Visual Cognition, 10, 493–508. doi: 10.1080/13506280244000186
  24. Gevers, W., Reynvoet, B., & Fias, W. (2003). The mental representation of ordinal sequences is spatially organized. Cognition, 87, B87–B95. doi: 10.1016/S0010-0277(02)00234-2
  25. Granito, C., Scorolli, C., Borghi, A. M., & Lidzba, K. (2015). Naming a LEGO world: The role of language in the acquisition of abstract concepts. PLOS ONE, 10(1), e0114615.
  26. Greenfield, P. M. (1991). Language, tools, and brain: The ontogeny and phylogeny of hierarchically organized sequential behavior. Behavioral and Brain Sciences, 14, 531–551.
  27. Guellaï, B., Langus, A., & Nespor, M. (2014). Prosody in the hands of the speaker. Frontiers in Psychology, 5, 70. doi: 10.3389/fpsyg.2014.00070
  28. Gunter, T. C., Weinbrenner, J. E. D., & Holle, H. (2015). Inconsistent use of gesture space during abstract pointing impairs language comprehension. Frontiers in Psychology, 6, 80. doi: 10.3389/fpsyg.2015.00080
  29. Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41, 301–307. doi: 10.1016/S0896-6273(03)00838-9
  30. Hommel, B. (2011). The Simon effect as tool and heuristic. Acta Psychologica, 136, 189–202. doi: 10.1016/j.actpsy.2010.04.011
  31. Hubbard, E. M., Piazza, M., Pinel, P., & Dehaene, S. (2005). Interactions between number and space in parietal cortex. Nature Reviews Neuroscience, 6, 435–448. doi: 10.1038/nrn1684
  32. Ishihara, M., Keller, P. E., Rossetti, Y., & Prinz, W. (2008). Horizontal spatial representations of time: Evidence for the STEARC effect. Cortex, 44, 454–461.
  33. Jamalian, A., & Tversky, B. (2012). Gestures alter thinking about time. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Building bridges across cognitive sciences around the world: Proceedings of the 34th Annual Meeting of the Cognitive Science Society (pp. 551–557). Austin, TX: Cognitive Science Society.
  34. Kalagher, H., & Yu, C. (2006). The effects of deictic pointing in word learning. Paper presented at the Fifth International Conference on Development and Learning, Bloomington, Indiana.
  35. Kobayashi, S., & Ugawa, Y. (2013). Relationships between aphasia and apraxia. Journal of Neurology & Translational Neuroscience, 2, 1028.
  36. Krauss, R. M. (1998). Why do we gesture when we speak? Current Directions in Psychological Science, 7, 54–60.
  37. Levinson, S. C. (1996). Language and space. Annual Review of Anthropology, 25, 353–382.
  38. Liddell, S. K. (2003). Grammar, gesture, and meaning in American Sign Language. New York, NY: Cambridge University Press.
  39. Maass, A., & Russo, A. (2003). Directional bias in the mental representation of spatial events: Nature or culture? Psychological Science, 14, 296–301. doi: 10.1111/1467-9280.14421
  40. Mack, J. E., Meltzer-Asscher, A., Barbieri, E., & Thompson, C. K. (2013). Neural correlates of processing passive sentences. Brain Sciences, 3, 1198–1214.
  41. McNeill, D., & Pedelty, L. L. (1995). Right brain and gesture. In K. Emmorey & J. Reilly (Eds.), Language, gesture, and space (pp. 63–85). Hillsdale, NJ: Erlbaum.
  42. Neininger, B., & Pulvermüller, F. (2003). Word-category specific deficits after lesions in the right hemisphere. Neuropsychologia, 41, 53–70.
  43. Nicoletti, R., & Umiltà, C. (1985). Responding with hand and foot: The right/left prevalence in spatial compatibility is still present. Perception & Psychophysics, 38, 211–216. doi: 10.3758/BF03207147
  44. Nuerk, H.-C., Iversen, W., & Willmes, K. (2004). Notational modulation of the SNARC and the MARC (linguistic markedness of response codes) effect. Quarterly Journal of Experimental Psychology, 57A, 835–863. doi: 10.1080/02724980343000512
  45. Núñez, R., Cooperrider, K., Doan, D., & Wassmann, J. (2012). Contours of time: Topographic construals of past, present, and future in the Yupno valley of Papua New Guinea. Cognition, 124, 25–35.
  46. Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113. doi: 10.1016/0028-3932(71)90067-4
  47. Papagno, C., Della Sala, S., & Basso, A. (1993). Ideomotor apraxia without aphasia and aphasia without apraxia: The anatomical support for a double dissociation. Journal of Neurology, Neurosurgery and Psychiatry, 56, 286–289.
  48. Petitto, L. A., & Marentette, P. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251, 1483–1496.
  49. R Development Core Team. (2012). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from www.R-project.org
  50. Rauscher, F. H., Krauss, R. M., & Chen, Y. (1996). Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science, 7, 226–231.
  51. Richardson, D. C., Spivey, M. J., Barsalou, L. W., & McRae, K. (2003). Spatial representations activated during real-time comprehension of verbs. Cognitive Science, 27, 767–780. doi: 10.1207/s15516709cog2705_4
  52. Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21, 188–194.
  53. Roettger, T. B., & Domahs, F. (2015). Grammatical number elicits SNARC and MARC effects as a function of task demands. Quarterly Journal of Experimental Psychology, 68, 1231–1248.
  54. Schwarz, W., & Müller, D. (2006). Spatial associations in number-related tasks: A comparison of manual and pedal responses. Experimental Psychology, 53, 4–15. doi: 10.1027/1618-3169.53.1.4
  55. Scorolli, C., & Borghi, A. M. (2007). Sentence comprehension and action: Effector specific modulation of the motor system. Brain Research, 1130, 119–124.
  56. Simon, J. R. (1969). Reaction towards the source of stimulation. Journal of Experimental Psychology, 81, 174–176. doi: 10.1037/h0027448
  57. Simon, J. R., & Rudell, A. P. (1967). Auditory S–R compatibility: The effect of an irrelevant cue on information processing. Journal of Applied Psychology, 50, 300–304.
  58. So, W. C., Kita, S., & Goldin-Meadow, S. (2009). Using the hands to identify who does what to whom: Gesture and speech go hand-in-hand. Cognitive Science, 33, 115–125. doi: 10.1111/j.1551-6709.2008.01006.x
  59. Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153–156.
  60. Talmy, L. (2000). Towards a cognitive semantics (Vol. 1). Cambridge, MA: MIT Press.
  61. Vallesi, A., Binns, M. A., & Shallice, T. (2008). An effect of spatial–temporal association of response codes: Understanding the cognitive representations of time. Cognition, 107, 501–527.
  62. van Dijck, J.-P., & Fias, W. (2011). A working memory account for spatial–numerical associations. Cognition, 119, 114–119.
  63. Volterra, V., & Iverson, J. M. (1995). When do modality factors affect the course of language acquisition? In K. Emmorey & J. Reilly (Eds.), Language, gesture, and space (pp. 371–390). Hillsdale, NJ: Erlbaum.
  64. Wallace, R. A. (1971). S–R compatibility and the idea of a response code. Journal of Experimental Psychology, 88, 354–360.
  65. Wood, G., Nuerk, H.-C., & Willmes, K. (2006). Crossed hands and the SNARC effect: A failure to replicate Dehaene, Bossini, and Giraux (1993). Cortex, 42, 1069–1079. doi: 10.1016/S0010-9452(08)70219-3
  66. Zwaan, R. A., & Yaxley, R. H. (2003). Hemispheric differences in semantic-relatedness judgments. Cognition, 87, B79–B86.

Copyright information

© The Psychonomic Society, Inc. 2017

Authors and Affiliations

  • Timothy W. Boiteau (1, 2)
  • Cameron Smith (1, 2)
  • Amit Almor (1, 2, 3)

  1. Department of Psychology, University of South Carolina, Columbia, USA
  2. Institute for Mind and Brain, University of South Carolina, Columbia, USA
  3. Linguistics Program, University of South Carolina, Columbia, USA