Relating categorization to set summary statistics perception
Two cognitive processes have been explored that compensate for the limited information that can be perceived and remembered at any given moment. The first parsimonious process is object categorization: we naturally relate objects to their category and assume they share relevant category properties, often disregarding irrelevant characteristics. Another scene-organizing mechanism is representing aspects of the visual world in terms of summary statistics. Spreading attention over a group of objects with some similarity, one perceives an ensemble representation of the group. Without encoding detailed information about individuals, observers process summary data concerning the group, including the set mean for various features (from circle size to facial expression). Just as categorization may include or depend on a prototype and intercategory boundaries, so set perception includes the property mean and range. We now explore common features of these processes. We previously investigated summary perception of low-level features with a rapid serial visual presentation (RSVP) paradigm and found that participants perceive both the mean and the range extremes of stimulus sets, automatically, implicitly, and on the fly, independently for each RSVP sequence. We now use the same experimental paradigm to test category representation of high-level objects. We find that participants perceive categorical characteristics better than they encode individual elements. We relate the category prototype to the set mean and same/different category membership to in/out-of-range elements, defining a direct parallel between low-level set perception and high-level categorization. The implicit effects of mean or prototype and of set or category boundaries are very similar. We suggest that object categorization may share perceptual-computational mechanisms with set summary statistics perception.
Keywords: Categorization · Prototype · Boundary · Summary statistics · Ensemble · Mean · Range
Categorization is one of the most important mechanisms for facilitating perception and cognition, helping to overcome cognitive-perceptual bottlenecks (Cowan, 2001; Luck & Vogel, 1997) and perceive the “gist” of the scene (Alvarez & Oliva, 2009; Cohen, Dennett, & Kanwisher, 2016; Hochstein & Ahissar, 2002; Hock, Gordon, & Whitehurst, 1974; Iordan, Greene, Beck, & Fei-Fei, 2015, 2016; Jackson-Nielsen, Cohen, & Pitts, 2017; Oliva & Torralba, 2006; Posner & Keele, 1970). Categorization follows and expands on the natural categories of objects in our environment, the intrinsic correlational structure of the world (Goldstone & Hendrickson, 2010; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). There is a long-term debate concerning the mechanisms and cerebral sites of categorization, with recent studies suggesting that there are multiple sites and processes of categorization (Ashby & Valentin, 2017; Nosofsky, Sanders, Gerdom, Douglas, & McDaniel, 2017). Thus, categorization itself may be categorized by task or goal (Ashby & Maddox, 2011), neural circuit (Iordan et al., 2015; Nomura & Reber, 2008), utility (J. D. Smith, 2014), and context (Barsalou, 1987; Koriat & Sorka, 2015, 2017; Roth & Shoben, 1983). The most common and accepted theoretical mechanisms for categorization are still rule based, defining clear boundaries between categories (Davis & Love, 2010; Goldstone & Kersten, 2003; Sloutsky, 2003; E. E. Smith, Langston, & Nisbett, 1992) and their cortical representations (Iordan et al., 2015, 2016; Kriegeskorte et al., 2008), and prototype based or exemplar based, defining family resemblance (Ashby & Maddox, 2011; Goldstone & Kersten, 2003; Iordan et al., 2016; Maddox & Ashby, 1993; Medin, Altom, & Murphy, 1984; Nosofsky, 2011; Posner & Keele, 1968; Rosch, 1973; Rosch, Mervis, et al., 1976; see also Clapper, 2017).
We have suggested that these phenomena, categorization and set perception, may be related, since they share basic characteristics (Hochstein, 2016a, 2016b; Hochstein, Khayat, Pavlovskaya, Bonneh, & Soroker, 2018). In both cases, when viewing somewhat similar, but certainly not identical, items, we treat them as if they were the same, as a shortcut to representing them and prescribing a single appropriate response (Ariely, 2001; Medin, 1989; Rosch & Mervis, 1975; Rosch, Mervis, et al., 1976). When we spread attention globally and see a flock of sheep in a meadow, a shelf of alcohol bottles at a bar, a line of cars in traffic, or a copse of trees in a forest, we are both categorizing these objects as sheep, alcohol bottles, cars, and trees, and relating to the average properties of each set. Similarly, in laboratory experiments, we present a set of circles (Alvarez & Oliva, 2008; Ariely, 2001; Corbett & Oriet, 2011; Khayat & Hochstein, 2018), line segments (Khayat & Hochstein, 2018; Robitaille & Harris, 2011), or faces (Haberman & Whitney, 2007, 2009), and observers perceive the nature of the images as circles, lines, or faces and relate to their average properties. All animals in the category “dogs” have four legs and a tail, but they may vary in color, size, and so forth. All circles in a set are round, though they may vary in size or brightness. Categorization emphasizes relevant or common properties and deemphasizes irrelevant or uncommon ones, reducing differences among category members (Fabre-Thorpe, 2011; Goldstone & Hendrickson, 2010; Hammer, Diesendruck, Weinshall, & Hochstein, 2009; Rosch, Mervis, et al., 1976; Rosch et al., 1999; Rosch & Lloyd, 1978; Rosch, 2002). Similarly, set perception captures summary statistics without noting individual values. Categorization, like ensemble perception, may depend on rapid feature extraction to determine the presence of defining characteristics of objects.
In particular, set perception includes set mean and range (Ariely, 2001; Chong & Treisman, 2003, 2005; Khayat & Hochstein, 2018; Hochstein et al., 2018), and categorization might rely on the related properties of prototype (or mean exemplar; e.g. Ashby & Maddox, 2011) and/or intercategory boundaries (or category range; e.g., Goldstone & Kersten, 2003). This conceptual similarity has been confirmed by the recent finding that set characteristics are perceived implicitly and automatically (Khayat & Hochstein, 2018), just as objects are categorized implicitly and automatically at their basic category level (Potter & Hagmann, 2015; Rosch, Mervis, et al., 1976). Finally, it has been suggested that determining whether a group of objects in a scene belong to the same category may actually depend on their characteristics that allow them to be seen as a set (Utochkin, 2015). The similarities of categories and sets led us to ask if the detailed properties of their perception are also similar, so that it may be hypothesized that similar mechanisms are responsible for their cerebral representation.
The goal of the current research is to detail the similarity between set and category perception by applying to categories the very same tests that we used to study implicit set perception (Khayat & Hochstein, 2018). The following section briefly reviews the results of these previous tests.
We note in advance that there are important differences between categorization and set perception. Object categories are learned over a lifetime of experience, while set ensemble statistics can be acquired on the fly. Different life experience may lead to individual differences in categorization and choice of object seen as the category prototype. Categorization may involve semantic processes, while set perception has been demonstrated for simple visual features (though including face emotion). Thus, it would be difficult to claim that ensemble perception and categorization are identical, or take place at the same cortical site. However, their being different makes comparing them even more important, since if they share essential properties, they may depend on similar or analogous processes, albeit at different cortical sites. This is the aim of the current study.
We studied implicit perception and memory of set statistics by presenting a rapid serial visual presentation (RSVP) sequence of images of items differing in a low-level property (circles of different size, lines of different orientation, discs of different brightness; see Fig. 1b), and testing only memory of the seen members of the sequence (Khayat & Hochstein, 2018). Note that the mean of the set—the mean-size circle, mean-orientation line, or mean-brightness disc—was sometimes included in the set sequence and sometimes not. Following set RSVP presentation, we presented two images simultaneously, side by side. One of these images was of an item that had been seen in the image sequence—the SEEN item—and one was a NEW item, not seen in the sequence. Observer memory was tested by asking participants to choose which of the two simultaneously presented image items had been seen in the sequence. Participants were informed that one item had always been SEEN and one was always NEW. We did not inform them that sometimes one test element would have the mean property of all the items presented in the sequence, and that this test item could be either the SEEN item (a member of the RSVP sequence) or the NEW item (not presented in the sequence). We also did not inform them that sometimes the NEW, nonmember element was outside the range of the properties of the seen sequence elements. By not mentioning the words “mean” and “range” to the participants, the goal was to test whether observers would automatically perceive the set mean property and choose the test item that matched this mean—irrespective of whether this test item was the one that had been seen in the sequence or the foil, the NEW test item that had never been seen before.
Similarly, would observers automatically perceive the range of the properties of the set and easily reject foils that were outside the range of the items in the sequence?
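The structure of these membership-test trials can be sketched in code. The following is a minimal illustration, not the authors' actual stimulus program: item "properties" are reduced to hypothetical integers (e.g., circle sizes), and the subtype labels (mean, in-range, out-of-range) follow the definitions above; all names are ours.

```python
import random

def make_trial(low=1, high=9, n_items=7, rng=random):
    """Sketch one RSVP membership-test trial with integer 'feature values'.

    Returns the sequence, the SEEN test item, and the NEW test item,
    each labeled 'mean', 'in' (in range), or 'out' (out of range)."""
    seq = [rng.randint(low, high) for _ in range(n_items)]
    set_mean = sum(seq) / len(seq)

    def label(v):
        if abs(v - set_mean) < 0.5:
            return "mean"                  # (approximately) equals the set mean
        return "in" if min(seq) <= v <= max(seq) else "out"

    seen = rng.choice(seq)                 # a member of the sequence
    # The foil is drawn from values not shown, possibly outside the set range:
    new = rng.choice([v for v in range(low - 2, high + 3) if v not in seq])
    return seq, (seen, label(seen)), (new, label(new))
```

Each trial thus falls into one of the subtypes discussed below (e.g., SEENmean–NEWin when the SEEN item equals the set mean and the foil lies within the range).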
[Table: member recall test trial subtypes; SEEN test image (correct) vs. NEW test image (incorrect)]
We concluded that since the stimulus sequence was quite rapid, participants had difficulty remembering all the members of the RSVP set, and perhaps even any one of them. Instead, they automatically used their implicit perception of the sequence set mean and range to respond positively to test elements that matched or were close to the set mean. Thus, performance was more accurate for SEEN test elements that equaled the mean (subtype SEENmean–NEWin; see Fig. 2a–b, middle bars; Fig. 2c–d, left bars). When the NEW test element was equal to the set mean, it was frequently chosen as if it were a member (i.e., as if it had been seen in the set sequence). Participants actually chose this mean NEW element more frequently than the actual nonmean SEEN element (subtype SEENin–NEWmean; see Fig. 2a–b, leftmost bars; note that accuracy below 0.5 means that the NEW element was chosen more frequently than the SEEN one).
In addition, we found a range effect: participants rejected out-of-range NEW test elements (subtypes SEENmean–NEWout and SEENin–NEWout) more frequently than in-range NEW test elements (subtypes SEENmean–NEWin, SEENin–NEWin, SEENin–NEWmean). This is shown in Fig. 2a–b, right two bars, and in Fig. 2e–f, right bars, compared with left bars in each graph. The same effect was seen for response time (RT; Fig. 2g), which was shorter for out-of-range than for in-range NEW test elements, indicating that they were rejected more rapidly as well as more frequently.
We concluded that participants automatically and implicitly determined the mean and range of the RSVP sequence even though they were not instructed to do so and even though this had no bearing on performance of the task at hand, which was just to try to remember the seen sequence elements. Furthermore, they did so on the fly for each trial, independently, since each trial had a different sequence mean and range.
Perception of set mean and range is not only implicit. In another study, Hochstein et al. (2018) asked observers to explicitly compare means of two arrays of variously oriented bars (mean comparison) or report presence of a bar with an outlier orientation among the array elements (outlier detection). It was found that mean comparison depended on the difference between the array means, and outlier detection depended on the distance of the target from the array range edge (see also Hochstein, 2016a, 2016b; Hochstein, Khayat, Pavlovskaya, Bonneh, & Soroker, 2018). Thus, both set mean and range are perceived both explicitly and implicitly.
The goal of the current study is to test whether there are identical effects in the related perceptual phenomenon of categorization.
Experiment 1. Category prototype and boundary effects
Prototypes as averages
We investigate here the nontrivial comparison between stimulus sets and object categories. The stimuli in previous studies of statistical perception were very similar, in each case, usually differing by a single varying feature (e.g., Ariely, 2001; Corbett & Oriet, 2011), or a combination of features forming a single high-level feature (e.g., facial expression; Haberman & Whitney, 2007, 2009). In contrast, categories might be thought of as a set of objects composed of combinations of multiple features, with only some of these features necessarily present in each category exemplar (where membership is defined by family resemblance). Thus, we compare the mean of the set elements with the prototype of category exemplars, based on the view that prototypes are the central or most common representations of a category (Goldstone & Kersten, 2003), possessing the mean values of its attributes (Langlois & Roggman, 1990; Reed, 1972; Rosch & Lloyd, 1978; Rosch, Mervis, et al., 1976; Rosch, Simpson, & Miller, 1976). Note, however, that comparing these perceptual procedures does not depend on this definition of prototype, or even on prototype theory itself. Comparing categorization with set summary perception is valid simply because in both cases several stimuli are perceived as belonging together, perhaps inducing the same response, because they share some characteristics and differ in others.
Similarly, we compare knowledge of category boundaries with perception of set range edges. As shown above, perceiving set range edges allows for rapid detection of outlier elements, and even unconscious perception of these edges allows for rapid rejection of out-of-range elements when trying to remember which elements were previously viewed. This was called the “range effect” (Khayat & Hochstein, 2018). Similarly, knowing category boundaries allows for rapid separation of objects that belong to different categories, which we shall call a “boundary effect.” Thus, we compare properties of set perception and categorization in terms of observers’ implicit determination and knowledge of the set mean and category prototype, as well as the set range edges and the category boundaries. That is, having found that observers perceive rapidly and implicitly the mean and range of element sets, and that they use this information when judging memory of sequence stimuli, we now test whether the same characteristics are present for object categories. Do observers of a sequence of objects determine automatically and implicitly their category and use the implied prototype (whether or not shown in the sequence) and the boundaries of the implied category, when later choosing images as having been seen in the sequence? These will be called the prototype and boundary effects, respectively. If we find similar characteristics in these processes, for categorization as for set perception, we will suggest that they may share basic perceptual-cognitive mechanisms.
We note at the outset that there are important differences between perceiving set summary statistics and categorizing objects. We perceive the mean size, orientation, brightness, and so forth, of sets that we see just once, sets which are unrelated to any other sets seen before. Presented with a set of images, sequentially or simultaneously, we derive the mean and range of the size, orientation, brightness, and so forth, of that set, on the fly and trial by trial. Thus, presented with a single stimulus in isolation, it is logically inconsistent to ask to what set it belongs. In contrast, by their very nature, categories are learned over a lifetime of experience, and with this knowledge, we can know immediately to what category a group of objects, or even a single object belongs. In fact, one of the defining characteristics of “basic” categories is that these are the names given to single objects (e.g., cat, car, fork, apple; Potter & Hagmann, 2015; Rosch, Mervis, et al., 1976). The situation with categorization is unlike that with sets, where we derive the set mean, on the fly, as we are presented with set members. Instead, when encountering an object (or group of objects belonging to a single category), we know the category to which it belongs, and we also know what is the prototype of that category and the category boundaries; there is no need, and no possibility, of deriving anew the category, prototype, and boundaries of a group of familiar objects (though we can learn new categories of unfamiliar objects; see Hochstein et al., 2019). Furthermore, categories may be learned and recognized semantically, while the basic features of sets are often nonsemantic. Nevertheless, and this is the basic argument of the current study, there may be similarities, if not identities, of mechanisms for representing set means/ranges and category prototypes/boundaries. 
We set out here to find the degree of similarity between these very different phenomena before endeavoring to uncover underlying mechanisms. Finding similarities, despite the differences enumerated above, would suggest that there are relationships between low-level and high-level representations of images, objects, categories, and concepts.
Categories with examples of their prototypes and other exemplars
Typical exemplars (Prototypes & Common)
Potted plant, Cactus
Watermelon plants, Vine
Legless lizard, Komodo dragon
German Shepherd, Labrador
Chihuahua, Bull Terrier
Cannon, Molotov cocktail
Harry Potter, The Bible
The Hobbit, Comics
Whisk, Slicing knife
Teddy bear, Rubik’s cube
Top, Plastic food
Office desk, Writing desk
Reception desk, Cubicle desk
Formula 1, Model T
TV screen, Laptop
Hair dryer, Shaver
Bowling, Super Mario
Musical note, The Beatles
Mexican band, Accordion
Jesus, Western Wall
Buddha, Praying man
Test tubes, Atom
Random couple argument
Peace symbol, Star of David
Scouts symbol, Recycle symbol
9/11 plane crash, Tsunami
Volcano eruption, Avalanche
The Godfather, Cinema & Popcorn
Wolf & full-moon, Hannibal Lecter
Scared face, Creepy doll
Mickey Mouse, The Simpsons
Scooby-Doo, Hello Kitty
Graduation ceremony, Parade
Passport & Suitcase, Backpackers
Heartbeat icon, Workout
Slippery sign, Toxic (skull) sign
Unstable bridge, Medusa
Martin Luther King, Hiroshima
Che Guevara, Mayan temples
Data from 15 in-house participants, students at the Hebrew University of Jerusalem, were included in the analysis of Experiment 1 (age range = 20–27 years, mean = 23.4 years; four males, 11 females). We also have results for 226 Amazon Mechanical Turk (MTurk) participants for Experiment 3. Participants provided informed consent, received compensation for participation, and reported normal or corrected-to-normal vision.
Stimuli and procedure
Procedures for Experiment 1 took place in a dimly lit room, with participants seated 50 cm from a 24-in. Dell LCD monitor. We have less information about the identity and precise experimental conditions of the Experiment 3 MTurk participants (we excluded ~25% of these data: trials with RTs <200 ms or >4 s, and subjects with <33% remaining trials or <60% correct responses overall, thus including as many trials/subjects as possible while excluding data that are clearly not responses to the stimulus; e.g., Fabre-Thorpe, 2011). Stimuli were generated using Psychtoolbox Version 3 for MATLAB 2015a (Brainard, 1997). MTurk testing used Adobe Flash. Images, chosen from the Google Images database, were presented against a gray background (RGB: 0.5, 0.5, 0.5).
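As a concrete illustration, the exclusion rules above can be expressed as a short filter. This is a sketch under our own assumptions about the data layout (per-subject lists of (RT in ms, correct) tuples); it is not the authors' analysis code, and the function name and thresholds' packaging are ours.

```python
def filter_mturk(data, rt_min=200, rt_max=4000,
                 min_trials_frac=1/3, min_accuracy=0.60):
    """Apply the Experiment 3 exclusion rules to {subject: [(rt_ms, correct), ...]}.

    Drops trials with RT < 200 ms or > 4 s, then drops subjects with
    fewer than a third of their trials remaining or under 60% correct."""
    kept = {}
    for subj, trials in data.items():
        valid = [(rt, c) for rt, c in trials if rt_min <= rt <= rt_max]
        if len(valid) < min_trials_frac * len(trials):
            continue                       # too few trials survive the RT filter
        if sum(c for _, c in valid) / len(valid) < min_accuracy:
            continue                       # overall accuracy too low
        kept[subj] = valid
    return kept
```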
Stimuli consisted of rapid serial visual presentation (RSVP) of a sequence of high-level objects or scene images presented in the center of the display, with a fixed size of 10.4-cm high × 14.7-cm wide, as demonstrated in Fig. 3 (see also examples of images in Fig. 8). Experiment 1 was divided into three blocks of 65 RSVP trials each, with a short break between them, to complete 195 trials total per participant; Experiment 3 had 60 trials total for MTurk observers; one session/participant.
A set of images (12 for in-house students; nine for MTurks) was presented in each RSVP sequence, with a 167-ms stimulus onset asynchrony (100-ms stimulus + 67-ms interstimulus interval), and the sequence was followed by a 100-ms masking stimulus. Then, after 1.5 s, two images were presented side by side, simultaneously, for the membership test: one an object image that had been SEEN in the sequence, and one a novel, NEW object image. The SEEN and NEW test images were randomly placed to the left and right of fixation, in the middle half of the width and height of the screen, and participants indicated the position of the SEEN image by key press. Images remained present until observer response. Since participants tend to perceive and remember early and late elements better (primacy and recency effects), in general and specifically in summary representations (Hubert-Wallander & Boynton, 2015), we excluded the first two and last two RSVP sequence images from serving as test member images.
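Given these numbers, the event timeline within a trial follows directly. The sketch below computes onset times under the assumption that the mask follows the final interstimulus interval; the function and parameter names are ours, not part of the original experiment code.

```python
def rsvp_timeline(n_images=12, stim_ms=100, isi_ms=67, mask_ms=100, gap_ms=1500):
    """Onset times (ms) of each event in one trial, under the stated timing."""
    soa = stim_ms + isi_ms                 # 167-ms stimulus onset asynchrony
    onsets = [i * soa for i in range(n_images)]
    mask_on = onsets[-1] + soa             # mask assumed to follow the last ISI
    test_on = mask_on + mask_ms + gap_ms   # 2-AFC display 1.5 s after the mask
    return onsets, mask_on, test_on
```

For the 12-image student sequences this yields a mask onset at 2,004 ms and a membership-test onset at 3,604 ms after sequence start.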
Thirty-nine categories (20 for MTurks of Experiment 3) were included in the experiment (see Table 2), including manmade and natural objects (animate, inanimate, and plants), and abstract conceptual scenes from different category levels. Each category was repeated in each trial subtype (see below), with entirely different images for each trial. For each category, we chose the three images that seemed to us to be closest to prototypical, and used them in the three test subtypes including a prototype (as nonmember or as member versus nonmember same/different category). Of the 39 categories used for Experiment 1, 20 were later tested in Experiment 2, and only these were used in Experiment 3. For almost all the 20 categories, which were also tested in Experiment 2 (see below), high typicality was confirmed; we discarded data for the few discrepant images (<6% of trials). For the remaining 19 more conceptual categories, which were not tested in Experiment 2 (and not used in Experiment 3), we depended on examples from the literature (e.g., Iordan et al., 2016; McCloskey & Glucksberg, 1978; Potter, Wyble, Pandav, & Olejarczyk, 2010) and experimenter judgement for in-house student participants (who came from the same cohort as experimenter NK). Note that if we err and choose nontypical images as prototypes, this would add noise and reduce results’ significance; thus, the results themselves confirm our choice. For the entirely new MTurk tests, we used a different approach, depending on Experiment 2, as described below. We purposely chose both basic and superordinate categories, as well as conceptual categories, to broaden the potential impact of our results.
[Table: member recall test trial subtypes for Experiment 1; SEEN test image (correct) vs. NEW test image (incorrect)]
Statistical tests and data analysis
Repeated-measures analysis of variance (ANOVA) tests were conducted to verify that performance-accuracy differences were due to difficulty differences arising from the different trial subtypes, rather than to within-participant differences in performance. For the two-way repeated-measures ANOVAs, testing student-participant effects of SEEN object typicality and NEW object category, we combined data for NEW objects of the same category, whether prototypical (NEWprot) or not (NEWin). One-tailed t tests between the averaged results of all participants for different subtype combinations were performed to investigate prototype and boundary representation effects. Since it is difficult to remember all the sequence images, we expect participants to correctly prefer as SEEN those test images with objects that are prototypes of the sequence category (expected fraction correct for SEENprot–NEWin > for SEENin–NEWin), to mistakenly choose the NEW test image when it is the category prototype, though not seen in the sequence (expected fraction correct for SEENin–NEWin > for SEENin–NEWprot), and to reject as NEW those test images of a different category (fraction correct for SEENprot–NEWout > for SEENprot–NEWin; and SEENin–NEWout > SEENin–NEWin).
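The subtype predictions above are ordinal comparisons of per-participant accuracies, assessed with paired t tests. As a minimal sketch, the paired t statistic can be computed directly from per-participant accuracy pairs; this is the standard formula, not the authors' code, and the example accuracies are invented.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for two conditions, one accuracy value per participant."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical per-participant accuracies for two subtypes:
acc_prot = [0.80, 0.75, 0.78]   # e.g., SEENprot-NEWin
acc_in = [0.60, 0.62, 0.66]     # e.g., SEENin-NEWin
t = paired_t(acc_prot, acc_in)  # positive t supports SEENprot-NEWin > SEENin-NEWin
```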
We performed a two-way repeated-measures ANOVA on the Fig. 4 results. The overall prototype effect—the effect of one of the objects being the prototype of the category of the objects presented in the sequence—was significant, F(1, 14) = 18.07, p < .001; the boundary effect—the effect of the nonmember being of another category than the sequence objects—was highly significant, F(1, 14) = 298.64, p < .001; and the interaction between them was significant as well, F(1, 14) = 13.36, p < .005. The interaction effect suggests that the prototype effect may be larger in some cases, as we shall see in the following paragraph.
The first factor to influence performance is the presence of category-prototypical objects (prototypes and most common or familiar objects) in one of the test images. The presence of typical exemplars influenced accuracy (% correct responses) and RT, which together we call the prototype effect. As seen in the three left bars of Figs. 4a and 5a–b, prototype presence affected accuracy: SEENprot–NEWin > SEENin–NEWin > SEENin–NEWprot. Prototype presence also affected response time (RT), as in Fig. 4b: RT for correct choice of the member, SEENprot–NEWin < SEENin–NEWin; RT for incorrect choice of the nonmember, SEENin–NEWprot < SEENin–NEWin.
It is possible that including subtypes with NEWout test images (i.e., images of an object of a different category; subtypes SEENin–NEWout and SEENprot–NEWout) in the above two-factor ANOVA calculation allows the effect of the presence of a different category (NEWout) to reduce the prototype effect. Thus, to test the prototype effect alone, we conducted a one-way repeated-measures ANOVA on the three subtypes with test image objects within the category boundaries (see Fig. 5). This one-factor ANOVA showed a significant prototype effect—students: F(2, 28) = 11.78, p < .001; MTurk: F(2, 346) = 26.96, p < .001. We conclude that, as predicted, when comparing trials containing only objects from the relevant category (subtypes SEENprot–NEWin, SEENin–NEWin, SEENin–NEWprot), the prototype had a major influence on observer response: observers tended to attribute it to the RSVP sequence as a member, regardless of whether or not it actually was one.
On the other hand, there is no significant difference between the case where the SEEN image object is prototypical or not when the NEW object is outside the category (accuracy for SEENprot–NEWout = 0.88 versus for SEENin–NEWout = 0.86; p = .59; see Fig. 4a). The boundary effect overrides the prototype effect (leading to the interaction effect in the two-way repeated-measures ANOVA, above).
We conclude that, due to limited attentional resources, participants are unable to fully perceive and memorize all individual objects, but still succeed in having a good representation of the category itself. This is striking, since the stimuli were presented in RSVP manner, with brief periods between stimuli. Nevertheless, observers were able to detect the sequence category and derive its prototype. They were successful in both category and prototype determination for sequences that included basic level, subordinate, superordinate, or even conceptual categories. They tend to relate the most representative object (the prototype) to the category of the presented object images and assume it was present in the sequence (see Fig. 5a: students; Fig. 5b: MTurks). We performed post hoc t tests between the different subtypes to find details of the effect, as shown in Fig. 5a–b. The prototype effect is clearly present when comparing the relevant trial subtypes (SEENprot–NEWin, SEENin–NEWin, SEENin–NEWprot), which significantly differ from each other (students: p < .05 for subtypes SEENin–NEWin versus SEENprot–NEWin or SEENin–NEWprot and p < .01 for SEENprot–NEWin versus SEENin–NEWprot; MTurks: p < .001 for all comparisons). These subtypes create a staircase shape from low performance of 0.54 ± 0.04 (MTurk: 0.64 ± 0.01; mean ± SE) proportion correct for SEENin–NEWprot, via 0.63 ± 0.02 (0.7 ± 0.008) correct for SEENin–NEWin, to best performance of 0.78 ± 0.02 (0.76 ± 0.01) correct for SEENprot–NEWin. We ask below if this is an all-or-none prototype-or-not-prototype effect, or if it is a graded effect, as objects are more or less typical of the category. Note that, surprisingly, even when the prototype was not present in the object sequence, it was often chosen as present when presented as the NEW test image. 
Nevertheless, when choosing between a nonprototypical SEEN image and a prototypical NEW image (SEENin–NEWprot), having actually seen the image in the sequence is slightly more important than typicality (0.54 and 0.64 for students and MTurks, respectively; significantly > .50). This is different from the results found for the low-level feature sets, as is easily seen in the proportion correct for the SEENin–NEWprot subtype (>.5) compared with the analogous SEENin–NEWmean subtype (<.5). We believe that the difference derives from the greater observer memory for images of real objects, compared with memory for absolute values of simple features of abstract images (circle size, line orientation, disc brightness).
On the other hand, besides the prototype effect, there is still some degree of recognition of test objects having been seen in the sequence. Thus, as demonstrated in Fig. 6c–d, choosing the prototypical object is faster when it is a sequence member (correct: SEENprot–NEWin; and SEENprot–NEWout for students; students: 1304 ms ± 50 ms; MTurk: 1288 ms ± 25 ms) than when it is not (SEENin–NEWprot incorrect; students: 1663 ms ± 150 ms; MTurk: 1495 ms ± 41 ms; t test: p < .05, p < .001, respectively). Even choosing the nonprototypical seen image is faster than choosing the typical new image (see Fig. 6a–b, middle two diamonds; t test: p = .061, p < .01). This latter speed joins the greater accuracy (see above) to indicate it is not a speed–accuracy trade-off.
This boundary effect was also observed in response-time measurements for correct responses, as shown in Fig. 7b. Responses were significantly faster in trials where the nonmember object was outside the category boundaries (i.e., belonged to a different category; 1279 ms ± 54 ms) than in trials where both test objects were from the category of the RSVP sequence (1476 ms ± 65 ms; p < .01). Taken together, the increase in accuracy and decrease in RT indicate a consistent trend of reduced task difficulty when introducing nonmember test objects from a different category, rather than a speed–accuracy trade-off.
Experiment 2. Scoring object typicality
So far, we have compared results for category and set sequence member recall, and the effects of prototype (mean) and boundaries (range edges) on choice of member image in a 2-AFC task. In addition, Khayat and Hochstein (2018) measured how these mean and range effects are graded with the distance of the test item from the mean or from the range edge. To complete and quantify the comparisons, we would like to do the same for the prototype and category effects seen here. To this end, we need a measure of the distance of our test objects from their category prototype. (It would also be desirable to measure how far objects from different categories are from a given category, but this seemed too difficult for the present study.)
The current experiment was therefore designed to measure the subjective distance of objects from their category prototype, and to learn for each category which object is the prototype itself. To this end, we asked 50 MTurk participants to choose one of two image objects as a member of a previously named category, and used their response speed as a measure of the closeness of the object to the prototype. We will then use these results in Experiment 3 to measure the graded prototype effect. It has been well documented that responses are faster for prototypes than for non-prototypes (Ashby & Maddox, 1991, 1994; McCloskey & Glucksberg, 1979; Rips, Shoben, & Smith, 1973; Rosch, Simpson, & Miller, 1976). We note in the Discussion that responses may also be faster for more familiar objects, and that there is debate concerning the relationship between familiarity and typicality.
Stimuli and procedure
We present the name of a category in the middle of the screen for 1 s (font: Arial, 32 pt, white), followed after 1.0 s by two test images: one of an object belonging to the named category and one of an object from a different category (we attempted to choose nonmember objects from a different category that was nevertheless not too far from the named category; see Experiment 1, Method section). Images were presented to the left and right of the display center, within the middle half of the width and height of the screen, and remained present until observer response.
Observer task was to choose, by key press, the image with an object that belongs to the named category. We hypothesize that the closer an object is to the category prototype, the faster the response will be, expecting participants to recognize prototypical objects as members of the named category more quickly than atypical members. For example, participants should recognize an apple as a fruit faster than a kiwi, a cow as a mammal faster than a dolphin, and baseball as a sport faster than mountain climbing.
We tested 50 Amazon Mechanical Turk participants (MTurks). Participants performed two sessions of 300 trials/session. They were tested on 20 categories, as indicated in Table 2 (starred categories), 10 categories per session, with 30 test objects for each category.
As expected, response times varied among objects (maximum: 2.04 s; minimum: 0.65 s; mean range across the 20 categories: 0.65 s), and there was significant agreement among participants (the mean between-participant standard error was 6% of the RT).
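The RT-based typicality ranking can be illustrated with a minimal sketch. This is not the authors' analysis code, and the object names and RT values below are hypothetical; the logic is simply that a shorter mean correct-response RT marks an object as closer to the category prototype.

```python
# Illustrative sketch of the Experiment 2 analysis: rank objects by mean RT.
# Object names and RT values are hypothetical examples, not the study's data.
from statistics import mean

# correct-trial RTs (seconds) per object, pooled across participants
rts = {
    "apple":  [0.68, 0.72, 0.70],
    "kiwi":   [0.95, 1.10, 1.02],
    "durian": [1.60, 1.85, 1.71],
}

# shorter mean RT -> assumed closer to the category prototype
mean_rt = {obj: mean(times) for obj, times in rts.items()}
ranking = sorted(mean_rt, key=mean_rt.get)           # fastest object first
typicality_rank = {obj: i + 1 for i, obj in enumerate(ranking)}  # rank 1 = most typical
```

Under this scheme, the rank-1 object of each category serves as its empirical prototype for Experiment 3.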
Experiment 3. Graded typicality
For Experiment 3, we tested MTurk participants (see Experiment 1, Method section) with the 20 starred categories of Table 2, which were tested in Experiment 2. We used the mean across-participant RT found in Experiment 2 as the basis for the typicality ranking of objects in Experiment 3. Note that different MTurk participants were tested in Experiments 2 and 3 (Experiment 1 was with in-house student participants). For Experiment 3, all objects presented in the test pairs were from the same category as the previously presented sequence (only the bottom three subtypes of Table 3), so that we are now testing the graded prototype effect, and not the range effect (seen in Experiment 1; Figs. 4 and 7).
Figure 10 displays the graded prototype effect. We measure the proportion correct, which is the probability of choosing the member object as having been seen in the category sequence, as a function of the typicality index of the member object (see Fig. 10a). Typicality is ranked from 1 to 30, where 1 is the closest to the prototype (i.e., the shortest average RT measured in Experiment 2). Note the gradual decrease in choosing the member as it is further from the prototype. Similarly, as the nonmember is further from typical (i.e., its mean RT in Experiment 2 was greater), it is more often rejected and less often chosen as the member (see Fig. 10c).
The choice of an image is not dependent only on that image, however, since there are always two images displayed and we ask participants to choose between them. Thus, the relative measure between the two images should determine which image participants choose. Having found that sequence member object closeness to the prototype and sequence nonmember distance from the category prototype both add to correct choice of the member, we now plot choice accuracy as a function of the difference between the distances of the nonmember and the member. This is shown in Fig. 11a, where we also show the parallel graph for low-level features (Fig. 11b; from Khayat & Hochstein, 2018).
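The relative measure described above can be sketched as follows, with hypothetical trial data (this is an illustration of the analysis logic, not the paper's code): for each trial, the predictor is the difference between the nonmember's and the member's typicality ranks, and accuracy is aggregated per difference value.

```python
# Sketch of the Fig. 11 analysis: choice accuracy as a function of the
# difference between nonmember and member typicality ranks.
# Trial data below are hypothetical examples.
from collections import defaultdict

# each trial: (member_rank, nonmember_rank, chose_member_correctly)
trials = [(1, 20, True), (5, 25, True), (15, 3, False), (10, 12, True)]

acc_by_diff = defaultdict(list)
for m_rank, n_rank, correct in trials:
    diff = n_rank - m_rank            # positive: member is the more typical
    acc_by_diff[diff].append(correct)

# proportion correct at each rank-difference value
accuracy = {d: sum(v) / len(v) for d, v in acc_by_diff.items()}
```

A positive rank difference (member more typical than nonmember) should yield higher accuracy, as in Fig. 11a.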
These graphs, including the high-level categorization graphs, are not without noise. Noise comes from the random second image in the membership tests, from interparticipant differences, and from the very nature of our using RT as a determinant of typicality. Nevertheless, the good fit to a single trendline suggests that our conclusion is well founded, as follows. When viewing a sequence of objects belonging to a single category, observers often fail to recall the identity of each object seen; instead, when asked which of two objects was included in the sequence, they depend on recognition of the category seen, on knowledge of the prototypical object, and on estimation of the distance of the two test objects from the category prototype.
The current results confirm and extend those of recent studies suggesting that statistical representations generalize over a wide range of visual attributes, from simple features to complex objects, giving accurate summaries over space and time (Alvarez & Oliva, 2009; Ariely, 2001; Attarha & Moore, 2015; Chong & Treisman, 2003; Gorea et al., 2014; Haberman & Whitney, 2009; Hubert-Wallander & Boynton, 2015). This result is now extended to object categories, as well. These efficient representations overcome severe capacity limitations of perceptual resources (Alvarez & Oliva, 2008; Robitaille & Harris, 2011), and they are formed rapidly and early in conscious visual representations (Chong & Treisman, 2003), without focused attention (Alvarez & Oliva, 2008; Chong & Treisman, 2005) and without conscious awareness of individual stimuli and their features (Demeyere, Rzeskiewicz, Humphreys, & Humphreys, 2008; Pavlovskaya, Soroker, Bonneh, & Hochstein, 2015). Thus, their underlying computations play a fundamental role in visual perception and the rapid extraction of information from large and complex sources of data. In particular, we propose that categorization mimics set summary statistics perception processes that share its characteristics. Note that rapid gist perception does not imply low cortical level representation—on the contrary, it is the result of rapid feed-forward computation along the visual hierarchy (Hochstein & Ahissar, 2002).
Regarding high-level categories, we revealed two phenomena that match those found for low-level features, by using a similar experimental design for the two experiments: an RSVP sequence followed by a 2-AFC experiment test of image memory.
(1) Typicality effect: The typicality level of an object was well represented, as it biased participants’ decision toward choosing the more typical exemplar (of the presented category) as the member of the RSVP sequence. The typicality effect led to faster and more accurate responses for member test items, and also to choice of the incorrect item when it had superior typicality (see Figs. 4–6 and 10–11). Thus, the more typical object was chosen as present in the sequence, whether or not it was actually present there. The typicality effect is similar to the set mean value effect found for low-level features. (2) Boundary effect: Categorical boundary representation assisted participants in rejecting images with objects that do not belong to the category of the RSVP sequence; they therefore correctly chose the member image and achieved higher performance levels in these trials (see Figs. 4 and 7). This effect is similar to the set range-edge effect.
Furthermore, using a dedicated response-time test to rank the typicality of items within their category, we find that the typicality effect is graded, similar to the set mean value effect (see Figs. 10 and 11). The degree to which observers preferentially choose category items as having been members of the trial sequence is directly related to the degree of typicality of the test items. Both member and nonmember items are chosen more frequently as they are closer to prototypical: member items correctly, and nonmember items incorrectly. In particular, the relative typicality of the member test item versus the nonmember test item strongly affected observer choice of which item they reported as the member of the sequence (see Fig. 11). Participants associated the more typical object with the displayed RSVP sequence, regardless of whether the prototype actually was a member of the set. It is as if, when viewing the sequence of objects, they perceived the category but had only a poor representation of its individuals. This is exactly what was found for set perception (Khayat & Hochstein, 2018; Ward, Bear, & Scholl, 2016; but see Usher, Bronfman, Talmor, Jacobson, & Eitam, 2018).
We propose that participants unconsciously considered prototypes as better representatives of the categories than less typical exemplars and correspondingly chose them as members of the sequence, perhaps because prototypes usually contain the most common attribute values shared among the category members (Goldstone & Kersten, 2003; Rosch & Mervis, 1975).
As in the low-level experiment, participants were not informed about the categorical content of the RSVP sequences, and so they had no knowledge concerning the involvement of prototypes, categories, and so forth, and they only followed the instructions of an image memory task. The similarity of the effects emerging from the two experiments implies that statistical and categorical representations are cognate phenomena that share perceptual characteristics, and perhaps are generated by similar computations.
Note that both the category prototype and boundary effects are based on participants’ implicit categorization, extracted from the images in the RSVP sequences.
The results indicate that they adjusted their responses toward the relevant category, even though they were not guided to take category information into consideration in the ostensible memory test. While participants concentrated on the RSVP images themselves, category context extraction seems to have outstripped their ability to memorize the objects or scenes presented in the images.
Nevertheless, we note that accuracy in this experiment was superior to that in our previous set summary statistics experiment (compare Figs. 4 and 2; Khayat & Hochstein, 2018). This may well be due to accurate memory of some sequence items, which is easier for object images than for abstract items (circles, disks, or line segments) that differ only in size, brightness, or orientation. This result also confirms that participants are trying to recall the actual objects displayed in the sequence—they sometimes succeed in remembering them—and are not consciously trying only to categorize the images.
Categorical perception is often influenced by context (Barsalou, 1987; Cheal & Rutherford, 2013; Joubert, Rousselet, Fize, & Fabre-Thorpe, 2007; Koriat & Sorka, 2015, 2017; Roth & Shoben, 1983). Water, for example, may be associated with different categories, depending on context. It is a drink, a liquid for bathing or cleaning, or the medium of marine animals. Thus, the category to which participants associated each sequence object would naturally be affected by other sequence objects. We conclude that the current categorization processes occurred rapidly and intuitively, based on the variety of sequence objects, but also on earlier processing of interactions between objects and their contexts (Barsalou, 1987; Joubert et al., 2007; Koriat & Sorka, 2015, 2017; Roth & Shoben, 1983).
Differences between low-level parameter sets and high-level categories
There are several differences between the low-level and the high-level results that should be pointed out. For the low level, we measured not only the graded mean effect but also the graded range effect (i.e., the gradual effect of the distance of the presented nonmember element from the edge of the range of the presented sequence). This range effect has its equivalent in the boundary effect seen in Fig. 4. To extend this to a graded effect would require measuring the distance of an object of one category from the “edge” of a different category. This is beyond the scope of the current study.
A second difference to be noted is that it is easier to remember particular pictures of objects than specific elements in a sequence that differ only in a low-level feature (orientation, size, or brightness). Thus, as mentioned above, performance in the high-level test is superior overall. (Note performance axis difference between Figs. 10a, c and b, d.)
Another significant difference between testing the low-level set features and the high-level category objects is that the set of low-level elements, and their range and mean, are determined on the fly for each trial, by the sequence of stimuli actually presented. In contrast, the high-level categories are, of course, learned from life experience, and their prototype and boundaries are known immediately when seeing the first object in the sequence (or first few if the category is ambiguous). Categorization is thus predetermined, and not a result of the experience in the experiment itself. At the same time, there may well be interparticipant differences in the way they categorize objects, and, in particular, in the specific objects that they consider prototypical.
Related to the latter two differences is another. Categories are often denoted and remembered by their name, introducing a semantic element to the association of a variety of objects to a single category. This is not so for the low-level features studied previously. Nevertheless, recall that the world contains, naturally and intrinsically, objects that cluster separately in feature space, and thus categories that are language independent (Goldstone & Hendrickson, 2010; Rosch, Mervis, et al., 1976).
Implications for categorization processes
There is ongoing debate concerning whether categories are represented in terms of the boundaries between neighboring categories, in terms of a single prototype (category members resemble this prototype more than they resemble other categories’ prototypes), or in terms of a group of common exemplars (new objects belong to the same category as the closest familiar object). Our finding that participants respond on the basis of both the mean and the range of sets, and similarly on the basis of both the prototype and the boundaries of object categories, may suggest a hybrid categorization-process model.
Concerning the single prototype versus multiple exemplar theories, our results may support prototype theory, since we find that participants choose test objects that are more prototypical, rather than recalling viewed exemplars. Nevertheless, category prototypes may be a secondary readout of fuzzy representations of multiple exemplars (see below).
We believe that the parallel found between set summary perception and perception of categories suggests there might be a common representation mechanism. We suggest that a population code (Georgopoulos, Schwartz, & Kettner, 1986) might underlie set representation of mean and range, and the same may be true for category prototype and boundaries (Bonnasse-Gahot & Nadal, 2008; Nicolelis, 2001; Tajima et al., 2016).
Observers clearly perceive not only the category of the sequence objects but also their typicality (compare Evans & Treisman, 2005; Potter et al., 2010). Furthermore, Benna and Fusi (2019) suggested that related items (descendants from a common ancestor in an ultrametric tree of items) may be efficiently represented in a sparse and condensed manner by representing their common “ancestor” or generator plus differences of each item from it. Thus, representations of set and category items might inherently include representation of mean and prototype, respectively. Prototype theory is not new, of course, but it is strengthened by the current finding of the resemblance of categorization with set perception.
There is some debate concerning the relationship between object familiarity and category typicality (Nosofsky, 1988; Palmeri & Gauthier, 2004; Shen & Reingold, 2001). Responses are more rapid for familiar objects (Wang, Cavanagh, & Green, 1994; familiar faces: Ramon, Caharel, & Rossion, 2011; familiar words: Glass, Cox, & LeVine, 1974; familiar size: Konkle & Oliva, 2012) or typical objects (Ashby & Maddox, 1991, 1994; McCloskey & Glucksberg, 1979; Rips et al., 1973; Rosch, 1973; Rosch, Simpson, & Miller, 1976), but familiar objects are often deemed more typical (Iordan, Green, Beck, & Fei-Fei, 2016; Malt & Smith, 1982) and unfamiliar objects are quickly rejected from category membership (Casey, 1992). Thus, our use of reaction times for judging typicality may have included familiarity, and our finding that participants chose more typical objects may have included choice of more familiar objects. Nevertheless, while Rosch (1973) found that categorization responses are faster to prototypical objects, Ashby, Boynton, and Lee (1994) did not find a “meaningful correlation between response time and stimulus familiarity” when not related to category. In our experiments, choosing the prototype or familiar object as having been SEEN when it was not shown in the trial sequence is surprising and not expected on the basis of familiarity alone. Rather, such a result would be consistent with a situation where sequence object representations included a representation of their prototype (e.g., see Benna & Fusi, 2019).
Our results resemble the Deese–Roediger–McDermott (DRM; Roediger & McDermott, 1995) finding that when presented with a list of related words, participants recall a nonpresented “lure” word with the same frequency as the presented words. In the DRM paradigm, participants study lists of words (e.g., tired, bed, awake, rest, dream, night, blanket, doze, slumber, snore, pillow, peace, yawn, drowsy) that are related to a nonpresented lure word (e.g., sleep). On a later test, participants often claim that they previously studied the related lure words. Similarly, it was found that after learning a set of distortions of a random dot pattern, participants learn the undistorted pattern—the prototype—more easily than a new distortion, though only after a first viewing (Posner & Keele, 1968). These results may be added to the ensemble and categorization results, relating different situations—semantic and perceptual—in which perceiving related items induces representation and recall of the mean or prototype, suggesting that similar processes may underlie them.
Such recall is referred to as “false” memory, since false recognition of the related lure words is indistinguishable from true recognition of studied words (Gallo, 2006; Schacter & Addis, 2007). Our results, too, reflect “false” memories, since participants indicate recall of items that were not presented in the sequence. This is equally true for our study of category prototype recall and for our studies, and those of many others, of set ensemble perception and recall of the set mean, even in its absence from the presented sequence. Nevertheless, the term “false memory” is generally used in reference to recall of events (Zaragoza, Hyman, & Chrobak, 2019) and narratives (Frenda, Nichols, & Loftus, 2011) that did not occur or were not narrated. Finding false memory of category prototype certainly extends this notion from abstract mean parameters (size, orientation, brightness, etc.) to more concrete objects and semantic categories, but this is still far from false episodic memory. Further study is required to decide whether these different types of false memory are related, and if so, what the relationship between them is.
Perceiving category exemplars in terms of the category prototype may be the source of categorical priming (e.g., Fazio, Williams, & Powell, 2000; Ray, 2008), whereby responses to unseen exemplars (and in particular to the category prototype) are faster when primed by previously perceiving another category exemplar. Interestingly, similar effects have been found for sets (Marchant & de Fockert, 2009), and there is even negative priming for unconscious viewing of single unusual shapes (DeSchepper & Treisman, 1996).
We conclude that while observing the projected images, participants first implicitly generalized them into a category. Then, at the membership test, they used this categorical context to assess the probability that each test image was present in the sequence. That is, when visual memory capacity is insufficient, this implicit categorical context affects their judgment. If categorizations are indeed executed by computations similar to those of statistical perception in the visual system, then both may be particular embodiments of a general system that efficiently determines our perception and behavior. It is especially striking that set mean perception and categorization, which aid behavior in a too-rich and too-complex environment by applying shortcuts to perception, may share perceptual-computational mechanisms, perhaps at different cortical levels.

We have suggested that the neural mechanism used is a population code (Georgopoulos et al., 1986) that encodes both the mean and the range of the stimulus set (Hochstein, 2016a, 2016b; Pavlovskaya, Soroker, Bonneh, & Hochstein, 2017a, 2017b; see also Brezis, Bronfman, Jacoby, Lavidor, & Usher, 2016; Brezis, Bronfman, & Usher, 2018). Using a population code to determine the set mean answers the question of how the visual system computes mean values without knowing the value of each element separately (whether represented when viewed and then forgotten, or never explicitly represented). Due to the broad tuning and overlap of neuronal receptive fields, a population code is necessarily used for perceiving individual element values, and it may be used directly, with a broader range of neurons over space and time, to perceive set mean values. We now suggest that the same type of population code may be used for categorization: category prototype and boundaries could be the readout of fuzzy representations of multiple exemplars.
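The population-code idea can be made concrete with a small numerical sketch (our illustration, not a model from the paper; tuning widths and preferred values are arbitrary): broadly tuned units respond to every element of a set, and the set mean can then be read out as the response-weighted average of the units' preferred values, without any unit storing individual element values.

```python
# Sketch of a population-vector readout of a set mean.
# Gaussian tuning curves and stimulus values are arbitrary illustrative choices.
import math

preferred = [10, 20, 30, 40, 50, 60, 70, 80, 90]   # units' preferred values
sigma = 15.0                                        # broad tuning width

def tuning(pref, stim):
    """Gaussian response of a unit with preferred value `pref` to `stim`."""
    return math.exp(-((stim - pref) ** 2) / (2 * sigma ** 2))

elements = [30, 40, 50]                             # the presented set (mean 40)

# pool each unit's responses over all set elements (no per-element storage)
pooled = [sum(tuning(p, s) for s in elements) for p in preferred]

# population-vector readout: response-weighted average of preferred values
estimate = sum(p * r for p, r in zip(preferred, pooled)) / sum(pooled)
```

The readout approximates the set mean (here, close to 40); the same pooled activity profile could, in principle, also expose the set range or category boundaries through its spread.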
It has already been suggested that ensemble summary statistics might serve as the basis for rapid visual categorizations (Utochkin, 2015).
A distinction was made between automatic, intuitive global-attention scene gist perception, using vision at a glance, and explicit, focused-attention vision with scrutiny (Hochstein & Ahissar, 2002). Gist is acquired automatically and implicitly by bottom-up processing, and details are added to explicit perception by further top-down guided processes. The current study demonstrates that even when observers intend to detect and remember the details of each image in a sequence—an intention that in this case often leads to failure—the automatic, implicit process of gist perception nevertheless succeeds in acquiring both set and category information.
A question that still needs to be addressed concerns the cerebral correlates of the mechanisms underlying these processes. An investigation using physiological techniques (fMRI or EEG) while participants perform behavioral tasks, as in the current study, might indicate brain regions or electrophysiological patterns of activity that are specific to the systems generating these automatic representations. Such a study might also test the notion that the same sites perform set mean and range perception as well as categorization.
We thank Yuri Maximov, the lab’s talented programmer, and lab comembers Safa’ Abassi and Miriam Carmeli. Thanks to Stefano Fusi, Merav Ahissar, Udi Zohary, Israel Nelken, Robert Shapley, Howard Hock, and Anne Treisman (of blessed memory), for helpful discussions of earlier drafts of this paper. This study was supported by a grant from the Israel Science Foundation (ISF).
The data for the experiments reported will be made available online, and none of the experiments was preregistered.
- Ashby, F. G., & Valentin, V. V. (2017). Multiple systems of perceptual category learning: Theory and cognitive tests. In H. Cohen & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (2nd ed., pp. 157–188). Amsterdam, Netherlands: Elsevier.
- Barsalou, L. W. (1987). The instability of graded structure: Implications for the nature of concepts. In U. Neisser (Ed.), Concepts and conceptual development: Ecological and intellectual factors in categorization (pp. 101–140). Cambridge, UK: Cambridge University Press.
- Benna, M. K., & Fusi, S. (2019). Are place cells just memory cells? Memory compression leads to spatial tuning and history dependence. bioRxiv 624239. https://doi.org/10.1101/624239
- Casey, P. J. (1992). A reexamination of the roles of typicality and category dominance in verifying category membership. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(4), 823–834.
- Fabre-Thorpe, M. (2011). The characteristics and limits of rapid visual categorization. Frontiers in Psychology, 2, 243, 1–12.
- Gallo, D. A. (2006). Associative illusions of memory. New York, NY: Taylor & Francis.
- Goldstone, R. L., & Kersten, A. (2003). Concepts and categorization. In I. B. Weiner (Ed.), Handbook of psychology (pp. 597–621). Hoboken, NJ: Wiley.
- Haberman, J., & Whitney, D. (2012). Ensemble perception: Summarizing the scene and broadening the limits of visual processing. In J. Wolfe & L. Robertson (Eds.), From perception to consciousness: Searching with Anne Treisman (pp. 339–349). New York, NY: Oxford University Press.
- Hochstein, S. (2016b). How the brain represents statistical properties. Perception, 45, 272.
- Hochstein, S., Khayat, N., Pavlovskaya, M., Bonneh, Y. S., & Soroker, N. (2018). Set summary perception, outlier pop out, and categorization: A common underlying computation? Paper presented at the 41st European Conference on Visual Perception, Trieste, Italy.
- Hochstein, S., Khayat, N., Pavlovskaya, M., Bonneh, Y., Soroker, N., & Fusi, S. (2019). Perceiving category set statistics on the fly. Journal of Vision, 19.
- Khayat, N., & Hochstein, S. (2018). Perceiving set mean and range: Automaticity and precision. Journal of Vision, 18(23), 1–14.
- Nosofsky, R. M. (1988). Exemplar-based accounts of relations between classification, recognition, and typicality. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(4), 700–708.
- Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. In S. Martinez-Conde, S. L. Macknik, L. M. Martinez, J.-M. Alonso, & P. U. Tse (Eds.), Progress in brain research: Visual perception, fundamentals of awareness: Multi-sensory integration and high-order perception, 155B, 23–36.
- Pavlovskaya, M., Soroker, N., Bonneh, Y., & Hochstein, S. (2017a). Statistical averaging and deviant detection in heterogeneous arrays. 40th European Conference on Visual Perception Abstracts, 40, 160.
- Pavlovskaya, M., Soroker, N., Bonneh, Y., & Hochstein, S. (2017b). Statistical averaging and deviant detection may share mechanisms. Washington, DC: Society for Neuroscience.
- Roediger, H. L., & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 803–814.
- Rosch, E. (1999). Reclaiming cognition: The primacy of action, intention and emotion. Journal of Consciousness Studies, 6(11/12), 61–77.
- Rosch, E. (2002). Principles of categorization. In D. Levitin (Ed.), Foundations of cognitive psychology: Core readings (pp. 251–270). Cambridge, MA: MIT Press. (Original work published 1978)
- Rosch, E., & Lloyd, B. B. (Eds.). (1978). Cognition and categorization. Hillsdale, NJ: Erlbaum.
- Yamanashi-Leib, A., Kosovicheva, A., & Whitney, D. (2016). Fast ensemble representations for abstract visual impressions. Nature Communications, 7, 13186, 1–10.
- Zaragoza, M. S., Hyman, I., & Chrobak, Q. M. (2019). False memory. In N. Brewer & A. B. Douglass (Eds.), Psychological science and the law (pp. 182–207). New York, NY: Guilford Press.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.