Encyclopedia of Computational Neuroscience

Living Edition
| Editors: Dieter Jaeger, Ranu Jung

Somatosensory Cortex: Neural Coding of Shape

  • Jeffrey M. Yau
Living reference work entry


DOI: https://doi.org/10.1007/978-1-4614-7320-6_384-4

Keywords

Somatosensory Cortex · Proprioceptive Information · Shape Representation · Stimulus Element · Orientation Tuning

Definition

We can appreciate object shape by touch alone. Spatial feature representations, initially carried by populations of slowly adapting type I afferents (SA1s) in the peripheral nervous system, are centrally encoded by neural populations residing in the somatosensory cortex. Cortical neurons function as spatial filters that select specific object features falling within their receptive fields (RFs) on the skin. This neural selectivity, or tuning, changes across a hierarchy of processing stages spanning primary (S1) and secondary somatosensory (S2) cortices (Fig. 1a). While somatosensory neurons at the first cortical processing stage (area 3b in S1) have small receptive fields and respond to simple contour features like oriented bars and edges, neurons at intermediate and later processing stages (area 2 and S2) respond to stimulation of larger skin regions and are selective for more complex spatial features. In individual neurons and across neural populations, spatial tuning evolves in time, reflecting extensive and rapid network processing. Decoding of object shape information requires integrating activity over neural populations tuned for cutaneous and proprioceptive information.
Fig. 1

Somatosensory system and shape perception. (a) The somatosensory cortical system in primates comprises primary (S1) and secondary (S2) somatosensory cortices. S1 consists of a densely interconnected set of regions: Brodmann areas 1, 2, and 3. (b) Tactile shape perception requires the integration of cutaneous and proprioceptive information. The phone’s straight edge (red dashed line) is processed within each finger pad and is represented as an oriented edge feature in the activity of cortical neurons

Detailed Description

We rely on our sense of touch to perceive and manipulate objects. Consider the act of holding a smartphone in one hand (Fig. 1b) – although much of the phone’s body is cradled in the palm of your hand, notice how your fingers are deftly positioned around the phone’s frame, allowing you to grip it easily and securely. If you gently adjust and tighten your grip, you may notice how the phone’s frame presses into each individual finger, and you can focus your attention on how the frame passes over your skin along a single edge (Fig. 1b, red dashed line). Shape perception is based in part on this cutaneous information, and importantly, regardless of where your fingers and thumb fall on the phone as you adjust your grip, you maintain a stable perception of the object in your hand. This stable object representation is encoded in the distributed neural activity over shape-sensitive neurons in the somatosensory cortex.

To understand how shape information is represented in the brain, we must consider how this information is initially encoded at the hand. Specialized mechanoreceptors in the skin respond to different cutaneous aspects of object contact (Johnson 2001). Note that separate afferent populations also carry proprioceptive information about hand conformation, which is necessary for stereognosis (i.e., haptic perception of objects) – this information is integrated with cutaneous shape information in the cortex (see below). Individual afferents display no spatial tuning and respond only when a stimulus impinges on the afferent’s small RF. In this manner, a shape’s spatial profile is carried in the total activity over a population of afferents. Of the cutaneous afferents, SA1 fibers carry the most refined spatial information, and the SA1 population activity conveys a low-pass filtered neural image of a tactile stimulus that is isomorphic with respect to the contacted shape. Ultimately, information about object shape must be extracted from a pattern of activation across a two-dimensional sheet of receptors embedded in the skin. The challenge to characterizing somatosensory shape coding, then, is in understanding how cortical neurons process this two-dimensional spatial information conveyed by peripheral afferents.
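To make the notion of a low-pass filtered, isomorphic neural image concrete, the following minimal sketch (in Python) blurs a binary contact pattern with a Gaussian kernel standing in for afferent RF size and innervation density; the kernel width and grid resolution are illustrative assumptions, not measured values.

# Minimal sketch: SA1 population activity approximated as a low-pass
# filtered "neural image" of a contacted shape. The Gaussian width
# (sigma_mm) is an illustrative stand-in for afferent RF size; it is
# not a value taken from the source text.
import numpy as np
from scipy.ndimage import gaussian_filter

def neural_image(stimulus_bitmap, sigma_mm=1.0, pixels_per_mm=10):
    """Blur a binary contact pattern to mimic spatial low-pass filtering
    by the SA1 afferent population (isomorphic but smoothed)."""
    sigma_px = sigma_mm * pixels_per_mm
    return gaussian_filter(stimulus_bitmap.astype(float), sigma=sigma_px)

# Example: a thin oriented bar pressed into a 10 x 10 mm patch of skin.
skin = np.zeros((100, 100))
skin[48:52, 20:80] = 1.0            # bar contact (1 = contact, 0 = none)
image = neural_image(skin)          # smoothed population "image" of the bar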

Spatial Filtering

A large body of neurophysiological evidence implicates area 3b of S1 cortex as the primary cortical recipient of sensory input projecting from the afferent systems, sent via the dorsal column nuclei and the thalamus (Bensmaia and Yau 2011). Area 3b neurons function as linear spatial filters that cover small regions on a single finger pad: A neuron’s RF is a linear approximation of the effects of stimulus elements inside the RF on the neuron’s response. Stimulus contact in any single RF region can result in an increase or decrease in spiking activity, and a neuron’s overall response to each stimulus pattern is given by the sum of the effects of the stimulus elements over all RF regions. This can be formalized as
$$ r(t)={b}_0+{b}_1{x}_1(t)+{b}_2{x}_2(t)+\dots +{b}_i{x}_i(t) $$
(1)
where r(t) is the spiking activity predicted in response to the stimulus pattern at time t, b_0 is the baseline firing rate, b_i is the effect strength (positive or negative) of a stimulus element in the ith region of the skin, and x_i(t) is the stimulus value in the ith region of the skin (e.g., as in a bitmap, where a value of 1 indicates the presence of a stimulus element and 0 indicates no contact) at time t. The model can be rewritten in matrix form in order to solve for the set of weights defining the linear RF:
$$ r= Xb $$
(2)
where r is a vector containing firing rates over all response times, X is the stimulus matrix, and b is a vector of weights (plus a constant term) describing the RF. The matrix equation can be solved for b:
$$ b={\left({X}^TX\right)}^{-1}{X}^Tr $$
(3)
where X^T X is the stimulus autocorrelation matrix (which is an identity matrix when a white noise stimulus devoid of temporal and spatial correlation is employed). RF maps for cortical neurons recorded from area 3b, derived from linear regression or reverse correlation, display obvious structured organization and typically consist of a central excitatory field flanked by one or more inhibitory fields (DiCarlo et al. 1998).
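The following minimal sketch illustrates the regression in Eqs. 1–3 on simulated data; the white-noise stimulus, the hidden RF, and the noise level are illustrative assumptions rather than values from the recordings cited above.

# Sketch of linear RF estimation (Eqs. 1-3) on simulated data.
# The "true" RF, noise level, and stimulus statistics are illustrative
# assumptions, not values from the studies cited in the text.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_frames = 25, 5000          # 5 x 5 skin grid, white-noise frames

true_rf = rng.normal(size=n_pixels)    # hidden linear RF (excitation/inhibition)
X = rng.integers(0, 2, size=(n_frames, n_pixels)).astype(float)  # binary probes
X = np.hstack([np.ones((n_frames, 1)), X])                       # constant term b_0
b_true = np.concatenate([[5.0], true_rf])                        # baseline + weights

r = X @ b_true + rng.normal(scale=1.0, size=n_frames)            # noisy firing rates

# Normal equations: b = (X^T X)^-1 X^T r (equivalently, np.linalg.lstsq)
b_hat = np.linalg.solve(X.T @ X, X.T @ r)
rf_map = b_hat[1:].reshape(5, 5)       # estimated spatial RF, baseline excluded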
The shape and arrangement of these excitatory and inhibitory RF structures, or kernels, clearly underlie the spatial tuning of area 3b neurons. Specifically, a neuron may respond vigorously to a small bar indented in its RF at a particular orientation, and its response strength will decrease as the bar’s orientation deviates from this “preferred” orientation (Fig. 2a). The shape and orientation of the RF structures tend to match those of the “preferred” bar. Because of the clear correspondence between RF composition and spatial selectivity, orientation tuning in area 3b neurons can also be parameterized with two-dimensional Gabor filter RF models (Bensmaia et al. 2008; Fig. 2b). Although the neural basis for the linear RF composition is unknown, such organization likely results from the patterned convergence of afferent inputs projecting through the medial lemniscal system. However, cortical response selectivity is likely also shaped by recurrent network interactions among cortical populations (see below).
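As an illustration of the Gabor parameterization, the sketch below builds a two-dimensional Gabor RF and computes its linear response to bars at different orientations; the envelope width, carrier frequency, and grid size are assumed values chosen for clarity, not parameters fitted to neural data.

# Sketch of a 2D Gabor RF and the orientation tuning it produces.
# Parameter values (sigma, spatial frequency, grid size) are illustrative.
import numpy as np

def gabor_rf(size=21, theta=np.deg2rad(67.5), sigma=3.0, freq=0.15):
    """Gabor filter: Gaussian envelope times an oriented sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the preferred axis
    yr = -x * np.sin(theta) + y * np.cos(theta)     # coordinate across the preferred axis
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * freq * yr)
    return envelope * carrier

def bar_stimulus(size=21, theta=0.0, width=1.5):
    """Binary image of a thin bar through the RF center at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))   # distance to the bar axis
    return (dist <= width).astype(float)

rf = gabor_rf()
orientations = np.deg2rad(np.arange(0, 180, 22.5))
# Linear response = sum over the RF of (weight x stimulus); peaks near 67.5 deg.
tuning = [np.sum(rf * bar_stimulus(theta=t)) for t in orientations]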
Fig. 2

Neural coding of bar orientation. (a) Top: spiking activity of an example somatosensory neuron to a bar stimulus presented briefly within the neuron’s RF at different orientations reveals clear response modulation that is consistent over repeated trials (rows in the raster plot). Bottom: orientation-tuning curve shows the neuron’s response as a function of bar orientation and a clear preference for bars oriented near 67.5°. (b) Top: RF map for a different orientation-tuned cortical neuron computed by averaging responses (scale bar) to small punctate probes presented at each RF location. Bottom: RF map capturing orientation selectivity with a two-dimensional Gabor function (relative response weight for each pixel given by scale bar)

Neural responses become more selective as one ascends the somatosensory processing pathway. As a result, while the responses of afferents and of neurons in area 3b can be accounted for effectively using linear models, the responses of neurons in area 1, the primary recipient of projections from area 3b, cannot. Indeed, neurons in area 1 exhibit many properties that are similar to those of their counterparts in area 3b (e.g., orientation tuning), but detailed RF comparisons consistently reveal more complex spatial selectivity in area 1. Response properties continue to grow more complex and nonlinear in area 2 (the most caudal S1 region in the postcentral gyrus) and in S2 (located in the superior bank of the lateral sulcus). RF size increases dramatically to span multiple fingers and even one or both hands in their entirety (in the case of S2 neurons). Linear RF models account for little response variance in most neurons in these populations as shape coding transitions from orientation selectivity to curvature tuning (Yau et al. 2009): Individual neurons respond preferentially to curved and angled contour fragments pointing in narrow direction ranges (Fig. 3). Although these response patterns cannot be approximated with two-dimensional spatial filter models, they can be modeled explicitly with tuning functions in the curvature direction domain. Thus, we can consider transitions in shape coding within the somatosensory system as projections of shape representations from an initial two-dimensional skin-based coordinate space into orientation and curvature feature spaces. Neural computations that transform object information into higher-order contour derivatives (orientation is a first-order spatial derivative; curvature can be defined as the rate of change in orientation along a contour, a second-order derivative) may be especially efficient for building compact and sparse shape representations. In other words, these transformations in shape representations can help minimize the number of active neurons required for coding shape, thereby reducing the metabolic demands related to shape processing.
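A simple way to express tuning in the curvature-direction domain is a circular (von Mises) tuning function, as in the minimal sketch below; the preferred direction, bandwidth, and firing-rate values are illustrative assumptions, not fits to the data of Yau et al. (2009).

# Sketch of curvature-direction tuning modeled as a von Mises function.
# Preferred direction (180 deg), concentration, and gain are illustrative.
import numpy as np

def curvature_direction_tuning(direction_deg, preferred_deg=180.0,
                               kappa=2.0, peak_rate=60.0, baseline=5.0):
    """Predicted firing rate for a curved contour fragment pointing in
    direction_deg (von Mises tuning over the circular direction variable)."""
    delta = np.deg2rad(direction_deg - preferred_deg)
    von_mises = np.exp(kappa * (np.cos(delta) - 1.0))   # equals 1 at the preferred direction
    return baseline + peak_rate * von_mises

directions = np.arange(0, 360, 45)                      # 8 tested curvature directions
rates = curvature_direction_tuning(directions)          # peaks at 180 deg (leftward-pointing curves)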
Fig. 3

Neural coding of contour curvature. (a) Gray scale indicates average spiking activity of a neuron in area 2 responding to contour fragments indented into the skin at different curvature directions. (b) Raster plots and peri-stimulus time histograms show curvature responses sorted by direction (rows in a). Central bouquet plot depicts the same neuron’s average responses as a function of direction and reveals its strong preference for leftward (180°) pointing curves

Position Tolerance

Shape-selective responses in many area 2 and S2 neurons exhibit position consistency: Tuning preferences are maintained even as stimulus features are moved over different RF locations. Although stimulus position changes can result in general response strength modulations, the relative preference for particular ranges of edge orientation or contour curvature direction is preserved regardless of where the stimulus is presented within and across finger pads (Fitzgerald et al. 2006). This position tolerance is computationally demanding and requires nonlinear shape coding mechanisms.
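The source studies do not specify the underlying circuit, but one commonly hypothesized nonlinear mechanism (analogous to complex cells in visual cortex) is pooling over position-shifted subunits. The sketch below illustrates why a max over shifted copies of an oriented filter preserves orientation preference across stimulus positions; it is a hypothetical illustration, not the documented area 2/S2 computation.

# Hypothetical illustration of position tolerance via nonlinear (max) pooling
# over position-shifted oriented subunits. This is a common modeling idiom,
# not the documented area 2 / S2 circuit.
import numpy as np

def subunit_response(stimulus, rf, offset):
    """Linear response of one subunit: the oriented RF applied to the
    stimulus shifted by `offset` (rows, cols)."""
    return np.sum(rf * np.roll(stimulus, shift=offset, axis=(0, 1)))

def pooled_response(stimulus, rf, offsets):
    """Nonlinear pooling: max over subunits tiling different positions."""
    return max(subunit_response(stimulus, rf, off) for off in offsets)

# Toy horizontal-bar detector: excitatory band flanked by inhibition.
rf = np.zeros((15, 15))
rf[6:9, :] = 1.0
rf[3:6, :] = -0.5
rf[9:12, :] = -0.5

horizontal = np.zeros((15, 15)); horizontal[4:7, :] = 1.0   # preferred bar, displaced from the RF center
vertical = np.zeros((15, 15)); vertical[:, 4:7] = 1.0       # non-preferred orientation
offsets = [(dy, 0) for dy in range(-5, 6)]

# The single subunit at offset (0, 0) responds weakly to the displaced horizontal
# bar, but the pooled response stays large and still exceeds the vertical bar:
# orientation preference is preserved across stimulus positions.
print(pooled_response(horizontal, rf, offsets), pooled_response(vertical, rf, offsets))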

Temporal Dynamics

Shape coding in the somatosensory system is a dynamic process that takes place over tens to hundreds of milliseconds after a stimulus contacts the skin. Dynamic shape coding in peripheral afferents and in cortical neurons has been studied by characterizing selectivity using spatiotemporal receptive field (STRF) models, which capture the temporal modulation of neural RFs, in addition to their spatial tuning properties (Fig. 4). (These models are a temporal extension of the spatial filtering models described above.) Cortical neurons display a range of spatiotemporal response patterns, and the most common STRF (in areas 3b and 1) consists of initial excitation, flanked by inhibitory regions (“surround inhibition”) and followed by a long period of (“replacing”) inhibition (Sripati et al. 2006). Because of this composition, the majority of cortical STRFs are space-time inseparable: RFs cannot be decomposed into the product of a spatial kernel (excitatory region) and a temporal kernel (e.g., an exponential response decay or a difference of two exponential decays). (Note that some cortical STRFs lack surround inhibition, like afferent STRFs, making them space-time separable.) Although the neural basis for these dynamic response patterns is still unknown, there is little question that the evolution of RF composition depends on rapid intracortical interactions. Importantly, particular aspects of dynamic shape coding, especially the spatiotemporal profile of inhibition, may play a critical role in establishing response properties like invariance to scanning velocity (DiCarlo and Johnson 1999), i.e., a preference for spatial patterns that is consistent across a range of scanning speeds. Moreover, recurrent network interactions likely also contribute to the dynamic feature selectivity observed in cortical neurons: Orientation selectivity builds gradually in areas 3b and 1, after population spiking activity peaks (Bensmaia et al. 2008), and curvature signaling similarly lags an initial buildup of spiking activity in area 2 and S2 (Yau et al. 2013). Such response time courses may reflect the role of cortical inhibition in sculpting neural selectivity and sharpening shape representations over time.
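Space-time separability can be checked by treating the STRF as a (space × time) matrix and asking how much of its variance a rank-one outer product of a spatial and a temporal kernel captures, e.g., via the singular value decomposition. The sketch below uses a synthetic STRF for illustration; it is not data from Sripati et al. (2006).

# Sketch: testing space-time separability of an STRF with the SVD.
# A separable STRF equals an outer product of a spatial and a temporal kernel
# (rank one), so the first singular value should dominate. The example STRF
# here is synthetic.
import numpy as np

def separability_index(strf):
    """Fraction of STRF variance captured by the best rank-one
    (spatial kernel x temporal kernel) approximation."""
    singular_values = np.linalg.svd(strf, compute_uv=False)
    return singular_values[0]**2 / np.sum(singular_values**2)

# Synthetic separable STRF: Gaussian spatial kernel x decaying temporal kernel.
space = np.exp(-0.5 * np.linspace(-3, 3, 25)**2)        # spatial profile
time = np.exp(-np.linspace(0, 4, 40))                   # exponential response decay
separable_strf = np.outer(space, time)                  # index ~ 1.0

# Adding a delayed, spatially offset inhibitory component (as in many cortical
# STRFs) lowers the index, i.e., the STRF becomes space-time inseparable.
delayed = np.r_[np.zeros(10), np.exp(-np.linspace(0, 3, 30))]
inseparable_strf = separable_strf - 0.7 * np.outer(np.roll(space, 6), delayed)
print(separability_index(separable_strf), separability_index(inseparable_strf))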
Fig. 4

Dynamic spatial response properties. Example spatiotemporal receptive field (STRF) for a peripheral afferent unit (top) and a cortical neuron (bottom). Pixel color (scale bar) indicates response contribution of each RF location (spikes per second per micrometer). The majority of cortical STRFs in areas 3b and 1 comprise an excitatory center coupled with surround inhibition and trailed by replacing inhibition

Proprioception

Representation of three-dimensional object shape requires the integration of cutaneous and proprioceptive information. Consider again the act of holding and perceiving a smartphone in your hand: All of the cutaneous processing described above only accounts for your ability to perceive the phone’s straight and curved boundary edges in contact with your fingers. The exact computation used by somatosensory neurons to combine these cutaneous representations with hand conformation information is unknown, but neuroimaging and neurophysiological evidence implicate neural populations residing in area 2 (Mountcastle 2005). Indeed, area 2 neurons, while selective for cutaneous stimulus patterns, are additionally sensitive to the three-dimensional arrangement of the hand and fingers (Iwamura and Tanaka 1978). Accordingly, area 2 lesions result in clear haptic object perception deficits (Carlson 1981). There is also evidence that area 2 responses may be influenced by motor command signals in the context of active object exploration (London and Miller 2013): This proprioceptive feedback (which may be the result of efference copy) from the motor system could serve to gate self-generated sensory signals and to refine shape representations acquired during object manipulations.

Conclusion

Shape representations are carried in population activity of somatosensory cortical neurons. Across different levels of a cortical hierarchy, shape representations transition from occupying a skin-centered coordinate space to higher-order feature spaces. Response selectivity grows more complex at successively higher processing stages in the somatosensory cortical pathway: Linear responses give way to nonlinear responses. Neural response patterns evolve rapidly in time, reflecting intracortical interactions that sharpen shape representations. In many respects, the principles of shape coding in the somatosensory system resemble those that apply to shape coding in the visual system (although equivalence may be restricted to two-dimensional shape representations). Despite our advancements in understanding neural shape coding in the somatosensory cortex, the computations underlying three-dimensional object representations, which require the integration of cutaneous and proprioceptive information, are not well understood. Similarly, whether and how haptic shape information interacts with other object characteristics like texture and temperature remains to be tested. Finally, the computations underlying the decoding of tactile shape representations for perceptual access and executive functioning remain to be characterized.

References

  1. Bensmaia SJ, Yau JM (2011) The organization and function of somatosensory cortex. In: Hertenstein MJ, Weiss SJ (eds) The handbook of touch, 1st edn. Springer, New York, pp 161–187
  2. Bensmaia SJ, Denchev PV, Dammann JF III, Craig JC, Hsiao SS (2008) The representation of stimulus orientation in the early stages of somatosensory processing. J Neurosci 28:776–786
  3. Carlson M (1981) Characteristics of sensory deficits following lesions of Brodmann’s areas 1 and 2 in the postcentral gyrus of Macaca mulatta. Brain Res 204:424–430
  4. DiCarlo JJ, Johnson KO (1999) Velocity invariance of receptive field structure in somatosensory cortical area 3b of the alert monkey. J Neurosci 19:401–419
  5. DiCarlo JJ, Johnson KO, Hsiao SS (1998) Structure of receptive fields in area 3b of primary somatosensory cortex in the alert monkey. J Neurosci 18:2626–2645
  6. Fitzgerald PJ, Lane JW, Thakur PH, Hsiao SS (2006) Receptive field properties of the macaque second somatosensory cortex: representation of orientation on different finger pads. J Neurosci 26:6473–6484
  7. Iwamura Y, Tanaka M (1978) Postcentral neurons in hand region of area 2: their possible role in the form discrimination of tactile objects. Brain Res 150:662–666
  8. Johnson KO (2001) The roles and functions of cutaneous mechanoreceptors. Curr Opin Neurobiol 11:455–461
  9. London BM, Miller LE (2013) Responses of somatosensory area 2 neurons to actively and passively generated limb movements. J Neurophysiol 109:1505–1513
  10. Mountcastle VB (2005) The sensory hand: neural mechanisms in somatic sensation. Harvard University Press, Cambridge
  11. Sripati AP, Yoshioka T, Denchev P, Hsiao SS, Johnson KO (2006) Spatiotemporal receptive fields of peripheral afferents and cortical area 3b and 1 neurons in the primate somatosensory system. J Neurosci 26:2101–2114
  12. Yau JM, Pasupathy A, Fitzgerald PJ, Hsiao SS, Connor CE (2009) Analogous intermediate shape coding in vision and touch. Proc Natl Acad Sci U S A 106:16457–16462
  13. Yau JM, Connor CE, Hsiao SS (2013) Representation of tactile curvature in macaque somatosensory area 2. J Neurophysiol 109:2999–3012

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Johns Hopkins University, Baltimore, USA