
BMC Neuroscience 2009, 10:P158

Visualization of higher-level receptive fields in a hierarchical model of the visual system

Open Access
Poster presentation

Keywords

Receptive Field Image Sequence Intermediate Layer Input Sequence Complex Cell 

Early visual receptive fields have been measured extensively and are fairly well mapped. Receptive fields in higher areas, on the other hand, are very difficult to characterize, because it is not clear what they are tuned to and which stimuli should be used to study them. Early visual receptive fields have also been reproduced by computational models. Slow feature analysis (SFA), for instance, is an algorithm that finds functions extracting the most slowly varying features from a multi-dimensional input sequence [1]. Applied to quasi-natural image sequences, i.e. image sequences derived from natural images by translation, rotation, and zoom, SFA reproduces many properties of complex cells in V1 [2].
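To make the slowness objective concrete, the following is a minimal linear-SFA sketch in Python (NumPy/SciPy). It is only illustrative: the model in [2] applies SFA to nonlinearly (quadratically) expanded image patches rather than a plain linear readout, and the function and variable names below are our own.

    import numpy as np
    from scipy.linalg import eigh

    def linear_sfa(x, n_components=2):
        """Linear slow feature analysis on a signal x of shape (T, D).

        Finds weight vectors w so that y = (x - mean) @ w varies as slowly
        as possible over time, under unit-variance and decorrelation
        constraints.  This reduces to a generalized eigenvalue problem
        between the covariance of the temporal derivative and the
        covariance of the signal; the smallest eigenvalues correspond to
        the slowest features.
        """
        x = x - x.mean(axis=0)                 # center the input
        dx = np.diff(x, axis=0)                # discrete-time derivative
        cov = x.T @ x / len(x)                 # signal covariance
        dcov = dx.T @ dx / len(dx)             # derivative covariance
        eigvals, eigvecs = eigh(dcov, cov)     # eigenvalues in ascending order
        return eigvecs[:, :n_components]       # slowest directions first

    # Toy usage: recover a slow sine hidden in a random mixture of faster ones.
    t = np.linspace(0, 4 * np.pi, 2000)
    sources = np.column_stack([np.sin(0.5 * t), np.sin(11 * t), np.sin(23 * t)])
    x = sources @ np.random.randn(3, 3)        # random linear mixture
    w = linear_sfa(x, n_components=1)
    slowest = (x - x.mean(axis=0)) @ w         # ~ the slow sine, up to sign and scale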

A hierarchical network of SFA units learns invariant object representations much like those in inferotemporal cortex (IT) [3]. These successes suggest that units in the intermediate layers of the network might share properties with cells in V2 or V4. The goal of this project is therefore to develop techniques to visualize and characterize such units, in order to understand how cells in V2/V4 might work. This is nontrivial because the units are highly nonlinear. Our visualization algorithm is gradient-based and is applied in a cascade within the network: we start with a natural image patch as input, which is then optimized by gradient ascent to maximize the output of one particular unit. Figure 1 shows such optimal stimuli for units in the first (a, b) and the second layer (c, d); the latter can be associated with cells in V2/V4. We plan to extend this approach to higher layers and larger receptive fields, and we will also develop techniques to visualize the invariances of the units, i.e. those variations of the input that have little effect on a unit's output. The long-term goal is to provide a good stimulus set for characterizing cells in V2/V4.
Figure 1

Optimal stimuli of units in the first layer (a, b) and the second layer (c, d) of a hierarchical SFA network optimized for slowness and trained with quasi-natural image sequences.
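As a rough illustration of the gradient-ascent procedure described above, the Python sketch below maximizes the response of a single unit, estimating the gradient by central finite differences so that no analytic derivative of the highly nonlinear unit is needed. This is not the authors' implementation: the fixed-norm constraint is our own assumption to keep the ascent bounded, and the toy quadratic unit at the end is a hypothetical stand-in for a real network unit.

    import numpy as np

    def optimal_stimulus(unit, x0, n_steps=200, lr=0.1, eps=1e-4):
        """Gradient-ascent search for a stimulus that maximizes a unit's output.

        `unit` is any callable mapping a flattened image patch to a scalar
        response (e.g. a unit in an intermediate layer).  The gradient is
        estimated by central finite differences.
        """
        x = x0.astype(float).copy()
        norm = np.linalg.norm(x)              # keep overall contrast fixed (assumption)
        for _ in range(n_steps):
            grad = np.empty_like(x)
            for i in range(x.size):           # central-difference gradient estimate
                d = np.zeros_like(x)
                d[i] = eps
                grad[i] = (unit(x + d) - unit(x - d)) / (2 * eps)
            x += lr * grad                    # ascend the unit's response
            x *= norm / np.linalg.norm(x)     # re-project onto the fixed-norm sphere
        return x

    # Toy demonstration with a hypothetical quadratic "unit": its response
    # peaks at a hidden template, which the ascent should recover.
    rng = np.random.default_rng(0)
    template = rng.standard_normal(64)        # stand-in for an 8x8 image patch
    toy_unit = lambda v: -np.sum((v - template) ** 2)
    start = rng.standard_normal(64)           # stand-in for a natural image patch
    result = optimal_stimulus(toy_unit, start, n_steps=100, lr=0.05)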

References

  1. Wiskott L, Sejnowski TJ: Slow feature analysis: Unsupervised learning of invariances. Neural Computation 2002, 14:715-770. doi:10.1162/089976602317318938.
  2. Berkes P, Wiskott L: Slow feature analysis yields a rich repertoire of complex cell properties. J Vision 2005, 5:579-602. doi:10.1167/5.6.9.
  3. Franzius M, Wilbert N, Wiskott L: Invariant object recognition with slow feature analysis. Proc 18th Int'l Conf on Artificial Neural Networks 2008, 961-970.

Copyright information

© Hinze et al; licensee BioMed Central Ltd. 2009

This article is published under license to BioMed Central Ltd.

Authors and Affiliations

  • Christian Hinze (1)
  • Niko Wilbert (1, 2)
  • Laurenz Wiskott (1, 2)
  1. Institute for Theoretical Biology, Humboldt University, Berlin, Germany
  2. Bernstein Center for Computational Neuroscience, Humboldt University, Berlin, Germany
