
A Possible Encoding of 3D Visual Space in Primates

  • Conference paper
  • In: Computational Intelligence (IJCCI 2016)

Part of the book series: Studies in Computational Intelligence (SCI, volume 792)

Abstract

Killian et al. were the first to report entorhinal neurons in primates that show grid-like firing patterns in response to eye movements. We recently demonstrated that these visual grid cells can be modeled with our RGNG-based grid cell model. Here we revisit our previous approach and develop a more comprehensive encoding of the presumed input signal that incorporates binocular movement information and fixation points originating from a three-dimensional environment. The resulting volumetric firing rate maps exhibit a peculiar structure of regularly spaced activity columns and provide the first model-based prediction of the expected activity patterns of visual grid cells in primates if their activity were correlated with fixation points in a three-dimensional environment.


Notes

  1. The full set of parameters used is given in the appendix (Table 1).

  2. The notation \(g{\centerdot }\alpha \) is used to reference the element \(\alpha \) within the tuple \(g\).

References

  1. Barry, C., Burgess, N.: Neural mechanisms of self-location. Curr. Biol. 24(8), R330–R339 (2014)

  2. Barry, C., Ginzberg, L.L., O'Keefe, J., Burgess, N.: Grid cell firing patterns signal environmental novelty by expansion. Proc. Natl. Acad. Sci. 109(43), 17687–17692 (2012)

  3. Barry, C., Hayman, R., Burgess, N., Jeffery, K.J.: Experience-dependent rescaling of entorhinal grids. Nat. Neurosci. 10(6), 682–684 (2007)

  4. Boccara, C.N., Sargolini, F., Thoresen, V.H., Solstad, T., Witter, M.P., Moser, E.I., Moser, M.B.: Grid cells in pre- and parasubiculum. Nat. Neurosci. 13(8), 987–994 (2010)

  5. Burak, Y.: Spatial coding and attractor dynamics of grid cells in the entorhinal cortex. Curr. Opin. Neurobiol. 25, 169–175 (2014)

  6. Domnisoru, C., Kinkhabwala, A.A., Tank, D.W.: Membrane potential dynamics of grid cells. Nature 495(7440), 199–204 (2013)

  7. Fritzke, B.: A growing neural gas network learns topologies. In: Advances in Neural Information Processing Systems, vol. 7, pp. 625–632. MIT Press (1995)

  8. Fyhn, M., Molden, S., Witter, M.P., Moser, E.I., Moser, M.B.: Spatial representation in the entorhinal cortex. Science 305(5688), 1258–1264 (2004)

  9. Giocomo, L., Moser, M.B., Moser, E.: Computational models of grid cells. Neuron 71(4), 589–603 (2011)

  10. Hafting, T., Fyhn, M., Molden, S., Moser, M.B., Moser, E.I.: Microstructure of a spatial map in the entorhinal cortex. Nature 436(7052), 801–806 (2005)

  11. Jacobs, J., Weidemann, C.T., Miller, J.F., Solway, A., Burke, J.F., Wei, X.X., Suthana, N., Sperling, M.R., Sharan, A.D., Fried, I., Kahana, M.J.: Direct recordings of grid-like neuronal activity in human spatial navigation. Nat. Neurosci. 16(9), 1188–1190 (2013)

  12. Kerdels, J.: A computational model of grid cells based on a recursive growing neural gas. Ph.D. thesis, FernUniversität in Hagen, Hagen (2016)

  13. Kerdels, J., Peters, G.: Analysis of high-dimensional data using local input space histograms. Neurocomputing 169, 272–280 (2015)

  14. Kerdels, J., Peters, G.: A new view on grid cells beyond the cognitive map hypothesis. In: 8th Conference on Artificial General Intelligence (AGI 2015) (2015)

  15. Kerdels, J., Peters, G.: Modelling the grid-like encoding of visual space in primates. In: Proceedings of the 8th International Joint Conference on Computational Intelligence, IJCCI 2016, Volume 3: NCTA, Porto, Portugal, 9–11 November 2016, pp. 42–49 (2016)

  16. Killian, N.J., Jutras, M.J., Buffalo, E.A.: A map of visual space in the primate entorhinal cortex. Nature 491(7426), 761–764 (2012)

  17. Krupic, J., Bauza, M., Burton, S., Barry, C., O'Keefe, J.: Grid cell symmetry is shaped by environmental geometry. Nature 518(7538), 232–235 (2015). https://doi.org/10.1038/nature14153

  18. Lundgaard, I., Li, B., Xie, L., Kang, H., Sanggaard, S., Haswell, J.D.R., Sun, W., Goldman, S., Blekot, S., Nielsen, M., Takano, T., Deane, R., Nedergaard, M.: Direct neuronal glucose uptake heralds activity-dependent increases in cerebral metabolism. Nat. Commun. 6, 6807 (2015)

  19. Moser, E.I., Moser, M.B.: A metric for space. Hippocampus 18(12), 1142–1156 (2008)

  20. Moser, E.I., Moser, M.B., Roudi, Y.: Network mechanisms of grid cells. Philos. Trans. R. Soc. B Biol. Sci. 369(1635) (2014)

  21. Sargolini, F., Fyhn, M., Hafting, T., McNaughton, B.L., Witter, M.P., Moser, M.B., Moser, E.I.: Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science 312(5774), 758–762 (2006)

  22. Stensola, H., Stensola, T., Solstad, T., Frøland, K., Moser, M.B., Moser, E.I.: The entorhinal grid map is discretized. Nature 492(7427), 72–78 (2012)

  23. Stensola, T., Stensola, H., Moser, M.B., Moser, E.I.: Shearing-induced asymmetry in entorhinal grid cells. Nature 518(7538), 207–212 (2015)

  24. Welinder, P.E., Burak, Y., Fiete, I.R.: Grid cells: the position code, neural network models of activity, and the problem of learning. Hippocampus 18(12), 1283–1300 (2008)

  25. Yartsev, M.M., Witter, M.P., Ulanovsky, N.: Grid cells without theta oscillations in the entorhinal cortex of bats. Nature 479(7371), 103–107 (2011)


Author information

Correspondence to Jochen Kerdels.

Appendix

1.1 Recursive Growing Neural Gas

The recursive growing neural gas (RGNG) has essentially the same structure as the regular growing neural gas (GNG) proposed by Fritzke [7]. Like a GNG, an RGNG \(g\) can be described by a tuple (see Note 2):

$$ g := \left( U,C,\theta \right) \in G, $$

with a set \(U\) of units, a set \(C\) of edges, and a set \(\theta \) of parameters. Each unit \(u\) is described by a tuple:

$$ u := \left( w,e\right) \in U,\quad w \in W := \mathbb {R}^n \cup G,\;\; e \in \mathbb {R}, $$

with the prototype \(w\) and the accumulated error \(e\). Note that in contrast to the regular GNG, the prototype \(w\) of an RGNG unit can be either an \(n\)-dimensional vector or another RGNG. Each edge \(c\) is described by a tuple:

$$ c := \left( V,t\right) \in C,\quad V \subseteq U \,\wedge \, |V| = 2,\;\; t \in \mathbb {N}, $$

with the units \(v \in V\) connected by the edge and the age \(t\) of the edge. The direct neighborhood \(E_u\) of a unit \(u \in U\) is defined as:

$$ E_u := \left\{ k | \exists \left( V,t\right) \in C,\;\; V = \left\{ u,k\right\} ,\; \; t \in \mathbb {N}\right\} . $$

The set \(\theta \) of parameters consists of:

$$ \theta := \left\{ \epsilon _b, \epsilon _n, \epsilon _r, \lambda , \tau , \alpha , \beta , M\right\} . $$

Compared to the regular GNG, the set of parameters has grown by \(\theta {\centerdot }\epsilon _r\) and \(\theta {\centerdot }M\). The former is a third learning rate used in the adaptation function \(A\) (see below). The latter is the maximum number of units in an RGNG. This number refers only to the “direct” units of a particular RGNG and does not include units present in RGNGs that serve as prototypes of these direct units.
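For concreteness, the following is a minimal Python sketch of these tuples. This is not the authors' implementation; all names are illustrative, and the default parameter values are placeholders (the actual values are those given in Tables 1 and 2):

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class Params:
    # illustrative placeholder values; see Tables 1 and 2 for actual ones
    eps_b: float = 0.05    # learning rate of the best matching unit
    eps_n: float = 0.01    # learning rate of its direct neighbors
    eps_r: float = 0.005   # third learning rate, used by the adaptation function A
    lam: int = 1000        # a new unit is inserted every lam inputs
    tau: int = 300         # maximum edge age
    alpha: float = 0.5     # error decrease on unit insertion
    beta: float = 0.0005   # global error decay
    M: int = 20            # maximum number of direct units

@dataclass(eq=False)
class Unit:
    w: Union[List[float], "RGNG"]   # prototype: vector or nested RGNG
    e: float = 0.0                  # accumulated error

@dataclass(eq=False)
class Edge:
    V: Tuple[Unit, Unit]            # the two connected units
    t: int = 0                      # age of the edge

@dataclass(eq=False)
class RGNG:
    U: List[Unit] = field(default_factory=list)
    C: List[Edge] = field(default_factory=list)
    theta: Params = field(default_factory=Params)

    def neighbors(self, u: Unit) -> List[Unit]:
        # direct neighborhood E_u: all units that share an edge with u
        return [k for c in self.C if u in c.V for k in c.V if k is not u]
```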

Like its structure, the behavior of the RGNG is basically identical to that of a regular GNG. However, since the prototypes of the units can be either vectors or RGNGs themselves, the behavior is now defined by four functions. The distance function

$$ D\!\left( x,y\right) : W \times W \rightarrow \mathbb {R} $$

determines the distance either between two vectors, two RGNGs, or a vector and an RGNG. The interpolation function

$$ I\!\left( x,y\right) : \left( \mathbb {R}^n \times \mathbb {R}^n\right) \cup \left( G \times G \right) \rightarrow W $$

generates a new vector or new RGNG by interpolating between two vectors or two RGNGs, respectively. The adaptation function

$$ A\!\left( x,\xi ,r\right) : W \times \mathbb {R}^n \times \mathbb {R} \rightarrow W $$

adapts either a vector or RGNG towards the input vector \(\xi \) by a given fraction \(r\). Finally, the input function

$$ F\!\left( g,\xi \right) : G \times \mathbb {R}^n \rightarrow G \times \mathbb {R} $$

feeds an input vector \(\xi \) into the RGNG \(g\) and returns the modified RGNG as well as the distance between \(\xi \) and the best matching unit (BMU, see below) of \(g\). The input function \(F\) contains the core of the RGNG’s behavior and utilizes the other three functions, but is also used, in turn, by those functions, which introduces several recursive paths into the program flow.

\(\varvec{F\!\left( g,\xi \right) }\): The input function \(F\) is a generalized version of the original GNG algorithm that facilitates the use of prototypes other than vectors. In particular, it allows RGNGs themselves to be used as prototypes, resulting in a recursive structure. An input \(\xi \in \mathbb {R}^n\) to the RGNG \(g\) is processed by the input function \(F\) as follows:

  • Find the two units \(s_1\) and \(s_2\) with the smallest distance to the input \(\xi \) according to the distance function \(D\):

    $$ \begin{array}{lll} s_1 &{}:=&{} \text {arg min}_{u \in g{\centerdot }U} D\!\left( u{\centerdot }w,\,\xi \right) , \\ s_2 &{}:=&{} \text {arg min}_{u \in g{\centerdot }U \setminus \left\{ s_1\right\} } D\!\left( u{\centerdot }w,\,\xi \right) . \end{array} $$
  • Increment the age of all edges connected to \(s_1\):

    $$ \varDelta c{\centerdot }t = 1, \quad c \in g{\centerdot }C \,\wedge \, s_1 \in c{\centerdot }V\;. $$
  • If no edge between \(s_1\) and \(s_2\) exists, create one:

    $$ g{\centerdot }C \,\Leftarrow \, g{\centerdot }C \,\cup \, \left\{ \left( \left\{ s_1,s_2\right\} ,0 \right) \right\} . $$
  • Reset the age of the edge between \(s_1\) and \(s_2\) to zero:

    $$ c{\centerdot }t \,\Leftarrow \, 0, \quad c \in g{\centerdot }C \,\wedge \, s_1,s_2 \in c{\centerdot }V\;. $$
  • Add the squared distance between \(\xi \) and the prototype of \(s_1\) to the accumulated error of \(s_1\):

    $$ \varDelta s_1{\centerdot }e = D\!\left( s_1{\centerdot }w,\,\xi \right) ^2. $$
  • Adapt the prototype of \(s_1\) and all prototypes of its direct neighbors:

    $$ \begin{array}{lll} s_1{\centerdot }w &{}\Leftarrow &{} A\!\left( s_1{\centerdot }w,\,\xi ,\,g{\centerdot }\theta {\centerdot }\epsilon _b \right) , \\ s_n{\centerdot }w &{}\Leftarrow &{} A\!\left( s_n{\centerdot }w,\,\xi ,\,g{\centerdot }\theta {\centerdot }\epsilon _n\right) ,\, \forall s_n \in E_{s_1}. \end{array} $$
  • Remove all edges with an age above a given threshold \(\tau \) and remove all units that no longer have any edges connected to them:

    $$ \begin{array}{lll} g{\centerdot }C &{}\Leftarrow &{} g{\centerdot }C \,\setminus \, \left\{ c| c \in g{\centerdot }C \,\wedge \, c{\centerdot }t > g{\centerdot }\theta {\centerdot }\tau \right\} ,\\ g{\centerdot }U &{}\Leftarrow &{} g{\centerdot }U \,\setminus \, \left\{ u| u \in g{\centerdot }U \,\wedge \, E_u = \emptyset \right\} . \end{array} $$
  • If an integer multiple of \(g{\centerdot }\theta {\centerdot }\lambda \) inputs has been presented to the RGNG \(g\) and \(|g{\centerdot }U| < g{\centerdot }\theta {\centerdot }M\), add a new unit \(u\). The new unit is inserted “between” the unit \(j\) with the largest accumulated error and the unit \(k\) with the largest accumulated error among the direct neighbors of \(j\). Thus, the prototype \(u{\centerdot }w\) of the new unit is initialized as:

    $$ \begin{array}{ll} u{\centerdot }w := I\!\left( j{\centerdot }w,k{\centerdot }w\right) ,&{} j = \text {arg max}_{l \in g{\centerdot }U}\left( l{\centerdot }e\right) ,\\ &{}k = \text {arg max}_{l \in E_j}\left( l{\centerdot }e\right) . \end{array} $$

    The existing edge between units \(j\) and \(k\) is removed and edges between units \(j\) and \(u\) as well as units \(u\) and \(k\) are added:

    $$ \begin{array}{lll} g{\centerdot }C &{}\Leftarrow &{} g{\centerdot }C \,\setminus \, \left\{ c|c \in g{\centerdot }C \,\wedge \, j,k \in c{\centerdot }V\right\} ,\\ g{\centerdot }C &{}\Leftarrow &{} g{\centerdot }C \,\cup \, \left\{ \left( \left\{ j,u\right\} ,0\right) , \left( \left\{ u,k\right\} ,0\right) \right\} . \end{array} $$

    The accumulated errors of units \(j\) and \(k\) are decreased and the accumulated error \(u{\centerdot }e\) of the new unit is set to the decreased accumulated error of unit \(j\):

    $$ \begin{array}{rll} \varDelta j{\centerdot }e &{}=&{} -g{\centerdot }\theta {\centerdot }\alpha \cdot j{\centerdot }e,\quad \varDelta k{\centerdot }e = -g{\centerdot }\theta {\centerdot }\alpha \cdot k{\centerdot }e, \\ u{\centerdot }e &{}:=&{} j{\centerdot }e\;. \end{array} $$
  • Finally, decrease the accumulated error of all units:

    $$ \varDelta u{\centerdot }e = -g{\centerdot }\theta {\centerdot }\beta \cdot u{\centerdot }e, \quad \forall u \in g{\centerdot }U\;. $$

The function \(F\) returns the tuple \(\left( g,d_\text {min}\right) \) containing the now updated RGNG \(g\) and the distance \(d_\text {min} := D\! \left( s_1{\centerdot }w,\,\xi \right) \) between the prototype of unit \(s_1\) and input \(\xi \). Note that in contrast to the regular GNG there is no longer a stopping criterion, i.e., the RGNG operates explicitly in an online fashion by continuously integrating new inputs. To prevent unbounded growth of the RGNG, the maximum number of units \(\theta {\centerdot }M\) was added to the set of parameters.
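As a compact illustration, here is a hedged Python sketch of \(F\) for vector inputs, continuing the structures sketched earlier. It assumes \(g\) already contains at least two units, relies on the functions \(D\), \(I\), and \(A\) sketched further below, and uses an illustrative input counter n_seen that is not part of the formal tuple:

```python
def F(g: RGNG, xi: List[float]) -> Tuple[RGNG, float]:
    """Sketch of the input function F for vector inputs xi."""
    g.n_seen = getattr(g, "n_seen", 0) + 1
    # find the best and second best matching units s1 and s2
    s1, s2 = sorted(g.U, key=lambda u: D(u.w, xi))[:2]
    # increment the age of all edges connected to s1
    for c in g.C:
        if s1 in c.V:
            c.t += 1
    # create the edge between s1 and s2 if it is missing, then reset its age
    edge = next((c for c in g.C if s1 in c.V and s2 in c.V), None)
    if edge is None:
        edge = Edge(V=(s1, s2))
        g.C.append(edge)
    edge.t = 0
    # add the squared distance to the accumulated error of s1
    s1.e += D(s1.w, xi) ** 2
    # adapt s1 and its direct neighbors towards xi
    s1.w = A(s1.w, xi, g.theta.eps_b)
    for sn in g.neighbors(s1):
        sn.w = A(sn.w, xi, g.theta.eps_n)
    # remove edges older than tau and units left without edges
    g.C = [c for c in g.C if c.t <= g.theta.tau]
    g.U = [u for u in g.U if g.neighbors(u)]
    # every lam inputs insert a new unit u_new between the unit j with the
    # largest accumulated error and its worst direct neighbor k
    if g.n_seen % g.theta.lam == 0 and len(g.U) < g.theta.M:
        j = max(g.U, key=lambda u: u.e)
        k = max(g.neighbors(j), key=lambda u: u.e)
        u_new = Unit(w=I(j.w, k.w))
        g.U.append(u_new)
        g.C = [c for c in g.C if not (j in c.V and k in c.V)]
        g.C += [Edge(V=(j, u_new)), Edge(V=(u_new, k))]
        j.e -= g.theta.alpha * j.e
        k.e -= g.theta.alpha * k.e
        u_new.e = j.e
    # finally, decay the accumulated error of all units
    for u in g.U:
        u.e -= g.theta.beta * u.e
    return g, D(s1.w, xi)
```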

\(\varvec{D\!\left( x,y\right) }\): The distance function \(D\) determines the distance between two prototypes \(x\) and \(y\). The calculation of the actual distance depends on whether \(x\) and \(y\) are both vectors, a combination of vector and RGNG, or both RGNGs:

$$ D\!\left( x,y\right) := \left\{ \begin{array}{ll} D_{RR}\!\left( x,y\right) &{} \text {if}\;\; x,y \in \mathbb {R}^n,\\ D_{GR}\!\left( x,y\right) &{} \text {if}\;\; x \in G \,\wedge \, y \in \mathbb {R}^n ,\\ D_{RG}\!\left( x,y\right) &{} \text {if}\;\; x \in \mathbb {R}^n \,\wedge \, y \in G ,\\ D_{GG}\!\left( x,y\right) &{} \text {if}\;\; x,y \in G. \end{array} \right. $$

In case the arguments of \(D\) are both vectors, the Minkowski distance is used:

$$ \begin{array}{ll} D_{RR}\!\left( x,y\right) := \left( \sum _{i=1}^{n} \left| x_i - y_i\right| ^p \right) ^{\frac{1}{p}}, &{}x = \left( x_1,\ldots ,x_n\right) , \\ &{} y = \left( y_1,\ldots ,y_n\right) , \\ &{} p \in \mathbb {N}. \end{array} $$

Using the Minkowski distance instead of the Euclidean distance allows the distance measure to be adjusted to certain types of inputs via the parameter \(p\). For example, setting \(p\) to higher values emphasizes large changes in individual dimensions of the input vector over changes that are distributed across many dimensions [13]. However, in the case of modeling the behavior of grid cells the parameter is set to a fixed value of \(2\), which makes the Minkowski distance equivalent to the Euclidean distance. The latter is required in this context as only the Euclidean distance allows the GNG to form an induced Delaunay triangulation of its input space.
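A small numeric illustration of this effect (the values are illustrative only): a change of magnitude 4 concentrated in one dimension keeps its distance for any \(p\), while the same amount distributed over four dimensions shrinks as \(p\) grows.

```python
def minkowski(x, y, p=2):
    # D_RR: Minkowski distance of order p between two vectors
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

print(minkowski([0, 0, 0, 0], [4, 0, 0, 0], p=2))  # 4.0   concentrated change
print(minkowski([0, 0, 0, 0], [1, 1, 1, 1], p=2))  # 2.0   distributed change
print(minkowski([0, 0, 0, 0], [4, 0, 0, 0], p=8))  # 4.0   concentrated change
print(minkowski([0, 0, 0, 0], [1, 1, 1, 1], p=8))  # ~1.19 distributed change
```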

In case the arguments of \(D\) are a combination of vector and RGNG, the vector is fed into the RGNG using function \(F\) and the returned minimum distance is taken as distance value:

$$ \begin{array}{lll} D_{GR}\!\left( x,y\right) &{}:=&{} F\!\left( x,y\right) \!{\centerdot }d_\text {min},\\ D_{RG}\!\left( x,y\right) &{}:=&{} D_{GR}\!\left( y,x\right) . \end{array} $$

In case the arguments of \(D\) are both RGNGs, the distance is defined to be the pairwise minimum distance between the prototypes of the RGNGs’ units, i.e., single linkage distance between the sets of units is used:

$$ D_{GG}\!\left( x,y\right) := \min _{u \in x{\centerdot }U,\, k \in y{\centerdot }U}\; D\!\left( u{\centerdot }w,\,k{\centerdot }w\right) . $$

The latter case is used by the interpolation function if the recursive depth of an RGNG is at least \(2\). As the RGNG-based grid cell model has a recursive depth of only \(1\) (see next section), this case is included for completeness rather than necessity. Alternative measures to consider would be, e.g., average or complete linkage.
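These four cases translate directly into a dispatch on the argument types. A sketch consistent with the structures above, using the minkowski helper just shown:

```python
def D(x, y):
    """Distance dispatch over the cases D_RR, D_GR, D_RG, and D_GG.
    Like the formal definition, D_GR feeds y into the RGNG x via F and
    therefore modifies x as a side effect."""
    x_is_rgng, y_is_rgng = isinstance(x, RGNG), isinstance(y, RGNG)
    if not x_is_rgng and not y_is_rgng:
        return minkowski(x, y, p=2)          # D_RR, Euclidean for p = 2
    if x_is_rgng and not y_is_rgng:
        return F(x, y)[1]                    # D_GR: d_min of the BMU of x
    if not x_is_rgng and y_is_rgng:
        return D(y, x)                       # D_RG, by symmetry
    # D_GG: single linkage between the prototypes of the two unit sets
    return min(D(u.w, k.w) for u in x.U for k in y.U)
```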

\(\varvec{I\!\left( x,y\right) }\): The interpolation function \(I\) returns a new prototype as a result from interpolating between the prototypes \(x\) and \(y\). The type of interpolation depends on whether the arguments are both vectors or both RGNGs:

$$ I\!\left( x,y\right) := \left\{ \begin{array}{ll} I_{RR}\!\left( x,y\right) &{} \text {if}\;\; x,y \in \mathbb {R}^n,\\ I_{GG}\!\left( x,y\right) &{} \text {if}\;\; x,y \in G. \end{array} \right. $$

In case the arguments of \(I\) are both vectors, the resulting prototype is the arithmetic mean of the arguments:

$$ I_{RR}\!\left( x,y\right) := \frac{x+y}{2}. $$

In case the arguments of \(I\) are both RGNGs, the resulting prototype is a new RGNG \(a\). Assuming w.l.o.g. that \(|x{\centerdot }U| \ge |y{\centerdot }U|\) the components of the interpolated RGNG \(a\) are defined as follows:

$$ \begin{array}{rl} a &{}:= I\!\left( x,y\right) ,\\ a{\centerdot }U &{}:= \left\{ \left( w,0\right) \left| \begin{array}{l} w=I\!\left( u{\centerdot }w,k{\centerdot }w\right) ,\\ \quad \;\forall u \in x{\centerdot }U,\\ \quad \;\, k = \displaystyle {\mathop {\text {arg min}}_{l \in y{\centerdot }U}} D\!\left( u{\centerdot }w, \,l{\centerdot }w\right) \end{array}\right. \right\} , \end{array} $$
$$ \begin{array}{rl} a{\centerdot }C &{}:= \left\{ \left( \left\{ l,m\right\} ,0\right) \left| \begin{array}{l} \exists c \in x{\centerdot }C \\ \;\wedge \quad u,k \in c{\centerdot }V\\ \;\wedge \quad l{\centerdot }w = I\!\left( u{\centerdot }w,\cdot \right) \\ \;\wedge \quad m{\centerdot }w = I\!\left( k{\centerdot }w,\cdot \right) \end{array} \right. \right\} ,\\ a{\centerdot }\theta &{}:= x{\centerdot }\theta \;. \end{array} $$

The resulting RGNG \(a\) has the same number of units as RGNG \(x\). Each unit of \(a\) has a prototype that was interpolated between the prototype of the corresponding unit in \(x\) and the nearest prototype found in the units of \(y\). The edges and parameters of \(a\) correspond to the edges and parameters of \(x\).
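A hedged sketch of \(I\) under the same assumptions; `pairing` is an illustrative helper that records which new unit corresponds to which unit of \(x\), so that the edges of \(x\) can be mirrored:

```python
def I(x, y):
    """Interpolation sketch: arithmetic mean for vectors; for RGNGs, a new
    RGNG pairing each unit of x (w.l.o.g. the larger one) with the nearest
    unit of y and mirroring the edges of x."""
    if not isinstance(x, RGNG):
        return [(a + b) / 2 for a, b in zip(x, y)]
    if len(x.U) < len(y.U):                  # establish |x.U| >= |y.U|
        x, y = y, x
    pairing = {}                             # unit of x -> new unit of a
    for u in x.U:
        k = min(y.U, key=lambda l: D(u.w, l.w))
        pairing[u] = Unit(w=I(u.w, k.w), e=0.0)
    a = RGNG(U=list(pairing.values()), theta=x.theta)
    for c in x.C:                            # mirror the edges of x
        u, k = c.V
        a.C.append(Edge(V=(pairing[u], pairing[k]), t=0))
    return a
```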

\(\varvec{A\!\left( x,\xi ,r\right) }\): The adaptation function \(A\) adapts a prototype \(x\) towards a vector \(\xi \) by a given fraction \(r\). The type of adaptation depends on whether the given prototype is a vector or an RGNG:

$$ A\!\left( x,\xi ,r\right) := \left\{ \begin{array}{ll} A_{R}\!\left( x,\xi ,r\right) &{} \text {if}\;\; x \in \mathbb {R}^n,\\ A_{G}\!\left( x,\xi ,r\right) &{} \text {if}\;\; x \in G. \end{array} \right. $$

In case prototype \(x\) is a vector, the adaptation is performed as linear interpolation:

$$ A_{R}\!\left( x,\xi ,r\right) := \left( 1 - r\right) x + r\,\xi . $$

In case prototype \(x\) is an RGNG, the adaptation is performed by feeding \(\xi \) into the RGNG. Importantly, the parameters \(\epsilon _b\) and \(\epsilon _n\) of the RGNG are temporarily changed to take the fraction \(r\) into account:

$$ \begin{array}{rl} \theta ^* &{}:= \left( \;r,\;\;r \cdot x{\centerdot }\theta {\centerdot }\epsilon _r,\;\; x{\centerdot }\theta {\centerdot }\epsilon _r,\;\; x{\centerdot }\theta {\centerdot }\lambda ,\;\; x{\centerdot }\theta {\centerdot }\tau ,\right. \\ &{}\left. \quad \quad x{\centerdot }\theta {\centerdot }\alpha ,\;\; x{\centerdot }\theta {\centerdot }\beta ,\;\; x{\centerdot }\theta {\centerdot }M\right) ,\\ x^* &{}:= \left( x{\centerdot }U,\,x{\centerdot }C,\,\theta ^*\right) ,\\ A_{G}\!\left( x,\xi ,r\right) &{}:= F\!\left( x^*,\xi \right) \!{\centerdot }x\;. \end{array} $$

Note that in this case the new parameter \(\theta {\centerdot }\epsilon _r\) is used to derive a temporary \(\epsilon _n\) from the fraction \(r\).
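A sketch of \(A\) along these lines; the parameter substitution mirrors \(\theta^*\) above, and the original parameter set is restored afterwards:

```python
import copy

def A(x, xi, r):
    """Adaptation sketch: a vector is moved linearly towards xi; an RGNG
    is adapted by feeding xi into it while its learning rates are
    temporarily replaced by eps_b = r and eps_n = r * eps_r."""
    if not isinstance(x, RGNG):
        return [(1 - r) * a + r * b for a, b in zip(x, xi)]
    theta_star = copy.copy(x.theta)          # theta* from the definition
    theta_star.eps_b = r
    theta_star.eps_n = r * x.theta.eps_r
    saved, x.theta = x.theta, theta_star
    F(x, xi)                                 # A_G := F(x*, xi) . x
    x.theta = saved                          # restore the original parameters
    return x
```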

This concludes the formal definition of the RGNG algorithm.

1.2 Activity Approximation

The RGNG-based model describes a group of neurons. For each modeled neuron \(u\) we would like to derive its “activity” for any given input as a scalar that represents the neuron’s momentary firing rate. Yet, the RGNG algorithm itself does not provide a direct measure that could be used to this end. Therefore, we derive the activity \(a_u\) of a modeled neuron \(u\) from the neuron’s best and second best matching bottom layer (BL) units \(s_1\) and \(s_2\) with respect to a given input \(\xi \) as:

$$ a_u := e^{-\frac{\left( 1-r\right) ^2}{2\sigma ^2}}, $$

with \(\sigma = 0.2\) and ratio \(r\):

$$ r := \frac{D\!\left( s_2{\centerdot }w,\,\xi \right) - D\!\left( s_1{\centerdot }w,\,\xi \right) }{D\!\left( s_1{\centerdot }w,\,s_2{\centerdot }w\right) },\quad s_1,s_2 \in u{\centerdot }w{\centerdot }U, $$

using the distance function \(D\). This measure of activity allows the response of a neuron to a given input to be correlated with further variables.
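A sketch of this computation, assuming, as in the two-layer model (see Sect. 1.3), that the prototype u.w of a top layer unit is the bottom layer RGNG whose units provide \(s_1\) and \(s_2\):

```python
import math

def activity(u: Unit, xi, sigma=0.2):
    """Activity approximation: r measures how much closer xi lies to the
    best matching unit s1 than to the second best unit s2, normalized by
    the distance between their prototypes."""
    s1, s2 = sorted(u.w.U, key=lambda k: D(k.w, xi))[:2]
    r = (D(s2.w, xi) - D(s1.w, xi)) / D(s1.w, s2.w)
    return math.exp(-((1 - r) ** 2) / (2 * sigma ** 2))
```

For an input that coincides with the prototype of \(s_1\), the ratio \(r\) equals \(1\) and the activity peaks at \(1\); for an input equidistant from both prototypes, \(r = 0\), which for \(\sigma = 0.2\) yields an activity that is practically zero.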

1.3 Parameterization

Each layer of an RGNG requires its own set of parameters. In the case of our two-layered grid cell model we use the parameter sets \(\theta _1\) and \(\theta _2\), respectively. Parameter set \(\theta _1\) controls the main top layer RGNG, while parameter set \(\theta _2\) controls all bottom layer RGNGs. Table 1 summarizes the parameter values used for the simulation runs presented in our previous work [15], while Table 2 contains the parameters of the simulation runs presented in this paper. For a detailed characterization of these parameters we refer to Kerdels [12].

Table 1 Parameters of the RGNG-based model used for the simulation runs in our previous work [15]. Parameters \(\theta _1\) control the top layer RGNG while parameters \(\theta _2\) control all bottom layer RGNGs of the model
Table 2 Parameters of the RGNG-based model used for the simulation runs presented in this paper (Sect. 4)


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Kerdels, J., Peters, G. (2019). A Possible Encoding of 3D Visual Space in Primates. In: Merelo, J.J., et al. Computational Intelligence. IJCCI 2016. Studies in Computational Intelligence, vol 792. Springer, Cham. https://doi.org/10.1007/978-3-319-99283-9_14
