Estimating Visual Motion Using an Event-Based Artificial Retina

  • Luma Issa Abdul-Kreem
  • Heiko Neumann
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 598)


Biologically inspired computational models of visual processing often rely on conventional frame-based cameras for data acquisition. The Dynamic Vision Sensor (DVS), in contrast, emulates the main processing sequence of the mammalian retina and generates spike trains that encode temporal changes in the luminance distribution of a visual scene. Based on such a sparse input representation, we propose neural mechanisms for initial motion estimation and integration that are functionally related to the dorsal stream of the visual cortical hierarchy. We adapt the spatio-temporal filtering scheme originally suggested by Adelson and Bergen to make it consistent with the input representation generated by the DVS. In order to regulate the overall activation of single neurons against a pool of neighboring cells, we incorporate a competitive stage that operates over the spatial as well as the feature domain. The impact of this normalization stage is evaluated using information-theoretic measures, and the resulting optical flow estimates are analyzed against synthetic ground-truth data.
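The two stages summarized above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes the DVS event stream has already been binned into a (time, y, x) volume of event counts, and all function names and parameter values (filter size, spatial frequency, `tau`, `sigma`, pool size) are illustrative choices. Stage one is an Adelson-Bergen-style motion-energy computation from quadrature space-time filters; stage two divisively normalizes each cell's energy against a pool spanning neighboring spatial positions and all direction (feature) channels.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter


def gabor_quadrature(size, freq, theta):
    """Even/odd (quadrature) spatial Gabor pair oriented at theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * (size / 4.0) ** 2))
    return (env * np.cos(2 * np.pi * freq * xr),
            env * np.sin(2 * np.pi * freq * xr))


def motion_energy(volume, thetas, freq=0.15, size=9, tau=3.0):
    """Adelson-Bergen-style motion energy for an event-count volume (T, H, W)."""
    t = np.arange(volume.shape[0])
    # Two causal temporal low-pass kernels with different delays (fast/slow).
    fast = (t / tau) * np.exp(-t / tau)
    slow = (t / (2 * tau)) ** 2 * np.exp(-t / (2 * tau))
    # Temporal filtering collapses the time axis into two delayed views.
    v_fast = np.tensordot(fast, volume, axes=(0, 0))  # shape (H, W)
    v_slow = np.tensordot(slow, volume, axes=(0, 0))
    energies = []
    for th in thetas:
        even, odd = gabor_quadrature(size, freq, th)
        # Opponent combinations of space-time quadrature filters give
        # direction selectivity; squaring and summing the quadrature pair
        # yields a phase-invariant motion energy.
        a = convolve(v_fast, even) + convolve(v_slow, odd)
        b = convolve(v_fast, odd) - convolve(v_slow, even)
        energies.append(a ** 2 + b ** 2)
    return np.stack(energies)  # shape (n_directions, H, W)


def normalize(energy, sigma=0.01, pool_size=7):
    """Divisive normalization against a pool over space and direction."""
    # The pool sums all direction channels and averages over a local spatial
    # neighborhood, so each cell's response is regulated by its neighbors
    # (cf. the canonical normalization model of Carandini and Heeger).
    pool = uniform_filter(energy.sum(axis=0), size=pool_size)
    return energy / (sigma + pool)
```

With a sparse binary event volume, `motion_energy(vol, [0.0, np.pi / 2])` returns one non-negative energy map per direction, and `normalize` bounds the responses by the pooled activity, which is the property the information-theoretic evaluation in the paper probes.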


Event-based vision · Optic flow · Neuromorphic sensor · Neural model · Motion integration



L.I.A.-K. has been supported by grants from the Ministry of Higher Education and Scientific Research (MoHESR), Iraq, and from the German Academic Exchange Service (DAAD). H.N. acknowledges support from the DFG in the Collaborative Research Center SFB/TR (A Companion Technology for Cognitive Technical Systems). The authors would like to thank M. Schels for his help in recording biological motion.


  1. Litzenberger, M., Belbachir, A.N., Donath, N., Gritsch, G., Garn, H., Kohn, B., Posch, C., Schraml, S.: Estimation of vehicle speed based on asynchronous data from a silicon retina optical sensor. In: IEEE Intelligent Transportation Systems Conference, Toronto, Canada, pp. 17–20 (2006)
  2. Liu, S., Delbruck, T.: Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20, 288–295 (2010)
  3. Lichtsteiner, P., Posch, C., Delbruck, T.: A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits 43, 566–576 (2008)
  4. Delbruck, T., Lichtsteiner, P.: Fast sensory motor control based on event-based hybrid neuromorphic-procedural system. In: IEEE International Symposium on Circuits and Systems, pp. 845–848 (2007)
  5. Litzenberger, M., Posch, C., Bauer, D., Belbachir, A.N., Schon, P., Kohn, B., Garn, H.: Embedded vision system for real-time object tracking using an asynchronous transient vision sensor. In: 12th Signal Processing Education Workshop (IEEE DSPW), pp. 173–178 (2006)
  6. Ni, Z., Pacoret, C., Benosman, R., Ieng, S., Regnier, S.: Asynchronous event-based high speed vision for microparticle tracking. J. Microsc. 43, 1365–2818 (2011)
  7. Abdul-Kreem, L.I., Neumann, H.: Bio-inspired model for motion estimation using address event representation. In: 10th International Conference on Computer Vision Theory and Applications, VISIGRAPP, Berlin, Germany, 11–14 March 2015
  8. Adelson, E., Bergen, J.: Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A 2, 90–105 (1985)
  9. Brox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Pajdla, T., Matas, J.G. (eds.) ECCV 2004. LNCS, vol. 3024, pp. 25–36. Springer, Heidelberg (2004)
  10. Drulea, M., Nedevschi, S.: Motion estimation using the correlation transform. IEEE Trans. Image Process. 22, 1057–7149 (2013)
  11. Fleet, D., Jepson, A.: Computation of component image velocity from local phase information. Int. J. Comput. Vis. 5, 77–104 (1990)
  12. Horn, B., Schunck, B.: Determining optical flow. Artif. Intell. 17, 185–203 (1981)
  13. Benosman, R., Ieng, S., Clercq, C., Bartolozzi, C., Srinivasan, M.: Asynchronous frameless event-based optical flow. Neural Netw. 27, 32–37 (2012)
  14. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the Image Understanding Workshop, pp. 121–130 (1981)
  15. Tschechne, S., Brosch, T., Sailer, R., Egloffstein, N., Abdul-Kreem, L.I., Neumann, H.: On event-based motion detection and integration. In: Proceedings of the 8th International Conference on Bio-inspired Information and Communications Technologies (BICT), Boston, MA, USA, 1–3 December 2014. ACM Digital Library (2014)
  16. Tschechne, S., Sailer, R., Neumann, H.: Bio-inspired optic flow from event-based neuromorphic sensor input. In: El Gayar, N., Schwenker, F., Suen, C. (eds.) ANNPR 2014. LNCS, vol. 8774, pp. 171–182. Springer, Heidelberg (2014)
  17. De Valois, R., Cottaris, N.P., Mahon, L.E., Elfar, S.D., Wilson, J.A.: Spatial and temporal receptive fields of geniculate and cortical cells and directional selectivity. Vis. Res. 40, 3685–3702 (2000)
  18. Brosch, T., Tschechne, S., Neumann, H.: On event-based optical flow detection. Front. Neurosci. 9, Article No. 137, 1–15 (2015)
  19. Ringach, D.L.: Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. J. Neurophysiol. 88, 455–463 (2002)
  20. Carandini, M., Heeger, D.J.: Normalization as a canonical neural computation. Nat. Rev. Neurosci. 13, 51–62 (2012)
  21. Brosch, T., Neumann, H.: Computing with a canonical neural circuits model with pool normalization and modulating feedback. Neural Comput. 26, 2735–2789 (2014)
  22. Blomfield, S.: Arithmetical operations performed by nerve cells. Brain Res. 69, 115–124 (1974)
  23. Dayan, P., Abbott, L.F.: Theoretical Neuroscience. MIT Press, Cambridge (2001)
  24. Silver, R.A.: Neuronal arithmetic. Nat. Rev. Neurosci. 11, 474–489 (2010)
  25. Grossberg, S.: Nonlinear neural networks: principles, mechanisms, and architectures. Neural Netw. 1, 17–61 (1988)
  26. Bouecke, J., Tlapale, E., Kornprobst, P., Neumann, H.: Neural mechanisms of motion detection, integration, and segregation: from biology to artificial image processing systems. EURASIP J. Adv. Signal Process. 2011, Article ID 781561 (2010). doi:10.1155/2011/781561
  27. Lyu, S., Simoncelli, E.P.: Nonlinear extraction of independent components of natural images using radial Gaussianization. Neural Comput. 21, 1485–1519 (2009)
  28. Bayerl, P., Neumann, H.: Disambiguating visual motion through contextual feedback modulation. Neural Comput. 16, 2041–2066 (2004)
  29. Yo, C., Wilson, H.: Perceived direction of moving two-dimensional patterns depends on duration, contrast and eccentricity. Vis. Res. 32, 135–147 (1992)
  30. Adelson, E., Movshon, J.: Phenomenal coherence of moving visual patterns. Nature 300, 523–525 (1982)
  31. Simoncelli, E.: Bayesian multiscale differential optical flow. In: Handbook of Computer Vision and Applications, Chap. 14. Academic Press (1999)
  32. Caplovitz, G., Hsieh, P., Tse, P.: Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vis. Res. 46, 2877–2893 (2006)
  33. Hubel, D.H., Wiesel, T.N.: Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat. J. Neurophysiol. 28, 229–289 (1965)
  34. Pack, C.C., Livingstone, M.S., Duffy, K.R., Born, R.T.: End-stopping and the aperture problem: two-dimensional motion signals in macaque V1. Neuron 39, 671–680 (2003)
  35. Tsui, J.M.G., Hunter, N., Born, R.T., Pack, C.C.: The role of V1 surround suppression in MT motion integration. J. Neurophysiol. 24, 3123–3138 (2010)
  36. Studený, M., Vejnarová, J.: The multiinformation function as a tool for measuring stochastic dependence. In: Jordan, M.I. (ed.) Learning in Graphical Models. NATO ASI Series, vol. 89, pp. 261–297. Springer, Heidelberg (1998)
  37. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley, Hoboken (2006)
  38. Lyu, S., Simoncelli, E.P.: Nonlinear extraction of independent components of natural images using radial Gaussianization. Neural Comput. 21, 1485–1519 (2009)
  39. Strout, J.J., Pantle, A., Mills, S.L.: An energy model of interframe interval effects in single-step apparent motion. Vis. Res. 34, 3223–3240 (1994)
  40. Emerson, R.C., Bergen, J.R., Adelson, E.H.: Directionally selective complex cells and the computation of motion energy in cat visual cortex. Vis. Res. 32, 203–218 (1992)
  41. Challinor, K.L., Mather, G.: A motion-energy model predicts the direction discrimination and MAE duration of two-stroke apparent motion at high and low retinal illuminance. Vis. Res. 50, 1109–1116 (2010)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Institute for Neural Information Processing, Ulm University, Ulm, Germany
  2. Control and Systems Engineering Department, University of Technology, Baghdad, Iraq
