
Event-Based Visual Data Sets for Prediction Tasks in Spiking Neural Networks

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2014 (ICANN 2014)

Abstract

For spiking networks to perform computational tasks, benchmark data sets are required for model design, refinement, and testing. Classic machine learning benchmarks treat classification as the dominant paradigm; however, the temporal characteristics of spiking neural networks make them better suited to problems involving sequence data. To support these paradigms, we provide data sets of 11 moving scenes, each with multiple variations, recorded from a dynamic vision sensor (DVS128), comprising high-dimensional (16k-pixel) and low-latency (15 μs) events. We also present a novel long-range prediction task based on the DVS128 data and introduce a pilot study of a spiking neural network learning to predict thousands of events into the future.
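Sensors such as the DVS128 emit asynchronous events rather than frames; each event is conventionally described by a microsecond timestamp, a pixel address, and a polarity. The sketch below illustrates this representation and how a window of events can be accumulated into a frame for inspection. It is a minimal illustration only, not the paper's pipeline; the `DVSEvent` class and `accumulate_frame` helper are hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class DVSEvent:
    timestamp_us: int  # microsecond-resolution timestamp
    x: int             # pixel column, 0..127 on a DVS128
    y: int             # pixel row, 0..127
    polarity: int      # +1 for a brightness increase, -1 for a decrease

def accumulate_frame(events, t_start_us, t_end_us, size=128):
    """Sum event polarities per pixel over a time window [t_start_us, t_end_us)."""
    frame = [[0] * size for _ in range(size)]
    for ev in events:
        if t_start_us <= ev.timestamp_us < t_end_us:
            frame[ev.y][ev.x] += ev.polarity
    return frame

# Three events at the same pixel; only the first two fall in the window.
events = [DVSEvent(10, 5, 7, 1), DVSEvent(20, 5, 7, 1), DVSEvent(30, 5, 7, -1)]
frame = accumulate_frame(events, 0, 25)
# frame[7][5] == 2: two ON events inside the window
```

Because events carry their own timestamps, a prediction task over such data amounts to forecasting future `(timestamp, x, y, polarity)` tuples rather than future frames.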





Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Gibson, T., et al. (2014). Event-Based Visual Data Sets for Prediction Tasks in Spiking Neural Networks. In: Wermter, S., et al. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2014. Lecture Notes in Computer Science, vol. 8681. Springer, Cham. https://doi.org/10.1007/978-3-319-11179-7_80


  • DOI: https://doi.org/10.1007/978-3-319-11179-7_80

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11178-0

  • Online ISBN: 978-3-319-11179-7

  • eBook Packages: Computer Science (R0)
