Unsupervised Event-Based Optical Flow Using Motion Compensation

  • Alex Zihao Zhu
  • Liangzhe Yuan
  • Kenneth Chaney
  • Kostas Daniilidis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11134)

Abstract

In this work, we propose a novel framework for unsupervised learning with event cameras that learns to predict optical flow from only the event stream. In particular, we propose an input representation of the events in the form of a discretized 3D volume, which we pass through a neural network to predict the optical flow for each event. This predicted flow is then used to attempt to remove motion blur from the event image. We then propose a loss function, applied to the motion-compensated event image, that measures the remaining motion blur. We evaluate this network on the Multi Vehicle Stereo Event Camera (MVSEC) dataset, and present qualitative results from a variety of different scenes.
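
The page above does not reproduce any formulas, but a minimal sketch of the two ideas named in the abstract, the discretized event volume and a motion-compensation loss on warped events, might look as follows in Python/NumPy. The bin count, the bilinear split of each event across neighbouring temporal bins, and the use of image variance as the blur measure are assumptions of this sketch rather than details taken from the paper.

import numpy as np

def event_volume(xs, ys, ts, ps, H, W, B=9):
    """Accumulate events into a discretized (B, H, W) spatio-temporal volume.

    Each event's polarity is split between its two nearest temporal bins
    (linear interpolation in time). The bin count B and the use of signed
    polarity as the accumulated value are assumptions of this sketch.
    """
    vol = np.zeros((B, H, W), dtype=np.float32)
    # Normalise timestamps to the range [0, B - 1].
    t = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (B - 1)
    t0 = np.floor(t).astype(int)
    frac = t - t0
    for offset, weight in ((0, 1.0 - frac), (1, frac)):
        b = np.clip(t0 + offset, 0, B - 1)
        np.add.at(vol, (b, ys, xs), weight * ps)
    return vol

def motion_blur_loss(xs, ys, ts, flow, H, W):
    """Score how well a predicted flow field removes motion blur from events.

    Events are warped along the per-pixel flow (2, H, W) to the latest
    timestamp, accumulated into an image, and the negative variance of that
    image is returned (sharper compensation -> higher variance -> lower loss).
    Variance as the blur measure is an assumption, in the spirit of contrast
    maximization; the paper's exact loss may differ.
    """
    t_ref = ts.max()
    dt = t_ref - ts                               # time each event is warped over
    u = flow[0, ys, xs]                           # per-event horizontal flow
    v = flow[1, ys, xs]                           # per-event vertical flow
    x_w = np.clip(np.round(xs + u * dt), 0, W - 1).astype(int)
    y_w = np.clip(np.round(ys + v * dt), 0, H - 1).astype(int)
    img = np.zeros((H, W), dtype=np.float32)
    np.add.at(img, (y_w, x_w), 1.0)               # image of warped event counts
    return -np.var(img)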

Keywords

Event cameras · Unsupervised learning · Optical flow

Notes

Acknowledgements

Thanks to Tobi Delbruck and the team at iniLabs for providing and supporting the DAVIS-346b cameras. We also gratefully appreciate support through the following grants: NSF-IIS-1703319, NSF-IIP-1439681 (I/UCRC), ARL RCTA W911NF-10-2-0016, and the DARPA FLA program. This work was supported in part by the Semiconductor Research Corporation (SRC) and DARPA.

References

  1. Gallego, G., Rebecq, H., Scaramuzza, D.: A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1 (2018)
  2. Yu, J.J., Harley, A.W., Derpanis, K.G.: Back to basics: unsupervised learning of optical flow via brightness constancy and motion smoothness. In: Hua, G., Jégou, H. (eds.) ECCV 2016. LNCS, vol. 9915, pp. 3–10. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49409-8_1
  3. Lichtsteiner, P., Posch, C., Delbruck, T.: A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits 43(2), 566–576 (2008)
  4. Meister, S., Hur, J., Roth, S.: UnFlow: unsupervised learning of optical flow with a bidirectional census loss. In: AAAI, New Orleans, February 2018
  5. Mitrokhin, A., Fermuller, C., Parameshwara, C., Aloimonos, Y.: Event-based moving object detection and tracking. arXiv preprint arXiv:1803.04523 (2018)
  6. Zhu, A., Yuan, L., Chaney, K., Daniilidis, K.: EV-FlowNet: self-supervised optical flow estimation for event-based cameras. In: Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania, June 2018. https://doi.org/10.15607/RSS.2018.XIV.062
  7. Zhu, A.Z., Chen, Y., Daniilidis, K.: Realtime time synchronized event-based stereo. In: The European Conference on Computer Vision (ECCV), September 2018
  8. Zhu, A.Z., Thakur, D., Ozaslan, T., Pfrommer, B., Kumar, V., Daniilidis, K.: The multi vehicle stereo event camera dataset: an event camera dataset for 3D perception. IEEE Robot. Autom. Lett. 3(3), 2032–2039 (2018)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Pennsylvania, Philadelphia, USA
