
Continuous-Time Intensity Estimation Using Event Cameras

  • Conference paper
Computer Vision – ACCV 2018 (ACCV 2018)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11365)

Included in the conference series: ACCV (Asian Conference on Computer Vision)

Abstract

Event cameras provide asynchronous, data-driven measurements of local temporal contrast over a large dynamic range with extremely high temporal resolution. Conventional cameras capture low-frequency reference intensity information. These two sensor modalities provide complementary information. We propose a computationally efficient, asynchronous filter that continuously fuses image frames and events into a single high-temporal-resolution, high-dynamic-range image state. In the absence of conventional image frames, the filter can be run on events only. We present experimental results on high-speed, high-dynamic-range sequences, as well as on new ground-truth datasets that we generate, demonstrating that the proposed algorithm outperforms existing state-of-the-art methods.

Code, Datasets and Video: https://cedric-scheerlinck.github.io/continuous-time-intensity-estimation.
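To make the fusion idea concrete, the following is a minimal per-pixel sketch of one way such a frame/event complementary filter can be structured in the log-intensity domain: between events each pixel's estimate decays exponentially toward the most recent frame, and each event adds a signed contrast step. The class name, the crossover gain alpha, and the contrast threshold c are illustrative assumptions, not the paper's exact formulation (the paper's own update equation, referenced as (13) in the notes below, should be consulted for the precise form).

```python
import numpy as np

class ComplementaryIntensityFilter:
    """Sketch of a per-pixel asynchronous complementary filter.

    Between events, each pixel's log-intensity estimate decays
    exponentially toward the latest frame value; each event adds a
    signed contrast step. Parameter values are illustrative only.
    """

    def __init__(self, height, width, alpha=2.0, contrast=0.1):
        self.alpha = alpha                        # crossover gain [1/s] (assumed value)
        self.c = contrast                         # per-event contrast step (assumed value)
        self.L_hat = np.zeros((height, width))    # fused log-intensity state
        self.L_frame = np.zeros((height, width))  # latest frame, log domain
        self.t_last = np.zeros((height, width))   # per-pixel time of last update

    def _decay_pixel(self, y, x, t):
        # Closed-form solution of d(L_hat)/dt = -alpha * (L_hat - L_frame),
        # integrated from this pixel's last update time to t.
        dt = t - self.t_last[y, x]
        self.L_hat[y, x] = (self.L_frame[y, x]
                            + np.exp(-self.alpha * dt)
                            * (self.L_hat[y, x] - self.L_frame[y, x]))
        self.t_last[y, x] = t

    def on_event(self, t, x, y, polarity):
        """Asynchronous update of a single pixel when an event arrives."""
        self._decay_pixel(y, x, t)
        self.L_hat[y, x] += self.c if polarity > 0 else -self.c

    def on_frame(self, t, log_frame):
        """Refresh the low-frequency reference when a new frame arrives."""
        # Decay every pixel to the frame timestamp, then swap the reference.
        dt = t - self.t_last
        self.L_hat = self.L_frame + np.exp(-self.alpha * dt) * (self.L_hat - self.L_frame)
        self.t_last[:] = t
        self.L_frame = np.asarray(log_frame, dtype=np.float64)

    def image_at(self, t):
        """Evaluate the full image state at any query time t (lazy update)."""
        dt = t - self.t_last
        return self.L_frame + np.exp(-self.alpha * dt) * (self.L_hat - self.L_frame)
```

If no frames are supplied, L_frame simply stays at its initial constant and the same decay regularises an events-only reconstruction, consistent with the events-only mode mentioned in the abstract.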

This research was supported by an Australian Government Research Training Program Scholarship, and the Australian Research Council through the “Australian Centre of Excellence for Robotic Vision” CE140100016.


Notes

1.

    Note that events are continuous-time signals even though they are not continuous functions of time; the time variable t on which they depend varies continuously.

2.

    The filter can also be updated (using (13)) at any user-chosen time instance (or rate). In our experiments we update the entire image state whenever we export the image for visualisation.
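As a hedged illustration of the user-chosen update schedule described in note 2, the driver loop below reuses the filter sketch given after the abstract: per-event updates remain asynchronous, while the full image state is brought forward only at a fixed export rate. Here event_stream is an assumed iterator of (t, x, y, polarity) tuples, and the 100 Hz export rate is arbitrary.

```python
# Hypothetical driver: per-event updates stay asynchronous, while the
# full image state is evaluated only at a user-chosen export rate.
filt = ComplementaryIntensityFilter(height=180, width=240)  # DAVIS240-sized array
export_period, next_export = 0.01, 0.0                      # 100 Hz export (assumed)

for t, x, y, polarity in event_stream():                    # assumed event source
    filt.on_event(t, x, y, polarity)
    if t >= next_export:
        image = filt.image_at(t)   # lazily decay every pixel forward to time t
        next_export += export_period
```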


Author information


Correspondence to Cedric Scheerlinck.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Scheerlinck, C., Barnes, N., Mahony, R. (2019). Continuous-Time Intensity Estimation Using Event Cameras. In: Jawahar, C., Li, H., Mori, G., Schindler, K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_20


  • DOI: https://doi.org/10.1007/978-3-030-20873-8_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20872-1

  • Online ISBN: 978-3-030-20873-8

  • eBook Packages: Computer Science, Computer Science (R0)
