Real-Time Camera Tracking: When is High Frame-Rate Best?

  • Ankur Handa
  • Richard A. Newcombe
  • Adrien Angeli
  • Andrew J. Davison
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7578)

Abstract

Higher frame-rates promise better tracking of rapid motion, but advanced real-time vision systems rarely exceed the standard 10–60 Hz range, on the grounds that the computation required would be too great. In fact, the cost of increasing frame-rate is mitigated by the reduced computational cost per frame in trackers which take advantage of prediction. Additionally, when we consider the physics of image formation, a high frame-rate implies that the upper bound on shutter time is reduced, leading to less motion blur but more noise. Putting these factors together, how are the application-dependent performance requirements of accuracy, robustness and computational cost optimised as frame-rate varies? Using 3D camera tracking as our test problem, and analysing a fundamental dense whole-image alignment approach, we open up a route to a systematic investigation via the careful synthesis of photorealistic video, using ray-tracing of a detailed 3D scene, experimentally obtained photometric response and noise models, and rapid camera motions. Our multi-frame-rate, multi-resolution, multi-light-level dataset is based on tens of thousands of hours of CPU rendering time. Our experiments lead to quantitative conclusions about frame-rate selection and highlight the crucial role of a full consideration of physical image formation in pushing tracking performance.
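The light-gathering trade-off described above (a higher frame-rate lowers the upper bound on shutter time, giving less motion blur but more noise) can be sketched with a simple Poisson shot-noise model. The photon rate and read-noise figures below are illustrative assumptions, not values from the paper:

```python
import math

def snr_at_framerate(fps, photons_per_sec=1e5, read_noise=5.0):
    """Per-frame SNR under an assumed shot-noise model: the frame period
    bounds the shutter time, so the collected signal scales as 1/fps
    while shot noise scales as its square root. photons_per_sec and
    read_noise are illustrative values, not from the paper."""
    exposure = 1.0 / fps                       # max shutter time (s)
    signal = photons_per_sec * exposure        # mean photoelectron count
    noise = math.sqrt(signal + read_noise**2)  # Poisson shot + read noise
    return signal / noise

for fps in (30, 120, 480):
    print(f"{fps:4d} Hz: SNR = {snr_at_framerate(fps):.1f}")
```

Under this model, quadrupling the frame-rate quarters the collected signal but only halves the shot noise, so per-frame SNR falls even as per-frame motion blur shrinks; this is the physical tension the paper's multi-frame-rate, multi-light-level dataset is designed to expose.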

Keywords

Pareto front · Dense tracking · Motion blur · Camera tracking · Real camera

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ankur Handa (1)
  • Richard A. Newcombe (1)
  • Adrien Angeli (1)
  • Andrew J. Davison (1)
  1. Department of Computing, Imperial College London, UK
