Automated Camera Selection and Control for Better Training Support

  • Adrian Ilie
  • Greg Welch
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8027)

Abstract

Physical training ranges have been shown to be critical in helping trainees integrate previously perfected skills. There is a growing need to streamline the feedback participants receive after training. This need is being met by two related research efforts: approaches for automated camera selection and control, and computer vision-based approaches for automated extraction of relevant training feedback information.

We introduce a framework for augmenting the capabilities present in training ranges that aims to help in both domains. Its main component is ASCENT (Automated Selection and Control for ENhanced Training), an automated camera selection and control approach that assists operators and also helps provide better training feedback to trainees.

We have tested our camera control approach in simulated and laboratory settings, and are pursuing opportunities to deploy it at training ranges. In this paper we outline the elements of our framework and discuss its application for better training support.
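The paper's keywords point to a greedy, planning-horizon approach to controlling a camera network. As a rough illustration of one ingredient such systems commonly use, here is a minimal sketch of greedy camera-to-target assignment; the data structures, scoring function, and names below are assumptions for illustration only, not the ASCENT method itself:

```python
# Illustrative greedy camera-to-target assignment for a small camera network.
# The scoring function and data layout are assumptions, not taken from the paper.
import math

def view_score(camera, target):
    """Score a camera/target pair: targets within sensing range score
    higher the closer they are; out-of-range targets score zero."""
    dx = target[0] - camera["pos"][0]
    dy = target[1] - camera["pos"][1]
    dist = math.hypot(dx, dy)
    if dist > camera["range"]:
        return 0.0
    return 1.0 - dist / camera["range"]

def greedy_assign(cameras, targets):
    """Greedily pair each camera with its best still-unclaimed target,
    considering all camera/target pairs in order of decreasing score."""
    assignment = {}   # camera index -> target index
    claimed = set()   # target indices already assigned
    pairs = sorted(
        ((view_score(c, t), ci, ti)
         for ci, c in enumerate(cameras)
         for ti, t in enumerate(targets)),
        reverse=True)
    for score, ci, ti in pairs:
        if score <= 0.0 or ci in assignment or ti in claimed:
            continue
        assignment[ci] = ti
        claimed.add(ti)
    return assignment

cameras = [{"pos": (0.0, 0.0), "range": 10.0},
           {"pos": (20.0, 0.0), "range": 10.0}]
targets = [(2.0, 1.0), (18.0, -1.0)]
print(greedy_assign(cameras, targets))  # {0: 0, 1: 1}
```

A greedy pass like this is myopic: it optimizes one assignment step at a time. A planning-horizon variant would instead score sequences of assignments over several future time steps, trading optimality for tractability.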

Keywords

Planning Horizon, Object Tracking, Greedy Heuristic, Camera Network, Training Support

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Adrian Ilie, The University of North Carolina at Chapel Hill, USA
  • Greg Welch, The University of Central Florida, USA