Automated Camera Selection and Control for Better Training Support
Physical training ranges have been shown to be critical in helping trainees integrate previously perfected skills. There is a growing need to streamline the feedback participants receive after training. Two related research efforts address this need: approaches for automated camera selection and control, and computer vision-based approaches for automatically extracting relevant training feedback information.
We introduce a framework for augmenting the capabilities of training ranges that aims to help in both domains. Its main component is ASCENT (Automated Selection and Control for ENhanced Training), an automated camera selection and control approach that assists operators and helps provide better training feedback to trainees.
We have tested our camera control approach in simulated and laboratory settings, and are pursuing opportunities to deploy it at training ranges. In this paper we outline the elements of our framework and discuss its application for better training support.
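To make the camera selection problem concrete, the sketch below shows a minimal greedy camera-to-target assignment over a small camera network. This is an illustrative toy, not the ASCENT algorithm itself: the `view_quality` metric, the camera fields (`pos`, `range`), and the one-camera-per-target policy are all simplifying assumptions; a real system would model resolution, occlusion, pan/tilt/zoom limits, and a planning horizon.

```python
# Illustrative greedy camera-to-target assignment (an assumption for
# exposition, NOT the paper's ASCENT method): repeatedly commit the
# highest-scoring (camera, target) pair until cameras or targets run out.
import math

def view_quality(camera, target):
    """Toy quality metric: inverse distance, zero beyond the camera's range."""
    d = math.dist(camera["pos"], target)
    return 0.0 if d > camera["range"] else 1.0 / (1.0 + d)

def greedy_select(cameras, targets):
    """Greedily pick the best (camera, target) pair, commit it, repeat.
    Returns a dict mapping camera name -> assigned target index."""
    assignment = {}
    free_cams = set(cameras)
    uncovered = set(range(len(targets)))
    while free_cams and uncovered:
        cam, tgt, q = max(
            ((c, t, view_quality(cameras[c], targets[t]))
             for c in free_cams for t in uncovered),
            key=lambda x: x[2],
        )
        if q == 0.0:  # no remaining camera can see any uncovered target
            break
        assignment[cam] = tgt
        free_cams.remove(cam)
        uncovered.remove(tgt)
    return assignment

# Hypothetical two-camera, two-target scenario:
cameras = {
    "cam_a": {"pos": (0.0, 0.0), "range": 10.0},
    "cam_b": {"pos": (8.0, 0.0), "range": 10.0},
}
targets = [(1.0, 1.0), (7.0, 2.0)]
print(greedy_select(cameras, targets))  # -> {'cam_a': 0, 'cam_b': 1}
```

Greedy assignment like this is a common baseline in active camera network control; approaches in the literature refine it with stochastic quality metrics and multi-step planning over a horizon.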
Keywords: Planning Horizon, Object Tracking, Greedy Heuristic, Camera Network, Training Support