A Hierarchical Behavior Analysis Approach for Automated Trainee Performance Evaluation in Training Ranges

  • Saad Khan
  • Hui Cheng
  • Rakesh Kumar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8027)

Abstract

In this paper we present a closed-loop mixed reality training system that automatically assesses trainee performance during kinetic military exercises. At the core of our system is a hierarchical behavior analysis approach that fuses multiple sensor modalities, including audio/video, RFID, and IMUs, to capture trainee actions comprehensively. Our behavior analysis and performance evaluation framework uses a finite state machine (FSM) model in which trainee behaviors are the states of the training scenario and state transitions are caused by stimuli that we refer to as trigger events. The goal of behavior analysis is to estimate the trainees' states with respect to the training scenario and to quantify trainee performance. To detect each state robustly, we build a classifier for each behavioral state and each trigger event. At any given time, based on the current state estimate, only the classifiers for trigger events and states reachable to and from the current states are activated. The overall structure of the FSM and its trigger events is determined by a Training Ontology specific to the training scenario.
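The FSM scheme described above, in which only classifiers adjacent to the current state are active, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state names, trigger names, and transition table below are hypothetical, and the paper's real classifiers operate on fused audio/video, RFID, and IMU data rather than symbolic trigger labels.

```python
class BehaviorFSM:
    """Toy FSM over behavioral states, driven by detected trigger events."""

    def __init__(self, transitions, initial_state):
        # transitions: {(state, trigger_name): next_state}. In the paper's
        # framework this structure comes from a scenario-specific
        # Training Ontology.
        self.transitions = transitions
        self.state = initial_state

    def active_triggers(self):
        # Only the trigger-event classifiers relevant to the current state
        # are activated, mirroring the selective detection scheme.
        return sorted(t for (s, t) in self.transitions if s == self.state)

    def step(self, detected_trigger):
        # Transition only if the detected trigger is valid in this state.
        key = (self.state, detected_trigger)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state


# Hypothetical scenario: a trainee moves, takes cover on enemy contact,
# then returns fire and resumes movement when the area is clear.
fsm = BehaviorFSM(
    transitions={
        ("moving", "enemy_contact"): "taking_cover",
        ("taking_cover", "weapon_raised"): "returning_fire",
        ("returning_fire", "all_clear"): "moving",
    },
    initial_state="moving",
)

print(fsm.active_triggers())      # ['enemy_contact']
print(fsm.step("enemy_contact"))  # taking_cover
print(fsm.step("weapon_raised"))  # returning_fire
```

Keying the transition table on (state, trigger) pairs keeps the per-timestep classifier workload proportional to the out-degree of the current state rather than the total number of behaviors in the scenario.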

Keywords

Behavior Analysis · Finite State Machine · Trigger Event · Training Scenario · Trainee Behavior



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Saad Khan¹
  • Hui Cheng¹
  • Rakesh Kumar¹
  1. SRI International, Princeton, USA
