Mixed Reality Medical First Aid Training System Based on Body Identification

  • Jiayu Wang
  • Ruoxiu Xiao
  • Lijing Jia
  • Xianmei Wang (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11903)


Effective first aid training helps improve the survival rate of the wounded in natural disasters, emergencies, and wars. However, traditional first aid training relies on the explanation and demonstration of experts, which has certain limitations. In this paper, we propose a novel first aid training system. Its implementation comprises three steps. First, images of a medical body model are collected to construct a data set. Second, key parts of the human body are identified and located by a designed lightweight YOLO_v2 network. Finally, based on the identification results, virtual guidance is merged into the real environment through HoloLens glasses and uploaded to the server. With Mixed Reality technology, we superimpose the corresponding virtual emergency instructions for cardiopulmonary resuscitation and artificial respiration onto the key parts of the body and pass them back to the HoloLens glasses to realize first aid training. Experimental results show that the proposed medical first aid training system gives learners realistic hands-on practice and improves learning efficiency.
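The detection step described above follows the usual YOLO-style pipeline: the network emits candidate boxes with confidence scores, which are then filtered by a confidence threshold and per-class non-maximum suppression before the surviving body-part locations are handed to the MR overlay. The sketch below illustrates only that generic post-processing stage; the detection format, labels, and thresholds are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical post-processing for a YOLO-style body-part detector.
# Each raw detection is a dict: {"box": (x1, y1, x2, y2), "score": float, "label": str}.
# The box coordinates, labels, and threshold values are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detect_key_parts(raw, conf_thresh=0.5, iou_thresh=0.45):
    """Keep confident detections, suppressing overlapping boxes of the same label."""
    kept = []
    # Greedy NMS: visit detections in descending score order.
    for det in sorted((d for d in raw if d["score"] >= conf_thresh),
                      key=lambda d: d["score"], reverse=True):
        if all(det["label"] != k["label"] or iou(det["box"], k["box"]) < iou_thresh
               for k in kept):
            kept.append(det)
    return kept

# Example: two overlapping "chest" candidates collapse to one; the weak box is dropped.
raw = [
    {"box": (100, 100, 200, 200), "score": 0.9, "label": "chest"},
    {"box": (105, 102, 205, 198), "score": 0.6, "label": "chest"},
    {"box": (300, 50, 360, 120), "score": 0.8, "label": "head"},
    {"box": (0, 0, 10, 10), "score": 0.3, "label": "chest"},
]
print([d["label"] for d in detect_key_parts(raw)])  # → ['chest', 'head']
```

In a full system the surviving box centers would be mapped into HoloLens world coordinates so the CPR or artificial-respiration instructions anchor to the detected body part.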


Keywords: Body identification · You Only Look Once (YOLO) · Mixed Reality · Medical first aid · Training system



This work was supported in part by grants from the National Natural Science Foundation of China (61701022), the Beijing Science & Technology Program (Z181100001018017), and the Beijing Natural Science Foundation (7182158).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jiayu Wang (1)
  • Ruoxiu Xiao (1)
  • Lijing Jia (2)
  • Xianmei Wang (1, corresponding author)
  1. School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
  2. Emergency Department, Chinese PLA General Hospital, Beijing, China
