Mixed Reality Medical First Aid Training System Based on Body Identification
Effective first aid training helps improve the survival rate of casualties in natural disasters, emergencies, and wars. However, traditional first aid training relies on the explanation and demonstration of experts, which has certain limitations. In this paper, we propose a novel first aid training system. Its implementation consists of three steps. First, images of a medical body model are collected to construct the dataset. Second, key parts of the human body are identified and located by a designed lightweight YOLO_v2 network. Finally, according to the body-identification results, virtual guidance is merged into the real environment through HoloLens glasses and uploaded to the server. Using Mixed Reality technology, we superimpose the corresponding virtual emergency instructions for cardiopulmonary resuscitation and artificial respiration on the key body parts, and pass the result back to the HoloLens glasses to realize first aid training. Experimental results show that the proposed medical first aid training system gives learners a realistic tactile experience and improves learning efficiency.
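The detection step can be illustrated with a minimal sketch of YOLOv2-style output decoding: each grid cell predicts a box offset, an objectness score, and class scores, which are converted to pixel-space boxes over body parts. The grid size, the one-box-per-cell simplification, and the body-part class list below are illustrative assumptions, not the authors' actual network configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def decode_yolo_v2(pred, img_w, img_h, conf_thresh=0.5):
    """Decode a YOLOv2-style grid prediction into detections.

    pred: (S, S, 5 + C) array, one box per cell for simplicity:
          [tx, ty, tw, th, objectness, class scores...].
    Returns a list of (class_id, confidence, cx, cy, w, h) in pixels.
    """
    S = pred.shape[0]
    boxes = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            obj = sigmoid(cell[4])
            if obj < conf_thresh:
                continue
            # Box center is offset within the cell; size is exponential.
            cx = (col + sigmoid(cell[0])) / S * img_w
            cy = (row + sigmoid(cell[1])) / S * img_h
            w = np.exp(cell[2]) / S * img_w
            h = np.exp(cell[3]) / S * img_h
            cls_scores = softmax(cell[5:])
            cls = int(np.argmax(cls_scores))
            boxes.append((cls, float(obj * cls_scores[cls]), cx, cy, w, h))
    return boxes

# Hypothetical body-part classes for first aid guidance.
PART_NAMES = ["head", "chest", "mouth", "abdomen"]

# Synthetic 13x13 grid with a single strong "chest" detection.
pred = np.zeros((13, 13, 5 + len(PART_NAMES)))
pred[..., 4] = -10.0                              # suppress objectness everywhere
pred[6, 6] = [0, 0, 0, 0, 10.0, 0, 8.0, 0, 0]     # one confident hit, class 1

boxes = decode_yolo_v2(pred, 416, 416)
cls, conf, cx, cy, w, h = boxes[0]
print(PART_NAMES[cls], round(conf, 3), (cx, cy), (w, h))
```

A detected chest box like this would then drive where the CPR instruction overlay is anchored in the HoloLens view.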
Keywords: Body identification · You Only Look Once (YOLO) · Mixed Reality · Medical first aid · Training system
This work was supported in part by grants from the National Natural Science Foundation of China (61701022), the Beijing Science & Technology Program (Z181100001018017), and the Beijing Natural Science Foundation (7182158).