Abstract
Virtual Reality (VR) is creating a new paradigm for human communication. Today, we can enter a virtual environment and interact with each other through 3D characters. However, VR headsets occlude the user's face, limiting the Motion Capture (MoCap) of facial expressions and, thus, the introduction of this non-verbal component. The only available solution is not suitable for consumer-level applications, as it relies on complex hardware and calibration. In this work, we deliver consumer-level methods for facial MoCap in VR environments. We developed an occlusion-support method compatible with generic facial MoCap systems. We then extract facial features and deploy Random Forest algorithms that accurately estimate the emotions and upper-face movements occluded by the headset. Our VR MoCap methods are validated, and a facial animation use case is provided. With our novel methods, both calibration and hardware requirements are reduced, making face-to-face communication possible in VR environments.
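The abstract describes estimating emotions from extracted facial features with Random Forests. The sketch below illustrates that general idea only; it is not the authors' pipeline. The feature layout, emotion labels, and synthetic training data are all illustrative assumptions, using scikit-learn's `RandomForestClassifier` as a stand-in for the paper's Random Forest models.

```python
# Illustrative sketch: a Random Forest mapping geometric facial features
# (e.g. landmark distances from the visible lower face) to emotion labels.
# Data, feature count, and labels are synthetic placeholders, not the
# authors' actual features or training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical label set (a subset of basic emotions).
EMOTIONS = ["neutral", "happy", "sad", "surprised"]

# Toy data: 200 samples of 10 geometric features.
X = rng.normal(size=(200, 10))
y = rng.integers(0, len(EMOTIONS), size=200)
# Make the toy task learnable: shift one feature by the class index.
X[:, 0] += y * 2.0

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Predict emotions for a few samples.
pred = clf.predict(X[:5])
print([EMOTIONS[p] for p in pred])
```

In the paper's setting, the input features would come from the MoCap system's tracked lower-face landmarks, and the classifier output would drive the occluded upper-face animation.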
Acknowledgements
This work is supported by Instituto de Telecomunicações (Project Incentivo ref: Projeto Incentivo/EEI/LA0008/2014 and project UID ref: UID/EEA/5008/2013) and the University of Porto. The authors would like to thank Elena Kokkinara from Trinity College Dublin and Pedro Mendes from the University of Porto for their support at the beginning of the project.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Miranda, C.R., Orvalho, V.C. (2017). Consumer-Level Virtual Reality Motion Capture. In: Braz, J., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2016. Communications in Computer and Information Science, vol 693. Springer, Cham. https://doi.org/10.1007/978-3-319-64870-5_18
Print ISBN: 978-3-319-64869-9
Online ISBN: 978-3-319-64870-5
eBook Packages: Computer Science (R0)