Abstract
The paper describes a system able to recognize a user's identity from how he or she looks at the monitor while using a given interface. The system needs no invasive measurements that could limit the naturalness of the user's actions. The proposed approach clusters the sequences of observed points on the screen and characterizes the user's identity according to the relevant detected patterns. Moreover, the system can identify recurring patterns in order to achieve more accurate recognition and to create prototypes of natural facial dynamics in user expressions. The possibility of characterizing people through facial movements introduces a new perspective on human-machine interaction: for example, a user can obtain different contents according to his or her mood, or a software interface can modify itself to retain the attention of a bored user. The success rate of the classification using only 7 parameters is around 68%. The approach is based on k-means, tuned to maximize an index involving the number of true-positive detections and conditional probabilities. Different settings of this index make it possible either to focus on the identification of a single user or to spot a movement shared by a wide range of people. The experiments show that the performance can reach 90% correct recognition.
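The paper does not include its implementation, but the core idea — cluster the observed screen points with k-means, then characterize each user by how his or her points distribute over the clusters — can be sketched as follows. This is a minimal illustration, not the authors' method: the 2-D gaze coordinates, the deterministic farthest-point initialization, and the histogram-distance classifier are all assumptions; the paper's tuned index over true positives and conditional probabilities is omitted here.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point init."""
    centroids = [points[0]]
    while len(centroids) < k:  # seed each new centroid far from the others
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        new = [mean(cl) if cl else centroids[i] for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids

def signature(points, centroids):
    """Normalized histogram of cluster assignments — the user 'pattern'."""
    k = len(centroids)
    hist = [0.0] * k
    for p in points:
        hist[min(range(k), key=lambda c: dist2(p, centroids[c]))] += 1
    n = len(points) or 1
    return [h / n for h in hist]

def classify(points, centroids, profiles):
    """Assign a trace to the enrolled user with the closest signature."""
    sig = signature(points, centroids)
    return min(profiles, key=lambda u: sum((a - b) ** 2 for a, b in zip(sig, profiles[u])))

# Synthetic demo: two hypothetical users who fixate different screen regions.
rnd = random.Random(1)
def trace(cx, cy, n=200):
    return [(rnd.gauss(cx, 30), rnd.gauss(cy, 30)) for _ in range(n)]

train = {"userA": trace(100, 100), "userB": trace(700, 500)}
all_pts = [p for t in train.values() for p in t]
cents = kmeans(all_pts, k=2)
profiles = {u: signature(t, cents) for u, t in train.items()}
```

A fresh trace from the same screen region as userA's enrollment data is then matched back to userA by comparing cluster-usage histograms, which is the kind of pattern-based identification the abstract describes.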
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Scardino, G., Infantino, I., Vella, F. (2013). Recognition of Human Identity by Detection of User Activity. In: Marinos, L., Askoxylakis, I. (eds) Human Aspects of Information Security, Privacy, and Trust. HAS 2013. Lecture Notes in Computer Science, vol 8030. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39345-7_6
DOI: https://doi.org/10.1007/978-3-642-39345-7_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39344-0
Online ISBN: 978-3-642-39345-7