Understanding Movement and Interaction: An Ontology for Kinect-Based 3D Depth Sensors
Microsoft Kinect has attracted great attention from research communities, resulting in numerous interaction and entertainment applications. However, to the best of our knowledge, no ontology for 3D depth sensors exists. Including automated semantic reasoning in these settings would open the door to new research, making it possible not only to track what a user is doing but also to understand it. We took a first step towards this new paradigm and developed a 3D depth sensor ontology that models features of user movement and object interaction. We believe in the potential of integrating semantics into computer vision: as 3D depth sensors and ontology-based applications mature, the ontology could be used, for instance, for activity recognition, combined with semantic maps to support visually impaired people, or in assistive technologies such as remote rehabilitation.
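To make the idea of semantic reasoning over depth-sensor data concrete, the following is a minimal sketch in plain Python. All class and property names (`kinect:User`, `kinect:hasJoint`, `kinect:isNear`, `kinect:interactsWith`, etc.) are hypothetical illustrations, not the ontology actually proposed in the paper: facts emitted by a tracking layer are stored as RDF-style triples, and a toy rule infers a user–object interaction from them.

```python
# Hypothetical sketch: RDF-style triples plus a toy inference rule.
# Names and vocabulary are illustrative, not the paper's actual ontology.

# Facts a Kinect-style tracking layer might assert about a scene.
triples = {
    ("user1", "rdf:type", "kinect:User"),
    ("user1", "kinect:hasJoint", "rightHand1"),
    ("rightHand1", "rdf:type", "kinect:HandJoint"),
    ("cup1", "rdf:type", "kinect:GraspableObject"),
    ("rightHand1", "kinect:isNear", "cup1"),  # produced by low-level tracking
}

def holds(s, p, o):
    """Check whether a fact is asserted in the triple store."""
    return (s, p, o) in triples

def infer_interactions(facts):
    """Toy rule: if a user's hand joint is near a graspable object,
    conclude that the user interacts with that object."""
    inferred = set()
    for (user, p1, joint) in facts:
        if p1 != "kinect:hasJoint":
            continue
        for (j, p2, obj) in facts:
            if (j == joint and p2 == "kinect:isNear"
                    and holds(obj, "rdf:type", "kinect:GraspableObject")):
                inferred.add((user, "kinect:interactsWith", obj))
    return inferred

print(infer_interactions(triples))
# → {('user1', 'kinect:interactsWith', 'cup1')}
```

In a real system the hand-coded rule would be replaced by an OWL reasoner or SPARQL query over the ontology, but the principle is the same: raw tracking output becomes facts, and reasoning turns those facts into statements about what the user is doing.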
Keywords: Ontology · Kinect · Human Activity Modelling and Recognition · Ubiquitous Computing