Abstract
This work presents a communication concept for vision-based interaction with an airborne UAS. Unlike previous approaches, this research focuses on high-level mission tasking of the UAS without relying on a radio data link. The paper provides the overall concept design and focuses on communication via gestures. A model describing the gestural syntax for high-level commands, together with a feedback mechanism enabling bidirectional human-machine communication across different operational modes, is presented in detail. Initial real-world experiments evaluate the feasibility of the deployed sensors for the intended purpose.
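The gestural syntax itself is not reproduced on this page. Purely as an illustration of the idea, the following C++ sketch parses a sequence of recognized gesture tokens into a high-level mission command, requiring an explicit confirmation gesture that could trigger the feedback channel. All identifiers (`GestureToken`, `MissionCommand`, `parse_command`) and the token vocabulary are hypothetical and not taken from the paper.

```cpp
#include <optional>
#include <vector>

// Hypothetical gesture tokens produced by the vision front end.
enum class GestureToken { Attention, PointLeft, PointRight, Hold, Confirm };

// Hypothetical high-level mission commands.
enum class MissionCommand { SearchAreaLeft, SearchAreaRight, LoiterHere };

// Parse a token sequence of the form: Attention <command gesture> Confirm.
// No command is issued until a syntactically complete, explicitly
// confirmed sequence has been observed.
std::optional<MissionCommand> parse_command(const std::vector<GestureToken>& seq) {
    if (seq.size() != 3) return std::nullopt;
    if (seq.front() != GestureToken::Attention ||
        seq.back()  != GestureToken::Confirm)
        return std::nullopt;
    switch (seq[1]) {
        case GestureToken::PointLeft:  return MissionCommand::SearchAreaLeft;
        case GestureToken::PointRight: return MissionCommand::SearchAreaRight;
        case GestureToken::Hold:       return MissionCommand::LoiterHere;
        default:                       return std::nullopt;
    }
}
```

Requiring a closing confirmation token mirrors the paper's emphasis on bidirectional feedback: the UAS can acknowledge only a fully formed command, not an isolated gesture.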
Notes
1.
2. The OpenNI framework is an open source SDK for the development of 3D sensing middleware libraries and applications, available at http://openni.ru/index.html.
3. NiTE was a powerful PrimeSense middleware featuring robust user skeleton joint tracking and gesture recognition. Since PrimeSense's acquisition by Apple Inc. in 2013, it is officially no longer available.
4.
5. C++ HOG detector implementation included in the Dlib library.
6. Using the C++ HOG detector and correlation tracker implementations included in the Dlib library; a minimal usage sketch follows these notes.
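Notes 5 and 6 refer to the HOG detector and correlation tracker that ship with the Dlib C++ library. The following is a minimal sketch of how such a detect-then-track pipeline can be wired together with Dlib's public API; the pre-trained model file `person_detector.svm` and the frame file names are assumptions for illustration, not artifacts of the paper.

```cpp
#include <dlib/image_processing.h>
#include <dlib/image_processing/scan_fhog_pyramid.h>
#include <dlib/image_io.h>
#include <cstdio>
#include <vector>

int main() {
    using namespace dlib;

    // HOG-based sliding-window detector (Dalal-Triggs style features);
    // "person_detector.svm" is a placeholder for a pre-trained model file.
    typedef scan_fhog_pyramid<pyramid_down<6>> image_scanner_type;
    object_detector<image_scanner_type> detector;
    deserialize("person_detector.svm") >> detector;

    // Detect the operator in the first frame.
    array2d<unsigned char> img;
    load_image(img, "frame_0000.png");
    std::vector<rectangle> dets = detector(img);
    if (dets.empty()) return 1;

    // Hand the detection over to the correlation tracker (scale-adaptive
    // tracking in the spirit of Danelljan et al. 2014) for later frames.
    correlation_tracker tracker;
    tracker.start_track(img, dets[0]);
    for (int i = 1; i <= 100; ++i) {
        char name[32];
        std::snprintf(name, sizeof(name), "frame_%04d.png", i);
        load_image(img, name);
        tracker.update(img);
        drectangle pos = tracker.get_position();  // current operator box
        (void)pos;  // e.g. feed into gesture segmentation downstream
    }
    return 0;
}
```

The hand-over from a comparatively expensive HOG detection to a correlation tracker is a common pattern: detection localizes the operator once, and the tracker then follows the bounding box at frame rate.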
References
Venetsky, L., Tieman, J.W.: Robotic gesture recognition system, 20 October 2009
Pfeil, K., Koh, S.L., LaViola, J.: Exploring 3D gesture metaphors for interaction with unmanned aerial vehicles. In: Proceedings of the 2013 International Conference on Intelligent User Interfaces, pp. 257–266 (2013)
Wagner, P.K., Peres, S.M., Madeo, R.C.B., de Moraes Lima, C.A., de Almeida Freitas, F.: Gesture unit segmentation using spatial-temporal information and machine learning. In: FLAIRS Conference (2014)
Monajjemi, V.M., Wawerla, J., Vaughan, R., Mori, G.: HRI in the sky: creating and commanding teams of UAVs with a vision-mediated gestural interface. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 617–623 (2013)
Nagi, J., Giusti, A., Di Caro, G.A., Gambardella, L.M.: HRI in the sky: controlling UAVs using face poses and hand gestures. In: HRI, pp. 252–253 (2014)
Venetsky, L., Husni, M., Yager, M.: Gesture recognition for UCAV-N flight deck operations: problem definition final report, Naval Air Systems Command, January 2003
Cicirelli, G., Attolico, C., Guaragnella, C., D’Orazio, T.: A kinect-based gesture recognition approach for a natural human robot interface. Int. J. Adv. Robot. Syst. 12, 22 (2015)
McNeill, D.: Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, Chicago (1992)
Bressem, J., Ladewig, S.H.: Rethinking gesture phases: articulatory features of gestural movement? Semiotica 2011(184), 53–91 (2011)
Kendon, A.: Gesticulation and speech: two aspects of the process of utterance. Relatsh. Verbal Nonverbal Commun. 25, 207–227 (1980)
Fricke, E.: Grammatik Multimodal: Wie Wörter und Gesten Zusammenwirken. Walter De Gruyter Incorporated, Boston (2012)
Kranstedt, A., Kühnlein, P., Wachsmuth, I.: Deixis in multimodal human computer interaction: an interdisciplinary approach. In: Camurri, A., Volpe, G. (eds.) GW 2003. LNCS (LNAI), vol. 2915, pp. 112–123. Springer, Heidelberg (2003)
Monajjemi, M., Bruce, J., Sadat, S.A., Wawerla, J., Vaughan, R.: UAV, do you see me? Establishing mutual attention between an uninstrumented human and an outdoor UAV in flight. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3614–3620 (2015)
Anjum, M.L., Ahmad, O., Rosa, S., Yin, J., Bona, B.: Skeleton tracking based complex human activity recognition using kinect camera. In: Beetz, M., Johnston, B., Williams, M.-A. (eds.) ICSR 2014. LNCS, vol. 8755, pp. 23–33. Springer, Heidelberg (2014)
Verschae, R., Ruiz-del-Solar, J.: Object detection: current and future directions. Front. Robot. AI 2, 1475 (2015)
Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 886–893 (2005)
Danelljan, M., Häger, G., Shahbaz Khan, F., Felsberg, M.: Accurate scale estimation for robust visual tracking. In: British Machine Vision Conference, p. 65.1 (2014)
King, D.E.: Dlib-ml: a machine learning toolkit. J. Mach. Learn. Res. 10, 1755–1758 (2009)
Schwarz, L.A., Mkhitaryan, A., Mateus, D., Navab, N.: Estimating human 3D pose from time-of-flight images based on geodesic distances and optical flow. In: 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, pp. 700–706 (2011)
Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2010)
Copyright information
© 2016 Springer International Publishing AG
About this paper
Cite this paper
Schelle, A., Stütz, P. (2016). Modelling Visual Communication with UAS. In: Hodicky, J. (ed.) Modelling and Simulation for Autonomous Systems. MESAS 2016. Lecture Notes in Computer Science, vol. 9991. Springer, Cham. https://doi.org/10.1007/978-3-319-47605-6_7
DOI: https://doi.org/10.1007/978-3-319-47605-6_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-47604-9
Online ISBN: 978-3-319-47605-6