
Conversational Agent Module for French Sign Language Using Kinect Sensor

  • Conference paper
  • First Online:
Understanding Human Activities Through 3D Sensors (UHA3DS 2016)

Part of the book series: Lecture Notes in Computer Science (volume 10188)

Abstract

Inside a CAVE, various AR/VR scenarios can be constructed, some of which rely on interaction with a conversational agent. For a "deaf-mute" user, this interaction must be based on sign language. This paper proposes a "deaf-mute conversational agent" module built on sign-language interaction. The AR module combines Kinect-based acquisition with real-time 3D gesture-recognition techniques.
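The abstract does not specify the recognition algorithm, but a common baseline for real-time 3D gesture recognition from Kinect skeleton data is template matching over joint trajectories with dynamic time warping (DTW). The sketch below is purely illustrative of that general approach, not the authors' method; the function names and the nearest-neighbour matching scheme are assumptions.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two trajectories of
    3D joint positions, each frame an (x, y, z) tuple.
    DTW tolerates the speed variation natural in signed gestures."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    # d[i][j] = cost of aligning the first i frames of seq_a
    # with the first j frames of seq_b
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a frame in seq_a
                                 d[i][j - 1],      # skip a frame in seq_b
                                 d[i - 1][j - 1])  # match both frames
    return d[n][m]

def classify_gesture(trajectory, templates):
    """Hypothetical nearest-neighbour matcher: return the label of
    the recorded template trajectory closest to the observed one."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))
```

In a deployed module the trajectories would first be normalized (e.g. centered on the torso joint and scaled by shoulder width) so that recognition is invariant to the signer's position and size in front of the sensor.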




Author information

Corresponding author

Correspondence to Taha Ridene.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Poulet, T., Haffreingue, V., Ridene, T. (2018). Conversational Agent Module for French Sign Language Using Kinect Sensor. In: Wannous, H., Pala, P., Daoudi, M., Flórez-Revuelta, F. (eds.) Understanding Human Activities Through 3D Sensors. UHA3DS 2016. Lecture Notes in Computer Science, vol. 10188. Springer, Cham. https://doi.org/10.1007/978-3-319-91863-1_4


  • DOI: https://doi.org/10.1007/978-3-319-91863-1_4

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91862-4

  • Online ISBN: 978-3-319-91863-1

  • eBook Packages: Computer Science (R0)
