Tactile Facial Action Units Toward Enriching Social Interactions for Individuals Who Are Blind

  • Troy McDaniel
  • Samjhana Devkota
  • Ramin Tadayon
  • Bryan Duarte
  • Bijan Fakhri
  • Sethuraman Panchanathan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11010)

Abstract

Social interactions mediate our communication with others, enable the development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter carries the majority of the information (~65%). Given their inherently visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage compared to their sighted peers. For individuals who are blind, embarrassing social situations caused by miscommunication are not uncommon and can lead to social avoidance and isolation. In this paper, we propose a mapping from visual facial expressions, represented as facial action units that may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet, real-time perception of facial expressions during social interactions by individuals who are blind.
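To make the proposed pipeline concrete, the minimal Python sketch below illustrates the shape of such a visual-to-tactile mapping: facial action units (AUs), as defined by the Facial Action Coding System, arrive from a computer vision front end and are translated into commands for a vibrotactile array. The specific AU subset, motor layout, and vibration parameters here are illustrative assumptions for exposition, not the authors' design.

# Hypothetical sketch of an AU-to-vibrotactile mapping; the detector,
# actuator layout, and pattern parameters are assumptions, not the
# paper's implementation.
from dataclasses import dataclass

@dataclass
class VibrationPattern:
    motor_id: int        # which vibrotactile actuator to drive
    frequency_hz: float  # vibration frequency
    duration_ms: int     # pulse length
    intensity: float     # normalized amplitude, 0.0 to 1.0

# Illustrative mapping from FACS action units to tactile patterns.
AU_TO_PATTERN = {
    "AU1":  VibrationPattern(0, 150, 300, 0.6),  # inner brow raiser
    "AU6":  VibrationPattern(1, 200, 250, 0.8),  # cheek raiser
    "AU12": VibrationPattern(2, 250, 400, 1.0),  # lip corner puller
    "AU15": VibrationPattern(3, 100, 400, 0.7),  # lip corner depressor
}

def render_tactile(active_aus):
    """Translate detected action units into actuator commands,
    skipping AUs the tactile vocabulary does not cover."""
    return [AU_TO_PATTERN[au] for au in active_aus if au in AU_TO_PATTERN]

if __name__ == "__main__":
    # Example: a vision front end (not shown) reports a Duchenne smile.
    for cmd in render_tactile(["AU6", "AU12"]):
        print(f"motor {cmd.motor_id}: {cmd.frequency_hz} Hz, "
              f"{cmd.duration_ms} ms at intensity {cmd.intensity}")

In a wearable realization, each motor_id would correspond to a fixed location on the body (e.g., around the waist or on the hand), so that distinct facial movements remain spatially distinguishable in real time.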

Keywords

Social assistive aids · Assistive technology · Visual-to-tactile mapping · Sensory substitution · Facial action units


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Troy McDaniel (1)
  • Samjhana Devkota (1)
  • Ramin Tadayon (1)
  • Bryan Duarte (1)
  • Bijan Fakhri (1)
  • Sethuraman Panchanathan (1)

  1. Center for Cognitive Ubiquitous Computing, School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, USA