Context Aware Human-Robot and Human-Agent Interaction

  • Nadia Magnenat-Thalmann
  • Junsong Yuan
  • Daniel Thalmann
  • Bum-Jae You

Part of the Human–Computer Interaction Series (HCIS) book series

Table of contents

  1. Front Matter
    Pages i-xiii
  2. User Understanding Through Multisensory Perception

    1. Front Matter
      Pages 1-1
    2. Jianfeng Ren, Xudong Jiang, Junsong Yuan
      Pages 3-29
    3. Yang Xiao, Hui Liang, Junsong Yuan, Daniel Thalmann
      Pages 31-53
    4. Kai Wu, Andy W. H. Khong
      Pages 55-78
  3. Facial and Body Modelling Animation

    1. Front Matter
      Pages 79-79
    2. Martin Constable, Justin Dauwels, Shoko Dauwels, Rasheed Umer, Mengyu Zhou, Yasir Tahir
      Pages 81-111
    3. Hyewon Seo
      Pages 113-132
    4. Junghyun Cho, Heeseung Choi, Sang Chul Ahn, Ig-Jae Kim
      Pages 133-149
    5. Il Hong Suh, Sang Hyoung Lee
      Pages 151-173
    6. Sukwon Lee, Sung-Hee Lee
      Pages 175-189
    7. Jun Lee, Nadia Magnenat-Thalmann, Daniel Thalmann
      Pages 191-207
  4. Modelling Human Behaviours

    1. Front Matter
      Pages 209-209
    2. Juzheng Zhang, Jianmin Zheng, Nadia Magnenat-Thalmann
      Pages 211-236
    3. Aryel Beck, Zhang Zhijun, Nadia Magnenat-Thalmann
      Pages 237-256
    4. Samuel Lemercier, Daniel Thalmann
      Pages 257-274
    5. Zerrin Yumak, Nadia Magnenat-Thalmann
      Pages 275-298

About this book

Introduction

This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and consider the potential for these virtual humans and robots to replace or stand in for their human counterparts. They address awareness of and reaction to real-world stimuli using the same modalities humans do: verbal and body gestures, facial expressions and gaze, in support of seamless human-computer interaction (HCI).

The research presented in this volume is split into three sections:

  • User Understanding Through Multisensory Perception: deals with the analysis and recognition of a given situation or stimulus, addressing issues of facial recognition, body gestures and sound localization.

  • Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion.

  • Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting with and reacting to real humans and each other.

Context Aware Human-Robot and Human-Agent Interaction will be of great use to students, academics and industry specialists in areas such as robotics, HCI and computer graphics.

Keywords

Autonomous Virtual Humans · Computer animation · Human-Computer Interaction (HCI) · Perception and awareness · Social Robotics

Editors and affiliations

  • Nadia Magnenat-Thalmann (1)
  • Junsong Yuan (2)
  • Daniel Thalmann (3)
  • Bum-Jae You (4)

  1. Institute for Media Innovation, Nanyang Technological University, Singapore, Singapore
  2. Institute for Media Innovation, Nanyang Technological University, Singapore, Singapore
  3. Institute for Media Innovation, Nanyang Technological University, Singapore, Singapore
  4. Center of HCI for Coexistence, Korea Institute of Science & Technology, Seoul, Republic of Korea

Bibliographic information

  • DOI https://doi.org/10.1007/978-3-319-19947-4
  • Copyright Information Springer International Publishing Switzerland 2016
  • Publisher Name Springer, Cham
  • eBook Packages Computer Science
  • Print ISBN 978-3-319-19946-7
  • Online ISBN 978-3-319-19947-4
  • Series Print ISSN 1571-5035