This special issue provides researchers with the opportunity to describe their latest work and progress in the field of social robotics, with the aim of fostering discussion and potential collaboration on the development of theoretical foundations and practical applications of social robots. The special issue focuses on Sociorobotics: the design and implementation of the social behaviors of robots interacting with each other and with humans. The issue provides a platform for those working on the interaction between humans and robots and on the integration of robots into human societies.

This special issue comprises eight papers covering various Sociorobotics topics of interest, including research on robot personalities and behaviors, facial expressions and emotions, robot learning and navigation, and applied assistive robots (recommendation systems, classroom robots, and interaction with children with autism).

In “Human–Robot Facial Expression Reciprocal-Interaction Platform: Case Studies on Children with Autism”, A. Ghorbandaei Pour, A. Taheri, M. Alemi and A. Meghdari present the design and development of a social robot whose facial expressions are intended to evoke reciprocal human–robot interactions with children with autism. A vision system was utilized to recognize the users’ facial expressions, combined with a decision-making system to generate an appropriate response for the robot, consisting of posed facial behaviors and neck movements. The emotional states considered were happiness, sadness, anger, surprise, disgust, and fear. Many of the participants in the presented study displayed interest in interacting with the social robot and engaged with it during interactive imitation games. The acceptability and performance of the social robot were evaluated.
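As an illustration of this kind of perception, decision, and action pipeline, the following minimal Python sketch maps a recognized user emotion to a posed robot facial behavior and neck movement. It is not the authors’ implementation; the mirroring policy, behavior labels, and function names are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): a minimal
# perception-decision-action loop in which a recognized user expression
# is mapped to a posed robot facial behavior and a neck movement.

from dataclasses import dataclass

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "disgust", "fear"]

@dataclass
class RobotBehavior:
    facial_pose: str   # label of a pre-posed facial expression
    neck_motion: str   # label of an accompanying neck movement

# Hypothetical policy: during an imitation game the robot mirrors the
# user's expression; a real decision-making system would be richer.
RESPONSE_TABLE = {
    emotion: RobotBehavior(facial_pose=emotion, neck_motion="nod")
    for emotion in EMOTIONS
}

def decide(recognized_emotion: str) -> RobotBehavior:
    """Map the vision system's output to a robot response."""
    return RESPONSE_TABLE.get(
        recognized_emotion,
        RobotBehavior(facial_pose="neutral", neck_motion="idle"),
    )

if __name__ == "__main__":
    for observation in ["happiness", "surprise", "unknown"]:
        print(observation, "->", decide(observation))
```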

In “Faces of Emotion: Investigating Emotional Facial Expressions Towards a Robot”, I. M. Menne and F. Schwab study emotional reactions towards social robots, considering both verbalized and non-verbalized emotions. They note that humans show observable emotional responses to robots and to how robots are treated, and they argue that facial expressions are an essential channel for natural social human–robot interaction, warranting further investigation of emotional facial expressions. They employ the Facial Action Coding System (FACS), a standardized method for measuring facial expressions. Their experiments studied the emotional facial expressions of 62 human participants as they watched video clips of an entertainment dinosaur robot involved in either a friendly or an unfriendly interaction. The participants’ evoked emotions were evaluated.

In “The Essence of Ethical Reasoning in Robot-Emotion Processing”, S. Ojha, M.-A. Williams and B. Johnston present an ethical reasoning mechanism, providing mathematical and computational models for studying emotions. They investigate the social appropriateness of a robot expressing an emotion versus inhibiting it, arguing that this distinction can lead to the design of robots that are more believable and more socially acceptable in their portrayal of emotions. They hypothesize that ethical reasoning should augment emotion processing to make the robot believable and acceptable. Computational models of emotions were evaluated alongside ethical standards of emotions. A hierarchical model of emotions was described, in which emotions are triggered in response to an event and a cognition layer embedded in the design enables convergence to an appropriate emotional response. The simulated experiments included a household robot deployed in an elderly care home.
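The general idea of augmenting emotion processing with an ethical check can be sketched as follows: a candidate emotion is appraised from an event, and a social-appropriateness test decides whether it is expressed or inhibited. This is an assumption-laden toy, not the authors’ mathematical model; the events, contexts, and rules are invented purely for illustration.

```python
# Illustrative sketch (an assumption, not the paper's model): appraisal
# produces a candidate emotion, and an ethical-reasoning layer decides
# whether expressing it is socially appropriate in the current context.

def appraise(event: str) -> str:
    """Toy appraisal: map an event to a triggered emotion."""
    triggers = {"insulted": "anger", "praised": "joy", "dropped_cup": "distress"}
    return triggers.get(event, "neutral")

def ethically_appropriate(emotion: str, context: str) -> bool:
    """Hypothetical social-appropriateness check for the given context."""
    if context == "elderly_care_home" and emotion == "anger":
        return False  # expressing anger toward a resident is inhibited
    return True

def respond(event: str, context: str) -> str:
    emotion = appraise(event)
    return emotion if ethically_appropriate(emotion, context) else "neutral"

if __name__ == "__main__":
    print(respond("insulted", "elderly_care_home"))   # -> neutral (inhibited)
    print(respond("praised", "elderly_care_home"))    # -> joy (expressed)
```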

In “Avoiding the content treadmill for robot personalities”, D. H. Grollman addresses the challenge of the content treadmill for robot personalities: the continual generation of new content required to keep robot personalities fresh. It is argued that refreshing robot personalities in this way requires significant investment in design and development. Instead, it is proposed that the focus should first be on developing the robot’s personality system, with the needed functionality built afterwards. The personality system is defined as an infinite personality space combined with a drive-centric system. The concept was implemented on a small mobile robot equipped with four drives: food, obstacle, comfort, and human. The experiments concentrated on determining whether the personality system could yield measurable, recognizable differences in robot behavior. The scalability, utility, and adaptability of the proposed methodology are discussed.
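A drive-centric personality system of the general kind described above can be sketched in a few lines of Python; this is an illustrative assumption rather than Grollman’s implementation, with the drive weights, urgency dynamics, and behavior labels chosen only to show how different points in a continuous personality space can produce different, measurable behavior.

```python
# Illustrative sketch (an assumption, not the paper's implementation):
# each drive accumulates urgency from stimuli, personality parameters
# weight the drives, and the most urgent weighted drive selects the
# robot's next behavior.

DRIVES = ["food", "obstacle", "comfort", "human"]

class DrivePersonality:
    def __init__(self, weights):
        # A point in a continuous personality space: per-drive weights.
        self.weights = weights
        self.urgency = {d: 0.0 for d in DRIVES}

    def sense(self, stimuli):
        # Stimuli raise the urgency of the corresponding drives.
        for drive, amount in stimuli.items():
            self.urgency[drive] += amount

    def act(self):
        # Pick the drive with the highest weighted urgency, then relax it.
        drive = max(DRIVES, key=lambda d: self.weights[d] * self.urgency[d])
        self.urgency[drive] *= 0.5
        return f"behavior_for_{drive}"

if __name__ == "__main__":
    cautious = DrivePersonality({"food": 0.2, "obstacle": 1.0, "comfort": 0.8, "human": 0.3})
    sociable = DrivePersonality({"food": 0.2, "obstacle": 0.4, "comfort": 0.3, "human": 1.0})
    stimuli = {"obstacle": 0.6, "human": 0.7}
    for robot in (cautious, sociable):
        robot.sense(stimuli)
        print(robot.act())  # different weights yield different behavior
```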

In “Teaching robot navigation behaviors to optimal RRT planners”, N. Pérez-Higueras, F. Caballero and L. Merino explore the use of Optimal Rapidly-exploring Random Trees (RRT) as the main planner for robot navigation, addressing the challenge of motion planning for robots navigating dynamic spaces that include humans. They propose a new approach to learning navigation behaviors from demonstrations, combining concepts from Inverse Reinforcement Learning with Optimal RRT as the navigation planner. Comparisons were made with related approaches, and the methodology was applied to navigation in spaces requiring human awareness. The planning experiments involved a robot navigating between rooms and locations in a house in which one to three people were present; the initial and goal positions of the robot and the positions of the people were varied, resulting in 30 different demonstration configurations. The physical robot experiments were performed in a robotics laboratory using static and dynamic scenarios, including scenarios with several moving pedestrians. It is stated that the learned navigation behaviors were more socially appropriate.
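To give a flavor of how demonstrations can shape a sampling-based planner, the sketch below scores candidate paths with a weighted sum of social-navigation features (goal distance and proximity to people), the kind of cost an Inverse Reinforcement Learning procedure would tune so that planned paths reproduce demonstrated behavior. The features, weights, and paths are hypothetical, the tree construction itself is omitted, and this is not the authors’ planner.

```python
# Illustrative sketch (an assumption, not the authors' planner): the cost
# assigned to a candidate motion is a weighted sum of social-navigation
# features; learning adjusts the weights so that low-cost paths match
# demonstrated navigation behavior.

import math

def features(point, goal, people):
    """Feature vector for one configuration: goal distance and proximity to people."""
    goal_dist = math.dist(point, goal)
    proximity = sum(math.exp(-math.dist(point, p)) for p in people)
    return [goal_dist, proximity]

def cost(path, goal, people, weights):
    """Weighted feature cost of a path; the planner would minimize this."""
    return sum(
        sum(w * f for w, f in zip(weights, features(q, goal, people)))
        for q in path
    )

if __name__ == "__main__":
    goal = (5.0, 0.0)
    people = [(2.5, 0.2)]
    direct = [(x, 0.0) for x in range(6)]   # passes close to a person
    detour = [(x, 1.5) for x in range(6)]   # keeps more distance
    learned_weights = [1.0, 4.0]            # hypothetical learned values
    print(cost(direct, goal, people, learned_weights))
    print(cost(detour, goal, people, learned_weights))
```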

In “Analyzing the Impact of Different Feature Queries in Active Learning for Social Robots”, V. Gonzalez-Pacheco, M. Malfaz, Á. Castro-González, J. C. Castillo, F. Alonso and M. A. Salichs address the challenges of robots learning from humans. The impact of feedback from a group of human users on a pose-learning task was studied, comparing the effects of active learning with those of passive learning. In active learning, the social robot could ask questions both during and after the learning experiences. The experiments involved 30 human users who trained a social robot to detect certain poses, such as looking to the left or pointing to the right. The authors indicate that, because the methodology is based on feature selection, it could also be combined with other learning approaches.
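The notion of a feature query can be illustrated with the short Python sketch below, in which the robot asks which body features are relevant to a pose label and reduces the training data accordingly before any classifier is applied. The feature names, the simulated answers, and the helper functions are hypothetical and are not taken from the authors’ system.

```python
# Illustrative sketch (an assumption, not the authors' system): a feature-
# query step in which the robot asks the user which body features matter
# for a pose, keeps only those features, and trains on the reduced data.

FEATURES = ["left_arm_angle", "right_arm_angle", "head_yaw", "torso_lean"]

def ask_user(feature: str, pose: str) -> bool:
    """Stand-in for a spoken query such as 'Does my {feature} matter for {pose}?'"""
    simulated_answers = {("pointing_right", "right_arm_angle"): True,
                         ("pointing_right", "head_yaw"): True}
    return simulated_answers.get((pose, feature), False)

def select_features(pose: str):
    """Collect the features the user marks as relevant for this pose."""
    return [f for f in FEATURES if ask_user(f, pose)]

def reduce_dataset(examples, relevant):
    # Keep only the relevant features in each example; any standard
    # classifier could then be trained on the reduced representation.
    return [({f: x[f] for f in relevant}, label) for x, label in examples]

if __name__ == "__main__":
    relevant = select_features("pointing_right")
    examples = [({"left_arm_angle": 10, "right_arm_angle": 85,
                  "head_yaw": 30, "torso_lean": 2}, "pointing_right")]
    print(relevant)
    print(reduce_dataset(examples, relevant))
```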

In “Socially Assistive Robot for Providing Recommendations: Comparing a Humanoid Robot with a Mobile Application”, S. Rossi, M. Staffa and A. Tamburro assess users’ acceptance of recommendations and their level of engagement, comparing recommendations provided by a humanoid socially assistive robot with those provided by a mobile phone app. In the domain of movie recommendation, both systems provided the same information through different mechanisms: the robot used head and gaze orientation, speech, and motion, whereas the app used textual and graphical presentations. The analysis showed that the acceptance rate for the robot was higher than for the app, although not at a statistically significant level. However, the preferences expressed by the users indicated an enhanced experience with the robot compared to the app, with a statistically significant difference.

In “Developing a Prototyping Method for Involving Children in the Design of Classroom Robots”, M. Obaid, G. E. Baykal, A. E. Yantaç and W. Barendregt propose including students in the design of social robots for use in classrooms. The premise is that students can and should take part in user-centered design by influencing the design of the technologies with which they interact. The focus was on the form factor of the classroom robot, and two elicitation techniques were used: clay models and a robot toolkit comprising components such as legs, heads, and torsos. Characteristics provided by the children during the study, such as shape, size, and material, were included in the analysis of form factors. Design heuristics elicited from the studies with children were presented.

We greatly appreciate the encouragement, support, and efforts of the Editor-in-Chief (Professor Shuzhi S. Ge), the Co-Editor-in-Chief (Professor Oussama Khatib), the Springer Senior Editor for Engineering (Nathalie Jacobs), staff at Springer, and numerous reviewers in producing this special issue. It is our hope that this special issue will generate more interest and research endeavors, resulting in better understanding of social robots and their many potential applications for improving human lives.

Guest editors

Arvin Agah, University of Kansas, USA

John-John Cabibihan, Qatar University, Qatar

Ayanna Howard, Georgia Institute of Technology, USA

Miguel A. Salichs, University Carlos III de Madrid, Spain

Hongsheng He, Wichita State University, USA