Towards Truly Affective AAL Systems

  • Mara Pudane
  • Sintija Petrovica
  • Egons Lavendelis
  • Hazım Kemal Ekenel
Open Access
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11369)


Affective computing is a growing field of artificial intelligence. It focuses on models and strategies for detecting, obtaining, and expressing various affective states, including emotions, moods, and personality-related attributes. The techniques and models developed in affective computing are applicable to various affective contexts, including Ambient Assisted Living. One hypothesis for the origin of emotion is that its primary purpose was to regulate social interactions. Since one of the crucial characteristics of Ambient Assisted Living systems is supporting social contact, it is unthinkable to build such systems without considering emotions. Moreover, the emotional capacity needed for Ambient Assisted Living systems exceeds simply detecting user emotions and displaying the system's emotional expressions. Emotion generation and the mapping of emotions onto the system's rational thinking and behavior should also be considered. The chapter discusses the need and requirements for these processes in the context of various application domains of Ambient Assisted Living, i.e., healthcare, mobility, education, and social interaction.


Keywords: Affective computing · Social interaction · Healthcare · Education · Mobility

1 Introduction

Ambient Assisted Living (AAL) can be described as concepts, products, and services that combine new technologies and the social environment to improve the quality of life for people at all stages of their lifetime [1]. From an individual perspective, the quality of life can be considered in terms of well-being. It includes emotional (self-esteem, emotional intelligence, mindset), social (friends, family, community), and physical (health, physical safety) aspects of a person's life [2]. Humans are social beings; thus, one of the most important tasks of AAL is facilitating social contact [3]. This is achievable through the implementation of affect-detecting and affect-processing mechanisms in a system (affect being a generic term covering feelings, mood, emotions, etc.). Affective data enhance a system's ability to make rational decisions and achieve its goals by serving as extra information for detecting the context of a particular situation and as a mediator through which information can be passed.

Integration of affective capabilities in AAL systems requires knowledge from various fields, including cognitive psychology, neuroscience, medicine, and computer science. This knowledge has been of paramount importance in the artificial intelligence (AI) field of affective computing, which mainly focuses on the study and development of systems and devices that can recognize, interpret, process, and simulate human emotions [4], and which has produced a significant amount of research, algorithms, and methods. One question that has been at the center of attention since the first affective systems appeared concerns their affective abilities; to put it simply, what kind of emotional processes does a system need? Studies answering this question have identified the main affective processes of affective systems (namely, emotion recognition, emotion expression, emotion generation, and emotion mapping on rational behavior); it has been argued that, depending on their focus, not all systems need all of these processes [5].

This chapter also considers AAL applications targeted at helping not only older adults but also younger people (since health disorders can affect anyone at any age) to live independently and comfortably in their living environment. However, living environments include not only users' houses but also the various environments surrounding them, such as city streets, schools, shops, restaurants, and other places. Therefore, these people have needs for movement, social interaction, healthcare, and the acquisition of knowledge and skills, including not only those related to specific problem domains (e.g., mathematics) but also the basic skills required for everyday life, such as eating or cleaning. To support the emotional, physical, and mental needs emerging in such extended AAL environments, four AAL application domains, including healthcare, education (teaching/learning), mobility (transportation), and social interaction, are analyzed in terms of the previously mentioned affective processes.

The chapter starts by explaining the complexity of affective systems and the advancements in the affective computing field, and describes the affective processes and their implementations in affective computing systems. Next, the need for emotions in existing AAL application areas is discussed, and a short analysis of AAL systems in the context of the basic emotional processes is provided.

2 General Emotional Processes of Affective Systems

Affective computing (AC), which began its advancement in 1997 [4], aims to endow computers with the ability to detect, recognize, interpret, process, and simulate human emotions from visual, textual, and auditory sources, as well as to respond appropriately [6]. AC humanizes human-computer interaction by building artificial emotional intelligence. As natural language interactions with technology continue to evolve (examples include search, bots, and personal assistants), emotion recognition is already emerging to improve advertising, marketing, entertainment, travel, customer service, and healthcare [7].

Advances in data processing speeds, together with progress in computer science, AI, machine learning, psychology, and neuroscience, are all contributing to the expansion of the AC field [8]. Computers, cameras, and sensors can capture facial expressions, gaze, posture, gestures, tone of voice, speech, and patterns of keyboard and/or mouse usage, as well as physiological signals (e.g., skin temperature or conductance, heart rate, and blood volume pulse) to register changes in a user's emotional state [6].

Analysis of existing studies shows that numerous computational models of emotions have been developed and applied by researchers working in the AC area. The abundance of such systems and applications has prompted discussion of the main affective processes and of systems' affective abilities in general.

One of the fundamental works in this direction was done by Hudlicka, who proposed a general affective system framework [9]. The framework focuses on the roles of emotions and on how these roles are fulfilled in artificial units. Such a general approach allows systematic and organized design and implementation of the necessary processes and functions, and enables comparison of the affective mechanisms of various systems. In AC, an abstract affective component can be identified that executes three processes: affect recognition, affect calculation, and affect expression [4]. Affect calculation may comprise two separate processes: emotion generation and emotion mapping on behavior [9]. By combining these ideas, Petrovica and Pudane [10] have defined the processes needed for a fully affective system that interacts with a user (see Fig. 1).
Fig. 1.

Affective processes performed by an emotion-aware system (only affective interactions with a user are shown) (adapted from [10]).
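The interplay of these processes can be sketched as a minimal control loop. The class, emotion labels, and rules below are purely illustrative assumptions for showing how the stages connect, not part of the frameworks in [9] or [10].

```python
class AffectiveSystem:
    """Minimal sketch of the four affective processes (illustrative only)."""

    def recognize(self, sensor_data):
        # Emotion recognition: map raw multimodal cues to a user emotion label.
        return sensor_data.get("dominant_emotion", "neutral")

    def generate(self, user_emotion, system_goals):
        # Emotion generation: appraise the situation against the system's goals.
        if user_emotion in ("sadness", "frustration") and "support_user" in system_goals:
            return "concern"
        return "calm"

    def map_to_behavior(self, system_emotion):
        # Emotion mapping: let the internal state bias rational behavior.
        return {"concern": "offer_help", "calm": "continue_task"}[system_emotion]

    def express(self, system_emotion):
        # Emotion expression: render the state through an available modality.
        return {"concern": "soft voice, attentive posture",
                "calm": "neutral face"}[system_emotion]

system = AffectiveSystem()
user_emotion = system.recognize({"dominant_emotion": "frustration"})
internal = system.generate(user_emotion, {"support_user"})
action = system.map_to_behavior(internal)
print(user_emotion, internal, action, system.express(internal))
```

A real system would replace each method with a substantial subsystem (classifiers, appraisal models, behavior planners), but the data flow between the four blocks remains the same.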

Emotion recognition is usually done by extracting emotional cues from one or more modalities, e.g., facial expressions [11], gestures [12], body postures [13], or voice [14]. Perception of various modalities is a precondition for automatically detecting emotions and accordingly adapting the behavior of AAL systems. In the AC field, affect detection is achieved both through non-intrusive sensors, which do not require physical contact (e.g., video cameras, eye trackers, and microphones), and through intrusive sensors, which require contact with the human body (e.g., physiological or haptic (touch) sensors). Since the main goal of the AAL field is the development of non-intrusive intelligent systems able to proactively support people with special needs in their daily activities, non-invasive user monitoring is an important aspect of AAL systems [15].
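Cues from several modalities are often combined by some form of late fusion over per-modality predictions. The weights, labels, and probabilities below are illustrative assumptions, not values from any cited system.

```python
# Illustrative late fusion: each modality classifier (face, voice, posture)
# is assumed to output a probability distribution over the same emotion labels.
MODALITY_WEIGHTS = {"face": 0.5, "voice": 0.3, "posture": 0.2}  # assumed weights

def fuse(predictions):
    """predictions: {modality: {emotion: probability}} -> fused top emotion."""
    fused = {}
    for modality, dist in predictions.items():
        w = MODALITY_WEIGHTS.get(modality, 0.0)
        for emotion, p in dist.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * p
    return max(fused, key=fused.get)

print(fuse({
    "face":    {"joy": 0.9, "neutral": 0.1},
    "voice":   {"joy": 0.4, "neutral": 0.6},
    "posture": {"neutral": 1.0},
}))  # prints: joy
```

Weighted voting of this kind is only one option; real systems may instead fuse at the feature level or learn the combination jointly.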

Emotional state generation is related to the appraisal of stimuli that cause subjective emotional experience. Emotional responses are triggered by various events that are evaluated as significant for a person's (or robot's/agent's) expectations, needs, or goals. Therefore, the same stimulus can produce distinct emotions, depending on differences in the person's interpretation [9]. In the AC field, affect generation is achieved through computational emotion modeling. One of its goals is to enrich the architecture of intelligent systems with emotion mechanisms similar to those of humans and thus endow them with the capacity to "have" emotions. In the AAL context, some studies exist in this direction; e.g., [16] describes a need-inspired emotion model applied in a HiFi agent whose emotions are generated by evaluating the situation and comparing it to the agent's different needs.
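A need-based appraisal of this kind can be illustrated with a toy example. The needs, weights, and thresholds below are invented for illustration and are not taken from the HiFi agent in [16].

```python
# Toy need-based emotion generation: the same event yields different emotions
# depending on how it relates to the agent's current needs.
NEEDS = {"social_contact": 0.8, "rest": 0.2}  # assumed need intensities in [0, 1]

def appraise(event_effects):
    """event_effects: how an event changes need satisfaction, values in [-1, 1]."""
    score = sum(NEEDS[n] * delta for n, delta in event_effects.items())
    if score > 0.3:
        return "joy"
    if score < -0.3:
        return "distress"
    return "neutral"

# A visitor arrives: strongly satisfies social contact, slightly disturbs rest.
print(appraise({"social_contact": 0.9, "rest": -0.5}))  # prints: joy
```

With the need intensities reversed (e.g., a tired user who mainly needs rest), the very same event would be appraised negatively, which is the point of appraisal-based generation.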

Emotion mapping on cognition and behavior means defining the reasoning or behavior changes caused by an emotional experience. Emotions can lead to the expression and communication of different reactions or the activation of specific actions in a person's (or agent's/robot's) body. Thus, models of emotion effects should deal with the multimodal nature of emotion. Systems with embodied agents need to express emotions not only through behavior, but also through the other modalities available in their particular embodiment (e.g., facial expressions, speech, or gestures). One possible approach for mapping emotions to behavioral reactions is a behavior-consequent model, which aligns an emotional state with physical actions or other direct outward or social expressions, for instance smiling when happy. Behavior-consequent models are often used to synthesize human-like emotional or social behavior in embodied robots such as Kismet [17] or in virtual agents such as Max [18]. Regarding AAL developments able to link emotions with behavioral effects, only a few projects can be found. For example, in the NICA project [19], a behavioral architecture is developed for a social empathic robot that can assist a user in interacting with a smart home environment.
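In its simplest form, a behavior-consequent model is a lookup from emotional state to multimodal outputs, filtered by what the embodiment can actually realize. The table and modality names below are a hypothetical sketch, not the Kismet, Max, or NICA implementation.

```python
# Hypothetical behavior-consequent table: emotional state -> multimodal response.
BEHAVIOR_TABLE = {
    "happiness": {"face": "smile",        "voice": "bright", "action": "approach"},
    "fear":      {"face": "widened eyes", "voice": "quiet",  "action": "withdraw"},
    "sadness":   {"face": "lowered gaze", "voice": "soft",   "action": "pause"},
}

def emotion_to_behavior(emotion, available_modalities):
    """Select only the outputs the particular embodiment can realize."""
    profile = BEHAVIOR_TABLE.get(emotion, {})
    return {m: profile[m] for m in available_modalities if m in profile}

# A robot with no facial actuators falls back to voice and gross action.
print(emotion_to_behavior("happiness", ["voice", "action"]))
```

Filtering by available modalities mirrors the point made above: the same internal state is rendered differently depending on the embodiment.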

Emotion expression concerns the system's ability to express emotions in response to people's personality, emotions, moods, attitudes, and actions. For AAL systems, such an ability could improve their functionality, since many AAL systems are developed as personal assistants fulfilling two functions:
  1. facilitation of the completion of daily tasks [20];

  2. maintenance of social interaction and communication to prevent social isolation of people [21].


To make virtual companions or assistants not only look realistic but also behave naturally and human-like, one of the key characteristics is personality and the ability to exhibit human traits, including emotions [22]. In the AC field, such functionality is achieved mainly through affective conversational agents or affective robots; in AAL systems, it is implemented similarly, through virtual agents embodied in a system's interface or through robots. Thus, the ways emotions are expressed by AAL systems (or virtual agents) can be similar to those used by humans, i.e., facial expressions [23], voice and speech [24], and behavior and body posture [25]. In other cases, a reaction to human emotions can be expressed through changes in music, color, and lighting [2].

While all four functional blocks (emotion recognition, emotion generation, emotion mapping on rational behavior, and emotion expression), if implemented properly, make a system fully affective, a system can still perform well with only a subset of them. For example, a system that needs to adapt to a user's emotions can achieve its goals simply by recognizing emotions and expressing emotions in response. Such an approach is often used in intelligent tutoring systems [5].

AAL systems, in general, are complex in the sense that they need to support social interaction as well as carry out rational functions. This suggests that in AAL systems all four processes are needed: to detect emotion, to generate emotion, to map emotion on rational processes ("feel" emotion), and to express emotion.

While these components are already recognizable in existing systems, we argue that the requirements for affective abilities differ depending on the application area of AAL (as opposed to AAL systems as a whole). A rich affective model may be crucial in some cases, such as systems for older adults or those targeting long-term interaction and companionship, but for more narrowly scoped AAL systems the full set of identified functions is not necessary. To support this claim, we analyze four different areas where AAL can be applied, using the affective processes as a frame of reference, since they capture the main functions required of AAL systems.

In the next section, various AAL application domains are reviewed and analyzed against the requirements of AAL systems. The main characteristics of each application area are described and then analyzed in the context of AC processes. Such an analysis of basic affective processes in existing AAL applications should help in developing truly affective systems that support users not only physically but also mentally.

3 Affective Computing in AAL

AAL systems are aimed at satisfying the needs of those in care. In research on older adults [6], needs have been divided into four kinds: Errand, Life curation, Emotional health, and Comfort needs. Older adults are one of the major user groups of AAL; however, for the younger generation an additional need appears: the need for education, e.g., in the case of autistic children [26]. We have chosen the following application areas in which AAL systems should support specific user needs:
  • education, which supports life comfort in the long term by ensuring that basic life skills are learned;

  • social interaction, which supports emotional and comfort needs;

  • mobility, which supports errand needs as well as comfort, since the ability to move freely increases independence;

  • healthcare, which supports physical (life curation) needs.

3.1 Emotions as Part of AAL in Education

Emotions play a central role in human life, as they ensure our survival and support all activities, from the most basic to the most elaborate tasks, including education [27]. Studies have shown that emotions can influence various aspects of human behavior and cognitive processes, such as attention, long-term memorization, decision making, understanding, remembering, analyzing, reasoning, and the application of knowledge [28]. Emotions and cognition are complementary processes in learning situations in which learners have to draw conclusions, answer causal questions, identify problems, solve tasks, make knowledge-based comparisons, provide logical explanations, and demonstrate the use of acquired knowledge and transfer it to others [29]. A learner's emotional states can influence his/her problem-solving abilities, affect the willingness to engage in the learning process, and shape the motivation to learn. Positive emotions are considered to play an important role in the development of creativity and the ability to adapt during problem solving; conversely, negative emotions can hinder thinking processes and the abilities to concentrate, remember, memorize, solve tasks, reason, and draw conclusions [30].

Learning environments utilizing AC (i.e., monitoring a learner's emotions and/or responding to them [31]) can create different scenarios that help improve educational conditions. A system for emotion identification may detect signals of frustration during the learning process or a lack of understanding during the study of concepts and definitions [27]. With such early identification, educational staff can start individual psychological assistance for learners, avoiding future problems that interfere with the learning process and, even more, with their lives. Many examples of AC in educational settings already exist, e.g., AutoTutor [32], MathSpring [33], and MetaTutor [34]. However, most of them focus on typically developing individuals and provide knowledge in specific problem domains, e.g., physics, mathematics, or medicine. Such developments might still be applicable in cases when learners are not able to attend school, for example, children with movement disorders.

If we focus particularly on the AAL field and on children with special educational needs, including those with emotional, behavioral, sensory, physical, or mental disabilities, such as children with autism, then the previously mentioned affective learning environments (e.g., MetaTutor) developed for teaching specific problem domains are not applicable. This is because most children with autism have problems learning even the basic skills required for everyday life [26]. In general, autism is a communication disorder that requires early and continuous educational interventions on various levels, such as everyday social interaction, communication and reasoning skills, language, understanding norms of social behavior, and imagination [35]. Usually, these skills are relatively self-evident or easy to develop for other children. Basic social interaction skills are generally acquired from a very early age through ongoing experience with the world and interactions with the people around us; children with autism experience difficulties in this domain [36]. The social-emotional domain is strictly interrelated with cognitive and motor development, as it consists of the acquisition of capacities for personal relationships, emotional expression, motivation, and engagement [36]. From an affective perspective, children with autism often have difficulty recognizing emotions in others, sharing enjoyment, interests, or accomplishments, and interpreting facial cues to understand the emotional expressions of others [37]. Without this understanding, they remain oblivious to other people's intentions and emotions, and the lack of such important prior knowledge about the environment hinders children from making informed decisions [38].

In general, education is considered the most appropriate intervention for autism; however, planning the learning process for learners with autism is complex, because these learners differ significantly from most other learners in learning style, communication, and social skill development, and often exhibit challenging behaviors [39]. Such differences may strongly influence the educational process and often lead to exclusion from meaningful participation in learning activities and community life. Exclusion, in turn, further reduces learners' prospects to learn, grow, and develop [27]. Adapted educational systems that facilitate the acquisition of knowledge and skills through the use of AC are crucial if the objective is the successful development of a society in which equal opportunities are provided for all children, youth, and adults.

Analysis of existing learning environments targeting the AAL domain shows that most of the developed solutions are aimed specifically at assisting autistic children in communication and interaction with other people. For example, the mobile application CaptureMyEmotion [40] helps teach children to recognize their emotions at the moment of taking photos or recording videos or sounds. The captured emotions can later be discussed with a caregiver, thus helping children to learn about them. Another solution, called Emotional Advisor, has been proposed to help autistic children engage in meaningful conversations in which people are able to recognize their own or other people's emotions. Emotional Advisor is capable of teaching and guiding autistic people on how to respond appropriately based on how the other person is feeling or expressing emotions during verbal communication [38]. In [41], an educational system called Face3D has been proposed to help autistic children understand and reason about other people's (for example, relatives') mental and emotional states by use of virtual agents representing real people, their performance, emotions, and behavior.

A robotic solution called IROMEC (Interactive Robotic Social Mediators as Companions) has been developed to teach autistic children basic social interaction skills [36]. During play with IROMEC, children's specific strengths and needs are taken into consideration, and a wide range of objectives is covered regarding the development of different skills (sensory, communication and interaction, motor, cognitive, social, and emotional) [42]. The robot allows the use of different inputs (e.g., direct operation on a touchscreen, buttons, or remotely controlled switches), which can be changed according to the child's abilities, and provides personalized feedback according to the child's and therapist's preferences; therefore, IROMEC adapts itself and develops along with the child [36]. Regarding the emotional factors covered by the system, IROMEC can display a set of basic emotions such as happiness, sadness, fear, surprise, disgust, and anger. In addition, various IROMEC scenarios are aimed at improving the child's self-esteem and emotion regulation, and the robot enables teaching a range of basic emotions [42].

Even though many educational environments target autistic children, more general solutions for people with disabilities also exist (though not many). For example, the Ambient Intelligence Context-aware Affective Recommender Platform (AICARP) has been built to support learners' needs by personalizing and adapting the learning environment during language learning [43]. AICARP provides personalized feedback (e.g., playing different songs or sending signals using a light and a buzzer) when particular emotional states of the learner, i.e., relaxed or nervous, are detected [44].

Overall, it can be concluded that AAL systems aimed at helping children during the learning process will not be able to provide full-fledged support if emotional aspects are not considered during the development of the particular system. Emotions directly affect human cognitive abilities, including learning skills; therefore, non-intrusive detection of a learner's emotional states and appropriate response (or adaptation) to these emotions are capabilities that should be considered during the design of such AAL systems.

3.2 Emotions as Part of AAL in the Social Interaction

One of the goals of AAL is to ensure people's wellbeing, which includes not only satisfying physical needs or running errands but also making sure a person is, to put it simply, happy [45]. This is especially important when a person uses a system in the long term, e.g., service robots for older adults and artificial nannies for kids [46]. Moreover, research shows that people are more open to a system's suggestions if it uses emotional words [47]. This leads to the conclusion that a user would be more interested in engaging with a system if it fulfilled their emotional expectations, thus supporting the main functions of AAL as well. A system that satisfies emotional needs has clear advantages, and yet few AAL systems exist in this direction.

The most straightforward way to implement social and emotional behaviors in an AAL system is through artificial companions. Developing such systems presents multiple challenges, such as unmistakable expression of emotions, the ability to conduct high-level dialogue, and the abilities to learn, to adapt, to develop a personality, to use natural cues, and to develop social competencies [48, 49]. While this is not an easy task, research suggests that aside from the already mentioned benefits (satisfying emotional needs, reducing loneliness, and supporting "rational" tasks), companions also reduce stress and as a consequence can improve physical health [50]. However, for companions to achieve these goals, an important characteristic is believability, i.e., a user needs to perceive them as acting on their own; emotions are crucial for a companion to be believable [51].

Believable artificial companions have been researched in several areas, including social robotics and virtual assistants, both as chatbots and as characters providing other activities [52]. In an AAL environment, mobile robots provide more possibilities in terms of running errands or physically helping a user. Moreover, research shows that people tend to empathize and bond with a robotic companion more than with its on-screen simulation [53]; robotic pets can be involved in therapy and achieve effects similar to real pets [54], which cannot be done on a 2D screen.

In the field of AC, however, several frameworks and projects for virtual agents whose behavior is believable have been developed. One such development is WASABI, an architecture implemented as a virtual reality companion for playing a card game [18]. For this reason, this subsection reviews different types of assistants; it does not focus on the "practical" functions of the companions (such as running errands or reminding the user to take pills) but rather on the emotional abilities and behaviors that enable them to become emotionally believable.

In general, there are two types of companions: virtual and robotic [52]. Virtual assistants have no physical embodiment and may even lack a virtual body (e.g., a chatbot). Emotions in companions, however, are closely tied to bodily expression, which helps make them readable without misunderstanding [48], so a companion needs at least some kind of body, even if it is a virtual agent.

Robotic companions are researched in a field called social robotics [17]. Social robots are autonomous robots that can interact with a user in a socially believable manner [17]. They are grouped into those that use strong approaches and those that use weak approaches: the strong approach means that a robot evolves its abilities over time, whereas the weak approach means a robot merely imitates emotions [49]. In the context of companions, this classification can be extended to virtual assistants as well.

Several researchers have noted that for companions to adapt to a user, form a personality, and display believable behavior in the long term, they need to be able to learn [48, 55]. In [55], it is especially emphasized that future social robots will need to be personalized, which may require a sophisticated user model. This leads to the conclusion that weak approaches will be confined to narrow applications and that the development of strong approaches is currently needed.

The weak approach is often used in robots that are zoomorphic, i.e., resemble animals; some of these robots have no emotions at all, e.g., the robotic parrot RoboParrot, used for educational purposes and therapy [56], or the robotic seal PARO, also used for therapy [57]. Sony's robotic dog AIBO, on the other hand, can express six emotions: happiness, anger, fear, sadness, surprise, and dislike [46]; lately, Sony has moved towards strong approaches, claiming that the dog can form an emotional bond with its user [58].

The strong approach in zoomorphic agents was developed almost two decades ago in FLAME, a virtual agent [59]. FLAME is a fuzzy logic adaptive model of emotions that was implemented as a pet dog. A user can give feedback to the pet, thereby shaping its behavior and teaching it new rules. The authors claim that such learning adapts the pet to the user.
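The adaptive idea behind such a pet, adjusting emotional reactions from user feedback, can be sketched with a simple reinforcement-style update. This is an illustrative reduction with invented names and values, not the fuzzy-logic model from [59].

```python
# Illustrative sketch of feedback-driven adaptation (not FLAME's actual fuzzy rules):
# the pet keeps a desirability value per action and nudges it after user feedback.
desirability = {"bark": 0.0, "fetch": 0.0}
LEARNING_RATE = 0.5  # assumed step size

def feedback(action, reward):
    """reward: +1 (praise) or -1 (scold); moves desirability toward the reward."""
    desirability[action] += LEARNING_RATE * (reward - desirability[action])

def react(action):
    """The pet 'feels' joy about actions the user has rewarded, distress otherwise."""
    return "joy" if desirability[action] > 0 else "distress"

feedback("fetch", +1)   # user praises fetching
feedback("bark", -1)    # user scolds barking
print(react("fetch"), react("bark"))  # prints: joy distress
```

Repeated feedback gradually moves each desirability value toward the user's preference, which is the sense in which a strong approach "adapts the pet to the user".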

Another group of robots and agents comprises those that resemble neither animals nor humans. They rely on different forms of emotion expression [48]. A well-known example of such a social robot is Mung, which has a simple body and LED lights that allow it to express emotions through colors [60]. An interesting experiment investigated whether movement-based emotions (without, e.g., facial features) can be recognized [61]. The results showed that users still recognize the emotions with sufficient accuracy. Such studies are also important for humanoid robots, since implementing facial features is a complex task from both hardware and software perspectives, and for this reason other approaches are often chosen. One example is Nao, a widely used social robot (see, e.g., [62], where Nao is used to investigate interaction with users, or [63], where Nao interacts with autistic children), which relies on emotion expression through body movements and lights [64].

Humanoids, or robots with human-like expressions, are often used for emotion expression [52]. Not all of them, however, express emotions through complex channels, and not all of them use the strong approach. In [65], a human-like robot called Daryl is described. While it shows its emotions through verbal cues and movement, the approach used in Daryl cannot be considered strong, since (a) the robot does not learn anything and (b) it reacts to the onlooker's shirt color, with emotions assigned arbitrarily to colors.

One of the first anthropomorphic robots was Kismet. Although its author claimed that Kismet could in theory learn, in reality it did not [66]. On its basis, Leonardo, which uses the strong approach, was developed. Leonardo uses gestures and facial expressions for social communication and can learn about objects and form affective memories, which in turn underlie its likes and dislikes [67, 68]. The already mentioned WASABI has a human life-size body and sophisticated internal models that allow it to display mood and emotions and to build attitudes [18].

The strong approach is currently making its way into the world of social agents. One can see it in robots developed by industry, the most sophisticated and publicly known being Sophia [69], and also in recently published papers focused on methods that solve various learning issues. A model for learning emotional reactions from humans and the environment, similar to how humans learn, was developed in [70]. Similarly, in [71], a method for learning facial expressions from humans was implemented and tested. All this leads to the conclusion that research on companions has indeed developed rapidly since 2009, when social robotics was considered "very young" [68], and is on track toward long-term companions able to adapt to and learn from a user.

Currently, there are many advanced approaches in AC that allow modeling advanced user states but are not yet implemented in social robotics, mostly because robots face other challenges that slow the development of emotional models (such as mechanical limitations, materials used, etc.) [48]. However, given their practical functions and the stronger emotional attachment users form to robotic companions compared to virtual ones, social robots appear to be the future of artificial companionship.

3.3 Emotions as Part of AAL in Mobility

AAL applications are targeted at helping older adults or people with disabilities to live independently and comfortably in their living environment; however, living environments include not only the home but also various surroundings such as the neighborhood, shopping malls, and other public places [72]. The best way to help people with disabilities is to give them autonomy and independence [73]; therefore, mobility, which includes movement by private car, public transport, and wheelchair, as well as walking (by the person alone or using walking sticks or exoskeletons), has become one of the most important areas for AAL solutions [74]. For example, older adults prefer to live as independently as possible at home, but living independently involves many risks, such as falling, weakening bodies, memory loss, and wandering, that limit mobility and activities [75]. The main objective regarding people with disabilities is to provide them with access to information resources and the ability to move safely and autonomously in any environment. So far, many environments are not easily accessible to these people without a guide [72].

In parallel to the development of AAL systems for mobility, AC has also entered this domain. Emotional factors and affective states are crucial for enhanced safety and comfort [76], since essential driver abilities and attributes are affected by emotions, including perception and organization of memory, goal generation, evaluation, decision-making, strategic planning, focus and attention, motivation and performance, intentions, and communication [77]. Furthermore, the mobility of older adults can be affected by emotional factors, e.g., the fear of getting lost or hurt [78]. Current predictions show that the average age of the population is increasing and that within 50 years one-third of the population in regions like Japan, Europe, China, and North America will be over 60 years old [24]. Therefore, a great number of drivers in the future will be older adults.

Aggressiveness and anger are emotional states that strongly influence driving behavior and increase the risk of causing an accident [77]. As reported in the literature, aggressive or angry behaviors can arise quite easily in people with Alzheimer's disease or other dementias [79]. Furthermore, aging has been found to have negative effects on dual-task performance, and older drivers show declines in information processing and driving performance [24]. Even healthy people can experience a wide range of emotions while driving, e.g., stress (caused by rush-hour traffic congestion), confusion (caused by confusing road signs), nervousness or fear (e.g., in novice drivers), or sadness (caused by a negative event) [77]. While driving, these emotions can have very harmful effects on the road, or even cause death. For instance, anger can lead to sudden driving reactions, often resulting in car accidents. Sadness or an excess of joy can lead to a loss of attention [80]. Considering the great responsibility a driver has for his or her passengers, other road users, and him- or herself, as well as the fact that steering a car is an activity where even the smallest disturbance can have grave repercussions, keeping the driver in the emotional state best suited for driving is of enormous importance. Too low a level of activation (e.g., resulting from emotional states like sadness or fatigue) also leads to reduced attention and prolonged reaction time, and therefore lowers driving performance. In general, loss of mobility as a consequence of any illness puts people at an increased risk of social isolation and lower levels of physical activity [81].

By analyzing existing AAL solutions related to mobility and AC, it is possible to distinguish at least three application categories: intelligent solutions for walking, virtual environments for driving, and systems leading to affect-aware cars. Each of these categories is discussed below with examples.

Support during walking is of particular importance for older adults and for people with vision or movement impairments in general. Several developments (including robotic solutions and mobile applications) have been proposed to provide walking assistance or to motivate people to go out and be physically active. In [82], the Elderly-assistant & Walking-assistant robot is described, which is able to determine a user's intention and identify a walking mode. Its purpose is to provide physical support and walking assistance for older adults, meeting their needs for walking autonomy, friendliness, and security [83].

For example, iWalkActive has been developed [84] to offer a highly innovative, attractive, and open walker platform that greatly improves a user's mobility in an enjoyable and motivating way, while supporting physical activities that are either impossible or very difficult to perform with traditional non-motorized walkers, e.g., rollators. iWalkActive also offers community services such as recording, sharing, and rating walking routes, thus providing a possibility to stay socially connected.

The DALi (Devices for Assisted Living) project aimed at developing a semi-autonomous, intelligent mobility aid for older adults that supports navigation in crowded and unstructured environments, i.e., public urban places such as shopping malls, airports, and hospitals [85]. The project also takes into account the psychological and socio-emotional needs of older users, including self-consciousness, pride, and fear of embarrassment, because older adults are more focused on achieving emotional goals than younger adults. Thus, the project concentrates on emotional benefits achieved by improving a sense of safety and reducing the fear of falling. The use of DALi also leads to renewed confidence and contributes to a belief in mastery [85].

The Eyewalker project targets the development of an independent solution that can simply be clipped onto a rollator [86]. Eyewalker determines a user's emotional state through movement analysis, since gait itself provides relevant information about a person's affective state; for emotion detection, acceleration data are analyzed.
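The chapter does not detail Eyewalker's algorithm; as a rough illustration of how gait-based affect estimation can work, the sketch below (all function names and thresholds are hypothetical) derives simple magnitude and cadence features from a window of 3-axis accelerometer samples and applies a toy valence heuristic that a deployed system would replace with a classifier trained on labelled gait recordings.

```python
import math
from statistics import mean, pstdev

def gait_features(samples, rate_hz):
    """Simple per-window features from 3-axis accelerometer samples.

    samples: list of (x, y, z) tuples; rate_hz: sampling rate in Hz.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    m = mean(mags)
    sd = pstdev(mags)
    # Rough cadence estimate: count upward crossings of the mean magnitude,
    # one per step cycle, then normalize by the window duration.
    crossings = sum(1 for a, b in zip(mags, mags[1:]) if a < m <= b)
    steps_per_s = crossings / (len(mags) / rate_hz)
    return {"mean": m, "std": sd, "cadence": steps_per_s}

def estimate_valence(feat):
    """Toy heuristic (illustrative thresholds only): a slow,
    low-variability gait is taken to hint at low arousal/valence."""
    if feat["cadence"] < 1.2 and feat["std"] < 0.5:
        return "low"
    return "neutral-or-positive"
```

For instance, a two-second window of a regular, brisk synthetic gait yields a cadence above the "low" threshold and is classified as neutral-or-positive.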

Besides the physical solutions already mentioned, various mobile and software applications have been developed to facilitate physical activities, including walking, since regular walking benefits mental health, for example by reducing physical symptoms and anxiety associated with minor stress. Ambient Walk is a mobile application that explores how ambient sound generated by walking and meditative breathing, and the practice itself, affects a user's affective states [87]. Ambient Walk is designed to use audio-visual interaction as an interventional medium that provides novel means to foster mindfulness and relaxation. A similar mobile application has been proposed in [88]. This personalized mobile tool supports mindful walking to reduce stress and to target diseases such as diabetes or depression; it senses walking speed and provides haptic feedback.

The next category of AAL mobility solutions includes various virtual environments (e.g., driving simulators) aimed at analyzing emotions during the driving process [89]. For example, young adults with autism have difficulties in learning safe driving skills; furthermore, they demonstrate unsafe gaze patterns and higher levels of anxiety [90]. One such virtual reality-based environment is described in [91]. Operating as a driving simulator, it integrates an electroencephalogram (EEG) sensor, an eye tracker, and a physiological data acquisition system to recognize several affective states and the mental workload of autistic individuals while they perform driving tasks. Based on the acquired affective data, the system's interventions are adapted to keep users in a flow state. A similar solution, called Driving Simulator, has been designed to elicit driving-related emotions and states, i.e., panic, fear, frustration, anger, boredom, and sleepiness [92]. These affective states are detected by analyzing various physiological body signals (galvanic skin response, temperature, and heart rate). The Emotional Car simulator described in [80] has been developed to control and reduce the negative impact of emotions during driving. The simulator captures physiological data through EEG systems and recognizes affective states such as excitement, engagement, boredom, meditation, and frustration. Besides emotion recognition, the environment integrates a virtual agent that intervenes to reduce the emotional impact so that the driver can return to a neutral state.

Another area where mobility will improve in the near future is the use of autonomous cars. As such cars will not require attention from a driver, their use by older users or people with disabilities will be facilitated [74]. Therefore, researchers have been working on various solutions that can be integrated into a car to make it affect-aware. Extensive work has been done on car-voice integration, since speech is a powerful carrier of emotional information [93] and speech-controlled systems are already integrated into existing cars. Besides voice, emotion recognition can be carried out from other modalities, e.g., facial expressions and/or body posture [95], physiological signals [96], and even driving style [77]. However, the best way for a car to respond to the emotional state of a driver is through voice. An appropriate voice response can be provided in terms of the words used, the presentation of a message by stressing particular words, and speaking in an appropriate emotional tone [93]. Adapting the personality of an automated in-car assistant to the driver's mood can also be important: a badly synthesized voice, or an overly friendly, unvaryingly identical one, is likely to annoy the driver and soon lead to distraction. Therefore, matching the in-car voice with the driver's emotion is a beneficial adaptation strategy [77]. A solution called Voice User Help is implemented and described in [24]. It is a smart voice-operated system that uses natural language understanding and emotionally adaptive interfaces to assist drivers looking for vehicle information with minimal effect on their driving performance. Additionally, the system offers older adult drivers an opportunity to reduce the learning curve of new in-vehicle technologies and improve efficiency.
In parallel to the speech recognition engine, an emotion recognition engine estimates the current emotional state of the user (e.g., angry, annoyed, joyful, happy, confused, bored, neutral) based on prosodic cues. This information is then used by a dialog manager to modify its responses.
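The mapping from prosodic cues to dialog adaptation can be sketched as follows. This is a deliberately simplified illustration, not the actual Voice User Help design: the thresholds, feature set (pitch, energy, speech rate), and state labels are hypothetical.

```python
def classify_prosody(pitch_hz, energy, rate_wps):
    """Map coarse prosodic cues to a rough emotional-state label.
    Thresholds are illustrative; a real system would learn them,
    ideally per speaker."""
    if pitch_hz > 220 and energy > 0.7 and rate_wps > 3.0:
        return "angry"
    if pitch_hz > 200 and energy > 0.5:
        return "joyful"
    if pitch_hz < 140 and rate_wps < 2.0:
        return "bored"
    return "neutral"

# Delivery styles the dialog manager can choose between.
RESPONSE_STYLE = {
    "angry":   {"tone": "calm", "verbosity": "short"},
    "joyful":  {"tone": "upbeat", "verbosity": "normal"},
    "bored":   {"tone": "engaging", "verbosity": "short"},
    "neutral": {"tone": "neutral", "verbosity": "normal"},
}

def adapt_response(text, pitch_hz, energy, rate_wps):
    """Dialog-manager hook: attach a delivery style to the reply
    based on the estimated emotional state."""
    state = classify_prosody(pitch_hz, energy, rate_wps)
    return {"state": state, "text": text, **RESPONSE_STYLE[state]}
```

The point of the sketch is the separation of concerns: the recognizer produces only a coarse state label, and the dialog manager owns the policy that maps states to response styles.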

Another line of research on emotionally responsive cars is presented in [76]. The car can detect abnormal levels of stress and use this information to automatically adapt its interactions with the driver and increase individual and social awareness. Thus, the car is able to help the driver better manage stress through adaptive music, calming temperature, corrective headlights, an empathetic GPS voice, etc.

3.4 Emotions as Part of AAL in Healthcare

One of the primary applications of AAL systems is healthcare, so it is no surprise that a remarkable number of solutions exist. The overall benefits of using technology in healthcare include increased accessibility and cost-effectiveness, removal of human factors from treatment (providing, for instance, infinite patience and reduced variability), and tailoring communication to users' needs [97].

Healthcare applications are intended not only to take care of older adults or people with disabilities but also to monitor users with chronic health conditions [98, 99]. Moreover, healthcare in AAL systems concerns not only maintaining physical health but also nurturing mental health. For this reason, it is closely related to cyberpsychology, a research area that originated in psychology and focuses on treating and preventing mental illness through technology [97].

Specifically, some developments have been shown to increase the safety of older adults [100], improve the mental well-being of chronic patients [101], and enhance the quality of life of autistic children by accurately recognizing their emotions [102]. Healthcare applications also help to prevent habits that may lead to health problems in the future, such as overeating [103] and excessive drinking [104].

One can easily see that emotions play a crucial role in healthcare applications. Emotions are related to both the causes and the treatment of physiological and mental illnesses [97]; thus, influencing a person's emotional state can help prevent illnesses as well as treat existing health problems.

Researchers have found that emotional responses to various emotion elicitors can mitigate or enhance stress-related conditions. One example of physical disease prevention is the detection of the Cardiac Defense Response, a health risk not associated with dangerous stimuli. In [105], an algorithm has been designed for the automatic recognition of this condition; it can help a patient to self-regulate and notifies medical staff of the user's health state. Physical diseases are particularly closely related to emotions in older adults, and yet this is one of the groups most susceptible to depression; for this reason, a solution called SENTIENT has been developed [106]. It monitors a user with the aim of detecting negative or positive emotional valence in real time, thus enabling depression to be detected and treated at its early stages.

As mentioned before, detection of affective state can also help with treatment, which in the case of AAL systems can mean one of two things, i.e., there are two types of system intervention when problems occur: in one case, the system monitors a user and calls a caretaker if an abnormality is detected; in the other, the system intervenes itself [97]. In the case of life-threatening conditions, it is crucial for the system's communication with caretakers to be failsafe; for this reason, researchers seek solutions from both the abnormality detection and the messaging [107] perspectives. Abnormality detection is closely tied to how well a system can detect a user's emotional states, which is why several sensor data fusion solutions have been developed (see, e.g., [108], where a method to fuse image and sound has been proposed). The question of which sensors to use in AAL systems is still open, since they need to be unobtrusive on the one hand and informative enough on the other. For this reason, wearable sensors and mobile phones are often used (see, e.g., [109]).
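As an illustration of how multimodal affective estimates can be combined, the following sketch shows generic decision-level (late) fusion: each modality (e.g., face and audio) outputs class probabilities, which are averaged with reliability weights before taking the most likely label. This is not the specific method of [108]; the weights here are illustrative and would in practice be tuned on validation data.

```python
def fuse_modalities(face_probs, audio_probs, w_face=0.6, w_audio=0.4):
    """Decision-level fusion of two per-modality probability
    distributions over emotion labels.

    Returns (top_label, fused_distribution)."""
    labels = set(face_probs) | set(audio_probs)
    fused = {
        lab: w_face * face_probs.get(lab, 0.0)
             + w_audio * audio_probs.get(lab, 0.0)
        for lab in labels
    }
    # Renormalize so the fused scores form a distribution.
    total = sum(fused.values()) or 1.0
    fused = {lab: p / total for lab, p in fused.items()}
    return max(fused, key=fused.get), fused
```

With a confident visual estimate and a contradicting audio estimate, the higher-weighted modality dominates, e.g., `fuse_modalities({"happy": 0.7, "sad": 0.3}, {"happy": 0.4, "sad": 0.6})` selects "happy". Late fusion is attractive in AAL settings because each sensor can fail or drop out independently without retraining a joint model.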

A system can also intervene with the user directly and try to help in various ways. One such way is changing environmental conditions, e.g., switching on the light at night when distress is noticed [110]. In [111], depressive and manic states of patients suffering from bipolar disorder have been detected based on pitch and speaking speed during phone calls, which can then be used for treatment. Emotion detection and analysis can be used not only to detect a user's current emotional state but also to predict and automatically analyze the behavior of the humans involved [112].

While there are many systems that monitor and analyze users' states, the vast majority contact human caretakers once intervention is needed. A current trend in health applications is the move towards ubiquitous healthcare, i.e., monitoring patients in all environments [107]. One such novel approach is monitoring older adults via the community [113]. Another promising research direction is the personalization of treatment for similar diseases [114].

4 Analysis of Affective Requirements for AAL Application Domains

As described in Sect. 2, four basic affective processes (emotion recognition, affect calculation consisting of emotion generation and emotion mapping on cognition and behavior, and emotion expression) can be fulfilled by an affective component, unit, or system. The main goal of this section is to analyze and summarize the previously considered AAL systems in terms of these processes.

In general, the relationship between the previously analyzed AAL application domains and all four affective processes is represented in Table 1. If a specific affective process is of high importance and should be included in the development of AAL systems as a functional requirement, it is depicted in black. If not all solutions of the specific AAL application domain require the corresponding functionality, dark grey is used (medium importance). Light grey represents cases where the process is not essential to ensure the intended functionality of the AAL system (low importance).
Table 1. The relationship between affective processes and AAL application domains.

Education.

Emotion recognition and the creation of a user model are essential tasks for AAL systems that provide educational activities, since reasoning about a learner's emotions and adaptation of the system's behavior (including the system's own emotion expressions) are further required as feedback. The previously described IROMEC robot can be mentioned as an example: it carries out user modeling (modeling a child's abilities and emotions) and accordingly adapts itself and provides personalized feedback. In general, emotion recognition is carried out through various modalities. The most popular, of course, is the identification of facial expressions via cameras, as this is considered a non-intrusive method. However, intrusive approaches (for example, analysis of physiological data) are applied for emotion recognition as well.

Returning to the affective processes, in particular emotion generation: for AAL applications aimed at teaching specific knowledge or skills over a short period, it is not particularly important to actually "feel" or generate emotions based on the system's own emotion model. Imitation of emotions (e.g., displaying empathy towards learners) as predefined reactions to a learner's emotions, actions, and/or learning outcomes is sufficient to increase the system's (or pedagogical agent's) believability and gain learners' trust. Thus, there is no need to generate further changes in the system's rational processes and/or behavior according to emotions the system itself "feels".

Social Interaction.

A significant amount of effort has been dedicated to emotion recognition. A particular challenge for social robots is emotion identification outside the laboratory, i.e., "in the wild". While most social robots recognize a user's emotions from a camera, several use audio signals and body postures as well. In general, emotion recognition in an AAL environment does not differ from emotion recognition performed in other settings. A more interesting task is user modeling, which is crucial for adapting to a user and forming a long-term friendship. While user modeling is also a key factor in education and healthcare, for companions it is especially crucial to develop long-term affective models: structures describing a user, his or her interests, and the user's affective attitudes towards various things.

Emotion expression is also very important for companions in two respects: first, emotional expressions should be clearly understandable to the user; second, they should be socially appropriate. Expressivity in general is a much-researched topic that has resulted in the aforementioned robot Leonardo as well as other developments.

An affective ability that distinguishes social interaction from other areas is the necessity of calculating the system's internal affective states, including emotion generation and its mapping on cognition and behavior. Such an approach allows the system to remain believable over a long time, since emotional displays and the influence of emotion on behavior are key to affection formation and the illusion of life (i.e., the belief that the artificial companion is actually alive).


Mobility.

Regarding mobility and transportation in general, there are various options depending on a system's specifics. If the solution is aimed at supporting walking alone, there is no need for emotion integration; however, if some form of interaction is involved, the inclusion of emotions can become an essential task.

In the case of walking assistants, emotion recognition is not always required as a system capability, since most of these developments aim to promote positive emotional outcomes (e.g., reducing the fear of getting lost) through specific actions (for example, the DALi project). Most important would be the creation of a user profile according to which the system adapts its actions to target emotional benefits.

If the aim is long-term interaction and/or communication, as in the case of affect-sensitive cars, then recognition of the user's emotions and generation of appropriate emotional responses for an in-car assistant, via voice or facial expressions, may be required. However, the behavior and rational thinking of such systems should not be governed by emotions, since this could lead to negative outcomes, for example, car accidents or injuries.

Currently, a great amount of work is already devoted to emotion recognition from the driver's voice, since many cars use voice analysis and speech recognition services. The possibility of acquiring affective data is therefore in many cases already built into cars; only the analysis of the collected data in the context of emotions still needs to be applied. In this regard, the results of studies and experiments carried out with driving simulators can also be used to analyze drivers' emotions in particular situations, with the aim of creating corresponding driver profiles.


Healthcare.

When it comes to integrating affect into healthcare applications, the largest amount of research and practical study has been linked to affect recognition. This is a logical consequence of the field's specifics: accurate affective state recognition underlies the entire chain of procedures that healthcare applications carry out. However, emotion recognition is not the only focus of attention. User modeling, and possibly forecasting the user's emotional reactions and consequent behavior, is of utmost importance. Accurate and personalized user models would enable more precise detection of affective state and consequently lead to a more accurate evaluation of the user's health condition.

In healthcare, as in educational systems, the system does not need its own affective state; rather, it should be able to tailor its affective reactions to elicit a particular emotion from the user. As existing research shows, the system's reasoning and decision-making processes interact closely with the user's emotions, monitoring and forecasting them as well as adjusting the system's behavior accordingly.

Finally, some emotion expression capacities might be needed if the system itself performs interventions. In this case, the functions of a healthcare system merge with companionship functions, so the system might need the affective abilities vital for companions.

5 Conclusions

The chapter has discussed the need to integrate AC approaches and methods into AAL systems to improve their functionality in terms of rational decision making and to enhance social interaction with the people who use these systems. Four basic emotional processes forming a general affective system framework have been described, and various AAL application areas (i.e., education, social interaction, mobility, and healthcare) have been analyzed to identify the current capabilities of AAL systems in terms of the listed processes.

Overall, it can be concluded that truly affective AAL systems are not in the far future: separate parts of such systems already exist. Emotion detection is the most studied process in AC, and various methods and algorithms have been developed that can be applied in the development of AAL systems. The analyzed AAL areas are closely intertwined; one system can clearly have multiple functions.

Processes related to a system's emotion expression can be considered the second most developed direction, not only in AC but also in the field of AAL. Many researchers are working towards intelligent and expressive social agents that display believable behavior and can be used as personal assistants, teachers, companions, etc. In many cases, such agents represent the system itself and carry out most of the system's functions aimed at direct interaction with a user, thus improving the system's communicative abilities.

Research focused on affect generation, and consequently on endowing systems with the ability to "feel" emotions, already exists, although it is at the very beginning of its development. Currently, most AAL systems merely imitate the ability to "feel" emotions by using predefined emotion and/or behavior patterns as responses to the user's emotions. However, one direction where "feeling" real emotions is of primary interest is companionship and long-term social interaction. While in some areas, such as healthcare, a system's dependency on its own emotions can be unnecessary or even dangerous, in social interaction "emotional glitches", e.g., being offended, can make a companion more believable and lifelike. This can be identified as one of the future research directions.

Another trend closely related to the future of AAL is personalization: personal services and personal communication with a user. This means storing not only "rational" data, such as health condition, but also a user's affective data and attitudes, which makes user modeling techniques (including machine learning) a top research interest.


  1. Eichelberg, M., Rölker-Denker, L.: Action Aimed at Promoting Standards and Interoperability in the Field of AAL (Deliverable D5). AAL Joint Programme (2014)
  2. Castillo, J.C., et al.: Software architecture for smart emotion recognition and regulation of the ageing adult. Cogn. Comput. 8(2), 357–367 (2016)
  3. Takács, B., Hanák, D.: A mobile system for assisted living with ambient facial interfaces. Int. J. Comput. Sci. Inf. Syst. 2(2), 33–50 (2007)
  4. Picard, R.W.: Affective Computing. MIT Press, Cambridge (1997)
  5. Pudane, M., Lavendelis, E.: General guidelines for design of affective multi-agent systems. Appl. Comput. Syst. 22, 5–12 (2017)
  6. Lee, W., Norman, M.D.: Affective computing as complex systems science. Procedia Comput. Sci. 95, 18–23 (2016)
  7. Page, T.: Affective computing in the design of interactive systems. i-Manager's J. Mob. Appl. Technol. 2(2), 1–18 (2015)
  8. Carrie, C.: On Affective Computing: Past Imperfect, Future Impactful. Accessed 31 Aug 2018
  9. Hudlicka, E.: Computational analytical framework for affective modeling: towards guidelines for designing computational models of emotions. In: Handbook of Research on Synthesizing Human Emotion in Intelligent Systems and Robotics, pp. 1–62. IGI Global, USA (2015)
  10. Petrovica, S., Pudane, M.: Emotion modeling for simulation of affective student-tutor interaction: personality matching. Int. J. Educ. Inf. Technol. 10, 159–167 (2016)
  11. Chen, J., Chen, Z., Chi, Z., Fu, H.: Facial expression recognition in video with multiple feature fusion. IEEE Trans. Affect. Comput. 9(1), 38–50 (2018)
  12. Zen, G., Porzi, L., Sangineto, E., Ricci, E., Sebe, N.: Learning personalized models for facial expression analysis and gesture recognition. IEEE Trans. Multimedia 18(4), 775–788 (2016)
  13. Zacharatos, H., Gatzoulis, C., Chrysanthou, Y.L.: Automatic emotion recognition based on body movement analysis: a survey. IEEE Comput. Graph. Appl. 34(6), 35–45 (2014)
  14. Rojas, V., Ochoa, S.F., Hervás, R.: Monitoring moods in elderly people through voice processing. In: Pecchia, L., Chen, L.L., Nugent, C., Bravo, J. (eds.) IWAAL 2014. LNCS, vol. 8868, pp. 139–146. Springer, Cham (2014)
  15. Capineri, L.: Resistive sensors with smart textiles for wearable technology: from fabrication processes to integration with electronics. Procedia Eng. 87, 724–727 (2014)
  16. Lutfi, S.L., Fernández-Martínez, F., Lorenzo-Trueba, J., Barra-Chicote, R., Montero, J.M.: I feel you: the design and evaluation of a domotic affect-sensitive spoken conversational agent. Sens. (Basel, Switzerland) 13(8), 10519–10538 (2013)
  17. Breazeal, C.: Designing Sociable Robots. MIT Press, Cambridge (2002)
  18. Becker-Asano, C.: WASABI: Affect Simulation for Agents with Believable Interactivity. IOS Press, USA (2008)
  19. Carolis, B.D., Ferilli, S., Palestra, G., Carofiglio, V.: Towards an empathic social robot for ambient assisted living. In: Proceedings of the 2nd International Workshop on Emotion and Sentiment in Social and Expressive Media: Opportunities and Challenges for Emotion-Aware Multiagent Systems, pp. 19–34 (2015)
  20. Brumitt, B., Meyers, B., Krumm, J., Kern, A., Shafer, S.: EasyLiving: technologies for intelligent environments. In: Thomas, P., Gellersen, H.-W. (eds.) HUC 2000. LNCS, vol. 1927, pp. 12–29. Springer, Heidelberg (2000)
  21. Doyle, J., Skrba, Z., McDonnell, R., Arent, B.: Designing a touch screen communication device to support social interaction amongst older adults. In: Proceedings of the 24th BCS Interaction Specialist Group Conference, pp. 177–185. BCS Learning & Development Ltd., Swindon (2010)
  22. Wang, D., Subagdja, B., Kang, Y., Tan, A.H., Zhang, D.: Towards intelligent caring agents for aging-in-place: issues and challenges. In: Proceedings of the 2014 IEEE Symposium on Computational Intelligence for Human-Like Intelligence, pp. 1–8. IEEE Computer Society (2015)
  23. Tsiourti, C., Joly, E., Wings, C., Moussa, M.B., Wac, K.: Virtual assistive companion for older adults: field study and design implications. In: Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth), pp. 57–64 (2014)
  24. Alvarez, I., López-de-Ipiña, M.K., Gilbert, J.E.: The voice user help, a smart vehicle assistant for the elderly. In: Bravo, J., López-de-Ipiña, D., Moya, F. (eds.) UCAmI 2012. LNCS, vol. 7656, pp. 314–321. Springer, Heidelberg (2012)
  25. Hanke, S., Tsiourti, C., Sili, M., Christodoulou, E.: Embodied ambient intelligent systems. In: Ambient Intelligence and Smart Environments: Recent Advances in Ambient Assisted Living – Bridging Assistive Technologies, e-Health and Personalized Health Care, pp. 65–85. IOS Press, Netherlands (2015)
  26. Tang, Z., Guo, J., Miao, S., Acharya, S., Feng, J.: Ambient intelligence based context-aware assistive system to improve independence for people with autism spectrum disorder. In: Proceedings of the Hawaii International Conference on System Sciences, Koloa, HI, USA, pp. 3339–3348 (2016)
  27. Kadar, M., Ferreira, F., Calado, J., Artifice, A., Sarraipa, J., Jardim-Goncalves, R.: Affective computing to enhance emotional sustainability of students in dropout prevention. In: Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion, pp. 85–91. ACM Press, New York (2016)
  28. Schwarz, N.: Emotion, cognition, and decision making. J. Cogn. Emot. 14(4), 440–443 (2000)
  29. Lehman, B., D'Mello, S., Person, N.: The intricate dance between cognition and emotion during expert tutoring. In: Aleven, V., Kay, J., Mostow, J. (eds.) ITS 2010. LNCS, vol. 6095, pp. 1–10. Springer, Heidelberg (2010)
  30. Forbes-Riley, K., Rotaru, M., Litman, D.J.: The relative impact of student affect on performance models in a spoken dialogue tutoring system. User Model. User-Adap. Inter. 18(1–2), 11–43 (2008)
  31. Luneski, A., Bamidis, P.D., Hitoglou-Antoniadou, M.: Affective computing and medical informatics: state of the art in emotion-aware medical applications. Stud. Health Technol. Inf. 136, 517–522 (2008)
  32. D'Mello, S.K., Graesser, A.C.: AutoTutor and affective AutoTutor: learning by talking with cognitively and emotionally intelligent computers that talk back. ACM Trans. Interact. Intell. Syst. 2(4), 23:2–23:39 (2012)
  33. Woolf, B.P.: Building Intelligent Interactive Tutors: Student-Centered Strategies for Revolutionizing E-Learning. Morgan Kaufmann Publishers, San Francisco (2009)
  34. Taub, M., Azevedo, R., Bouchet, F., Khosravifar, B.: Can the use of cognitive and metacognitive self-regulated learning strategies be predicted by learners' levels of prior knowledge in hypermedia-learning environments? Comput. Hum. Behav. 39, 356–367 (2014)
  35. Konstantinidis, E.I., Luneski, A., Nikolaidou, M.M.: Using affective avatars and rich multimedia content for education of children with autism. In: Proceedings of the 2nd International Conference on Pervasive Technologies Related to Assistive Environments, pp. 1–6. ACM Press, New York (2009)
  36. Ferrari, E., Robins, B., Dautenhahn, K.: Therapeutic and educational objectives in robot assisted play for children with autism. In: Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 108–114. IEEE Computer Society (2009)
  37. Messinger, D.S., et al.: Affective computing, emotional development, and autism. In: The Oxford Handbook of Affective Computing, pp. 516–536. Oxford University Press (2015)
  38. Teoh, T.T., Lim, S.M., Cho, S.Y., Nguwi, Y.Y.: Emotional advisor to help children with autism in social communication. In: Proceedings of the 6th International Conference on Computer Sciences and Convergence Information Technology, Jeju, South Korea, pp. 278–283 (2011)
  39. Judy, M.V., Krishnakumar, U., Hari Narayanan, A.G.: Constructing a personalized e-learning system for students with autism based on soft semantic web technologies. In: Proceedings of the IEEE International Conference on Technology Enhanced Education, pp. 1–5. IEEE Computer Society (2012)
  40. 40.
    Leijdekkers, P., Gay, V., Frederick, W.: CaptureMyEmotion: a mobile app to improve emotion learning for autistic children using sensors. In: Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, pp. 381–384. IEEE Computer Society (2013)Google Scholar
  41. 41.
    Bertacchini, F., et al.: An emotional learning environment for subjects with autism spectrum disorder. In: Proceedings of International Conference on Interactive Collaborative Learning, pp. 653–659. IEEE Computer Society (2013)Google Scholar
  42. 42.
    Robins, B., et al.: Scenarios of robot assisted play for children with cognitive and physical disabilities. Interact. Stud. 13(2), 189–234 (2012)CrossRefGoogle Scholar
  43. 43.
    Santos, O.C., Saneiro, M., Rodriguez-Sanchez, M., Boticario, J.G., Uria-Rivas R., Salmeron-Majadas S.: The potential of ambient intelligence to deliver interactive context-aware affective educational support through recommendations. In: Proceedings of the Workshops at the 17th International Conference on Artificial Intelligence in Education, pp. 1–3. Springer, Switzerland (2015)Google Scholar
  44. 44.
    Santos, O.C., Saneiro, M., Boticario, J.G., Rodriguez-Sanchez, M.: Toward interactive context-aware affective educational recommendations in computer assisted language learning. New Rev. Hypermedia Multimed. 22(1–2), 27–57 (2016)CrossRefGoogle Scholar
  45. 45.
    Ivanova Goleva, R., et al.: AALaaS and ELEaaS platforms. In: Enhanced Living Environments: From Models to Technologies, pp. 207–234. The IET (2017)Google Scholar
  46. 46.
    Sharkey, A., Sharkey, N.: Children, the elderly, and interactive robots: anthropomorphism and deception in robot care and companionship. IEEE Robot. Autom. Mag. 18(1), 32–38 (2011)MathSciNetCrossRefGoogle Scholar
  47. 47.
    Hosseini, S.M.F., et al.: Both look and feel matter: essential factors for robotic companionship. In: Proceedings of 26th IEEE International Symposium on Robot and Human Interactive Communication, pp. 150–155. IEEE Computer Society (2017)Google Scholar
  48. 48.
    Paiva, A., Leite, I., Ribeiro, T.: Emotion modelling for social robots. In: The Oxford Handbook of Affective Computing, pp. 296–419. Oxford University Press (2015)Google Scholar
  49. 49.
    Weber, J.: Human-robot interaction. In: Handbook of Research on Computer Mediated Communication, pp. 855–867. IGI Global (2008)Google Scholar
  50. 50.
    Aminuddin, R., Sharkey, A., Levita, L.: Interaction with the Paro robot may reduce psychophysiological stress responses. In: ACM/IEEE International Conference on Human-Robot Interaction, pp. 593–594. IEEE Computer Society (2016)Google Scholar
  51. 51.
    Selvarajah, K., Richards, D.: The use of emotions to create believable agents in a virtual environment. In: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 13–20. ACM Press, New York (2005)Google Scholar
  52. 52.
    Hortensius, R., Hekele, F., Cross, E.S.: The perception of emotions in artificial agents. IEEE Trans. Cogn. Dev. Syst., 1 (2018)Google Scholar
  53. 53.
    Seo, S.H., Geiskkovitch, D., Nakane, M., King, C., Young, J.E.: Poor thing! would you feel sorry for a simulated robot? In: Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 125–132. ACM Press, New York (2015)Google Scholar
  54. 54.
    Robinson, H., Macdonald, B., Kerse, N., Broadbent, E.: The psychosocial effects of a companion robot: a randomized controlled trial. J. Am. Med. Dir. Assoc. 14(9), 661–667 (2013)CrossRefGoogle Scholar
  55. 55.
    Dautenhahn, K.: Robots we like to live with?! a developmental perspective on a personalized, life-long robot companion. In: Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, pp. 17–22. IEEE Computer Society (2004)Google Scholar
  56. 56.
    Shayan, A.M., Sarmadi, A., Pirastehzad, A., Moradi, H., Soleiman, P.: RoboParrot 2.0: a multi-purpose social robot. In: Proceedings of IEEE International Conference on Robotics and Mechatronics, pp. 422–427. IEEE Computer Society (2016)Google Scholar
  57. 57.
    PARO Robots, PARO Therapeutic Robot, Accessed 28 Aug 2018
  58. 58.
    Entertainment Robot “AIBO”. Accessed 29 Aug 2018
  59. 59.
    Seif El-Nasr, M., Yen, J., Ioerger, T.R.: FLAME – fuzzy logic adaptive model of emotions. Auton. Agents Multi-Agent Syst. 3(3), 219–257 (2000)CrossRefGoogle Scholar
  60. 60.
    Kim, E.H., Kwak, S.S., Han, J., Kwak, Y.K.: Evaluation of the expressions of robotic emotions of the emotional robot “Mung”. In: Proceedings of the 3rd International Conference on Ubiquitous Information Management and Communication, pp. 362–365. ACM Press, New York (2009)Google Scholar
  61. 61.
    Embgen, S., Luber, M., Becker-Asano, C., Ragni, M., Evers, V., Arras, K.O.: Robot-specific social cues in emotional body language. In: Proceedings of IEEE International Workshop on Robot and Human Interactive Communication, pp. 1019–1025. IEEE Computer Society (2012)Google Scholar
  62. 62.
    Rehm, M., Krogsager, A.: Negative affect in human robot interaction - Impoliteness in unexpected encounters with robots. In: Proceedings of IEEE International Workshop on Robot and Human Interactive Communication, pp. 45–50. IEEE Computer Society (2013)Google Scholar
  63. 63.
    Shamsuddin, S., Yussof, H., Ismail, L.I., Mohamed, S., Hanapiah, F.A., Zahari, N.I.: Humanoid robot NAO interacting with autistic children of moderately impaired intelligence to augment communication skills. Procedia Eng. 41, 1533–1538 (2012)CrossRefGoogle Scholar
  64. 64.
    SoftBank Robotics, Who is Nao? Accessed 29 Aug 2018
  65. 65.
    Hollinger, G.A., Georgiev, Y., Manfredi, A., Maxwell, B.A., Pezzementi, Z.A., Mitchell, B.: Design of a social mobile robot using emotion-based decision mechanisms. In: Proceedings of IEEE International Conference on Intelligent Robots and Systems, pp. 3093–3098. IEEE Computer Society (2006)Google Scholar
  66. 66.
    Breazeal, C.: Sociable Machines: Expressive Social Exchange Between Humans and Robots. MIT Press, Cambridge (2000)Google Scholar
  67. 67.
    Thomaz, A.L., Breazeal, C.: Asymmetric interpretations of positive and negative human feedback for a social learning agent. In: Proceedings of IEEE International Workshop on Robot and Human Interactive Communication, pp. 720–725. IEEE Computer Society (2007)Google Scholar
  68. 68.
    Breazeal, C.: Role of expressive behaviour for robots that learn from people. Philos. Trans. R. Soc. B: Biol. Sci. 364(1535), 3527–3538 (2009)CrossRefGoogle Scholar
  69. 69.
    Hanson Robotics, Sophia. Accessed 28 Aug 2018
  70. 70.
    Dang, T.L.Q., Jeong, S., Chong, N.Y.: Personalized robot emotion representation through retrieval of memories. In: Proceedings of the 3rd International Conference on Control, Automation and Robotics, pp. 65–70. IEEE Computer Society (2017)Google Scholar
  71. 71.
    Chen, C., Garrod, O.G.B., Zhan, J., Beskow, J., Schyns, P.G., Jack, R.E.: Reverse engineering psychologically valid facial expressions of emotion into social robots. In: Proceedings of 13th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 448–452. IEEE Computer Society (2018)Google Scholar
  72. 72.
    Li, R., Lu, B., McDonald-Maier, K.D.: Cognitive assisted living ambient system: a survey. Digit. Commun. Netw. 1(4), 229–252 (2015)CrossRefGoogle Scholar
  73. 73.
    Favela, J., Alamán, X.: Special theme: ambient assisted living for mobility: safety, well-being and inclusion. Pers. Ubiquitous Comput. 17, 1061–1602 (2013)CrossRefGoogle Scholar
  74. 74.
    Flórez-Revuelta, F., Chaaraoui, A.A.: Technologies and applications for active and assisted living. what’s next? In: Active and Assisted Living: Technologies and Applications, pp. 1–8. The IET (2016)Google Scholar
  75. 75.
    Chan, M., Campo, E., Bourennane, W., Bettahar, F., Charlon, Y.: Mobility behavior assessment using a smart-monitoring system to care for the elderly in a hospital environment. In: Proceedings of the 7th International Conference on Pervasive Technologies Related to Assistive Environments, Article No. 51. ACM Press, New York (2014)Google Scholar
  76. 76.
    Hernandez, J., McDuff, D., Benavides, X., Amores, J., Maes, P., Picard, R.: AutoEmotive: bringing empathy to the driving experience to manage stress. In: Proceedings of the Companion Publication on Designing Interactive Systems, pp. 53–56. ACM Press, New York (2014)Google Scholar
  77. 77.
    Eyben, F., Wöllmer, M., Poitschke, T., Schuller, B., Blaschke, C., Färber, B., Nguyen-Thien, N.: Emotion on the road–necessity, acceptance, and feasibility of affective computing in the car. Adv. Hum.-Comput. Interact. 2010, 1–17 (2010)CrossRefGoogle Scholar
  78. 78.
    GOAL Consortium: Deliverable D2.1. Profiles of Older People. Growing Older, staying mobile: Transport needs for an ageing society (GOAL). Accessed 29 Aug 2018
  79. 79.
    Alzheimer’s Association: Behaviors. How to respond when dementia causes unpredictable behaviors. Accessed 29 Aug 2018
  80. 80.
    Frasson, C., Brosseau, P.O., Tran, T.H.D.: Virtual environment for monitoring emotional behaviour in driving. In: Trausan-Matu, S., Boyer, K.E., Crosby, M., Panourgia, K. (eds.) ITS 2014. LNCS, vol. 8474, pp. 75–83. Springer, Cham (2014). Scholar
  81. 81.
    Shumway-Cook, A., Ciol, M.A., Yorkston, K.M., Hoffman, J.M., Chan, L.: Mobility limitations in the medicare population: prevalence and sociodemographic and clinical correlates. J. Am. Geriatr. Soc. 53(7), 1217–1221 (2005)CrossRefGoogle Scholar
  82. 82.
    Han, H., Zhang, X., Mu, X.: An approach for fuzzy control of elderly-assistant & walking-assistant robot. In: Proceeding of the 14th International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 263–267. IEEE Computer Society (2017)Google Scholar
  83. 83.
    Wei, X., Zhang, X., Yi, P.: Design of control system for elderly-assistant & walking-assistant robot based on fuzzy adaptive method. In: Proceedings of the 2012 IEEE International Conference on Mechatronics and Automation, pp. 2083–2087. IEEE Computer Society (2012)Google Scholar
  84. 84.
    Morandell, M., et al.: iWalkActive: an active walker for active people. In: Assistive Technology: From Research to Practice, pp. 216–221. IOS Press (2013)Google Scholar
  85. 85.
    Bright, A.K., Coventry, L.: Assistive technology for older adults: psychological and socio-emotional design requirements. In: Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments, Article No. 9. ACM Press, New York (2013)Google Scholar
  86. 86.
    Weiss, V., Bologna, G., Cloix, S., Hasler, D., Pun, T.: Walking behavior change detector for a “smart” walker. Procedia Comput. Sci. 39, 43–50 (2014)CrossRefGoogle Scholar
  87. 87.
    Chen, S., Bowers, J., Durrant, A.: “Ambient walk”: a mobile application for mindful walking with sonification of biophysical data. In: Proceedings of the 2015 British HCI Conference, pp. 315–315. ACM Press, New York (2015)Google Scholar
  88. 88.
    Pryss, R., Reichert, M., John, D., Frank, J., Schlee, W., Probst, T.: A personalized sensor support tool for the training of mindful walking. In: Proceeding of the 15th IEEE International Conference on Wearable and Implantable Body Sensor Networks, pp. 114–117. IEEE Computer Society (2018)Google Scholar
  89. 89.
    Jeon, M., Yim, J.-B., Walker, B.N.: An angry driver is not the same as a fearful driver: effects of specific negative emotions on risk perception, driving performance, and workload. In: Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 137–140. ACM Press, New York (2011)Google Scholar
  90. 90.
    Reimer, B., et al.: Brief report: examining driving behavior in young adults with high functioning autism spectrum disorders: a pilot study using a driving simulation paradigm. J. Autism Dev. Disord. 43(9), 2211–2217 (2013)CrossRefGoogle Scholar
  91. 91.
    Fan, J., Wade, J., Key, A., Warren, Z., Sarkar, N.: EEG-based affect and workload recognition in a virtual driving environment for ASD intervention. IEEE Trans. Biomed. Eng. 65(1), 43–51 (2018)CrossRefGoogle Scholar
  92. 92.
    Lisetti, C.L., Nasoz, F.: Affective intelligent car interfaces with emotion recognition. In: Proceedings of the 11th International Conference on Human Computer Interaction, pp. 1–10. ACM Press, New York (2005)Google Scholar
  93. 93.
    Jones, C.M., Jonsson, I.: Automatic recognition of affective cues in the speech of car drivers to allow appropriate responses. In: Proceedings of the 17th Australia conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future, pp. 1–10. Computer-Human Interaction Special Interest Group (2005)Google Scholar
  94. 94.
    Jonsson, I.M., Nass, C., Harris, H., Takayama, L.: Matching in-car voice with drivers state: impact on attitude and driving performance. In: Proceedings of the 3rd International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, pp. 173–181. University of Iowa (2005)Google Scholar
  95. 95.
    Caridakis, G.: Multimodal emotion recognition from expressive faces, body gestures and speech. In: Boukis, C., Pnevmatikakis, A., Polymenakos, L. (eds.) AIAI 2007. ITIFIP, vol. 247, pp. 375–388. Springer, Boston (2007). Scholar
  96. 96.
    Hönig, F., Wagner, J., Batliner, A., Nöth, E.: Classification of user states with physiological signals: on-line generic features vs. specialized. In: Proceedings of the 17th European Signal Processing Conference, pp. 2357–2316. The University of Strathclyde (2009)Google Scholar
  97. 97.
    Calvo, R., et al.: Cyberpsychology and affective computing. In: The Oxford Handbook of Affective Computing, pp. 547–558. Oxford University Press (2015)Google Scholar
  98. 98.
    Breazeal, C.: Social robots for health applications. In: Proceedings of 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 5368–5371. IEEE Computer Society (2011)Google Scholar
  99. 99.
    Memon, M., Wagner, S.R., Pedersen, C.F., Aysha Beevi, F.H., Hansen, F.O.: Ambient assisted living healthcare frameworks, platforms, standards, and quality attributes. Sens. (Basel, Switzerland) 14, 4312–4341 (2014)CrossRefGoogle Scholar
  100. 100.
    Coradeschi, S., et al.: GiraffPlus: combining social interaction and long term monitoring for promoting independent living. In: Proceedings of 2013 6th International Conference on Human System Interactions, pp. 578–585. IEEE Computer Society (2013)Google Scholar
  101. 101.
    Khalil, R.M., Al-Jumaily, A.: Machine learning based prediction of depression among type 2 diabetic patients. In: Proceedings of 12th International Conference on Intelligent Systems and Knowledge Engineering, pp. 1–5. IEEE Computer Society (2017)Google Scholar
  102. 102.
    Kashanian, H., Ajami, N.B., Deghati, M.: Communication with autistic people through wearable sensors and cloud technology. In: Proceedings of 2017 5th Iranian Joint Congress on Fuzzy and Intelligent Systems, pp. 139–143 (2017)Google Scholar
  103. 103.
    Carroll, E.A., et al.: Food and mood: Just-in-time support for emotional eating. In: Proceedings of 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 252–257. IEEE Computer Society (2013)Google Scholar
  104. 104.
    Shi, R., Chen, Z., Wang, H., Sun, P., Trull, T., Shang, Y.: MAAS - a mobile ambulatory assessment system for alcohol craving studies. In: Proceedings of International Computer Software and Applications Conference, pp. 282–287. IEEE Computer Society (2015)Google Scholar
  105. 105.
    Gravina, R., Fortino, G.: Automatic methods for the detection of accelerative cardiac defense response. IEEE Trans. Affect. Comput. 7(3), 286–298 (2016)CrossRefGoogle Scholar
  106. 106.
    Leon, E., Montejo, M., Dorronsoro, I.: Prospect of smart home-based detection of subclinical depressive disorders. In: Proceedings of 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops, pp. 452–457. IEEE Computer Society (2011)Google Scholar
  107. 107.
    Taleb, T., Bottazzi, D., Nasser, N.: A novel middleware solution to improve ubiquitous healthcare systems aided by affective information. IEEE Trans. Inf Technol. Biomed. 14(2), 335–349 (2010)CrossRefGoogle Scholar
  108. 108.
    Alamri, A.: Monitoring system for patients using multimedia for smart healthcare. IEEE Access 6, 23271–23276 (2018)CrossRefGoogle Scholar
  109. 109.
    Sano, A., Picard, R.W.: Stress recognition using wearable sensors and mobile phones. In: Proceedings of 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 671–676. IEEE Computer Society (2013)Google Scholar
  110. 110.
    Martin, S., et al.: Participatory research to design a novel telehealth system to support the night-time needs of people with dementia: NOCTURNAL. Int. J. Environ. Res. Public Health 10(12), 6764–6782 (2013)CrossRefGoogle Scholar
  111. 111.
    Grünerbl, A., et al.: Smart-phone based recognition of states and state changes in bipolar disorder patients. IEEE J. Biomed. Health Inform. 19(1), 140–148 (2015)CrossRefGoogle Scholar
  112. 112.
    Banos, O., et al.: Mining human behavior for health promotion, pp. 5062–5065 (2015)Google Scholar
  113. 113.
    Garcia, A.C., Vivacqua, A.S., Pi, N.S., Martí, L., López, J.M.: Crowd-based ambient assisted living to monitor the elderly’s health outdoors. IEEE Softw. 34, 53–57 (2017)CrossRefGoogle Scholar
  114. 114.
    Billis, A.S., et al.: A decision-support framework for promoting independent living and ageing well. IEEE J. Biomed. Health Inform. 19(1), 199–209 (2015)CrossRefGoogle Scholar

Copyright information

© The Author(s) 2019

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Riga Technical University, Riga, Latvia
  2. Istanbul Technical University, Istanbul, Turkey
