1 Introduction

1.1 In-car Interaction Change Caused by the Development of Self-driving Technology

With the popularization of L1 and L2 self-driving technology, driving systems are gradually taking on driving tasks [18], and the relationship among drivers, passengers, and the system has changed fundamentally [11]. This means that the driving system's role will gradually shift from a tool to a collaborative partner for the driver [7]. Predictably, as driving tasks are reduced there will be more communication between drivers and passengers in the car, so the design for back-seat passengers deserves more attention.

However, to date, designs for back-seat passengers are mainly based on media players and game consoles. Compared with other aspects of the car, many auto manufacturers offer only limited possibilities for personalization and adaptability in the back seat [3], because current in-car interaction design tries to avoid interference from the back to the front. However, this idea should not apply to future self-driving. We should work on eliminating the gap between the front-seat driver and the back-seat passenger and offer more opportunities for interaction between them.

1.2 In-car Interaction: Difficulties in Communication

Driving can be regarded as a social activity, since in most cases we do not drive alone: passengers may distract the driver's attention, or help the driver complete tasks [3]. Sharing the same space in a car can be considered a kind of social activity. However, the spatial arrangement of the car weakens mutual observation among the people in it. For example, the design of the seats creates visual barriers that prevent people from seeing each other's faces, which results in fragility and distrust in in-car social communication. The interior space of a car can be divided into the following areas: driver, front-seat passenger, and rear seat. Because of the spatial barrier, it is generally agreed that there is little communication between the front-seat and back-seat occupants [9]. Moreover, there is an obvious difference in their conversation experience during driving: it is difficult for people in the car to truly understand what the other side said, as a result of driving noise, the acoustic characteristics of the in-car space, and the facial orientation of passengers [2].

1.3 Difficulties in Emotional Understanding of Language Dialogue

Human communication is naturally influenced by emotions [14]. Darwin believed that the face is the most important medium of human emotional expression: facial expressions can convey all the predominant emotions, as well as every subtle change in them [16]. People are used to integrating facial and vocal information to manage emotional cognition, and lacking either of them may lead to misunderstandings [10]. However, the barrier between the front and back seats makes it impossible for people to judge others' emotions from their faces, which makes it easier for in-car interaction to develop in a negative direction.

Accordingly, we put forward the concept of AEIC (Augmented Emoji in Car). By displaying simple symbols that represent the expressions of the back-seat passengers in the central rear-view mirror, we can enhance the driver's comprehension of back-seat passengers' emotions without increasing cognitive load. By exploring the interactive scenes of in-car communication and studying their pain points, we aim to make up for the "neglected" back-seat scenarios in current in-car interaction research and to propose valuable design directions and targets. We built a prototype according to the design schemes, invited users to experiment with it, and drew conclusions from the data and interviews to support further design.

2 Related Work

In this section, we present an overview of related work on the driver, the front-seat passenger, and the rear seat. These cases introduce either how to enhance information exchange among people in a car or how to provide activities that improve passengers' riding experience:

nICE is an in-car game played by everyone in the car, including the driver, in accordance with their capabilities [17]. However, it is mainly designed to pass the time on long-distance travel rather than to enhance emotional communication.

HomeCar Organiser is a connected system that enables families to coordinate schedules, activities, and artifacts between the home and the car [8]. However, it is mainly used for sharing information among family members and eliminating the boundary between the home and the family car to create a seamless experience, rather than for enhancing emotional connections.

RiddleRide investigated the activities and technology usage in the rear seat as a social and physical space through a cultural probing study [9], but the interaction between the rear seat and the driver remains to be explored.

Backseat Games is an in-car augmented reality game designed to entertain children during long journeys [6]. Unlike this work, which mainly focuses on the passengers' experience rather than the driver's participation, we extended our research to the relationship between passengers and drivers.

Although these works are based on requirements similar to Emo-view's, such as improving passengers' experience or facilitating communication among people in a car, very few of them resolve the problems of emotional communication during riding. So far, little attention has been devoted to studying and testing the emotional communication between front and rear seats together with the driver's driving efficiency. We focus on the driver, testing and comparing prototypes to explore how to enhance in-car emotional communication and achieve good emotional interaction and driving experience on the premise of a reasonable distribution of the driver's attention.

3 Concept

Based on our design motivation and previous research, we believe that the interaction design for the car should be based on the following concepts:

  1.

    Our design allows drivers to complete our experiments while doing the main task without completely changing their visual focus, so the prototype needs to be at a similar level to the driver’s eyes. By locating the information in the driver’s line of sight, we can minimize his/her scanning distance from the road to the mirror [4]. On the basis of this concept, we believe that we can modify the existing equipment, rather than adding new pieces of equipment and interactions to reduce the driver’s visual burden.

  2.

    We consider simple Emoji, color, and voice interaction as main features to be more suitable and reliable for driving. Given that in-car interaction must adapt to a highly developed automation era, it is difficult for traditional user interfaces to produce a coherent user experience in this complex environment [12], so we should adopt multi-modal interaction as the main method. We should not simply add more information to the screen, but adopt a simpler and more effective way to enhance the information in the car.

  3.

    Taking safety into account, the key is how the driver's attention, which ought to be devoted to the driving task, is distributed once information display and interaction control are introduced [17]. Based on Wickens' research on multiple-resource theory [22] and the fixed-capacity hypothesis [24], a shortage of cognitive resources is more likely in a mobile environment [5]. Therefore, the design should present the least information that requires the driver's attention [15]. We suggest that by limiting the number of focus points, visual scanning time can be reduced to allow the driver to focus on the main (driving) task [21], so as to enhance in-car information output without increasing cognitive load.

4 Design

We consider the key to in-car interaction to be the way emotions are communicated and expressed, since improper understanding of emotions may lead to obstacles in language communication [19]. Using Emoji, especially positive ones, properly is beneficial to the formation of interpersonal relationships and cognitive understanding. They not only help participants express emotions and manage relationships, but also serve as words that help people understand information [20]. Based on the concept of AEIC, we modified the central rear-view mirror to serve as an output interface that enhances the Emoji information.

Our design, called Emo-view, detects the emotional state of the back-seat passenger through facial recognition and displays a corresponding Emoji on the left side of the central rear-view mirror. The name Emo-view combines Emoji with the driver's view and embodies the AEIC concept: the driver glances at the rear-view mirror as in usual driving, with the information enhanced rather than increased.

5 Prototype and Test

5.1 Introduction of Prototype

This prototype is assembled from Microduino's mCookie suite, and its display function is realized by an LED Strip and a Dot Matrix-Color module (see Fig. 1).

Fig. 1. Prototype of Emo-view and 6 emotions. (Color figure online)

The LED Strip is attached to the bottom-left side of the rear-view mirror, close to the driver. It turns green if the back-seat passenger is in a positive mood, yellow for a neutral mood, and red for a negative mood.

The Dot Matrix-Color is installed on the left side of the rear-view mirror and can display 6 emotions: calm, happy, excited, bored, lost, and angry.
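As an illustration of the display logic, the mapping from a detected emotion to the LED Strip color and the Dot Matrix symbol can be sketched as follows. This is a hypothetical Python sketch, not the actual Microduino firmware, and the grouping of the 6 emotions into the three mood categories is our assumption for illustration:

```python
# Hypothetical sketch of Emo-view's display logic (not the actual firmware).
# The grouping of the 6 emotions into three mood categories is an assumption.
EMOTION_TO_MOOD = {
    "happy": "positive",
    "excited": "positive",
    "calm": "neutral",
    "bored": "neutral",
    "lost": "negative",
    "angry": "negative",
}

MOOD_TO_LED_COLOR = {
    "positive": "green",
    "neutral": "yellow",
    "negative": "red",
}

def display_state(emotion):
    """Return (LED Strip color, Dot Matrix emotion) for a detected emotion."""
    mood = EMOTION_TO_MOOD[emotion]
    return MOOD_TO_LED_COLOR[mood], emotion
```

In this sketch, the Dot Matrix always shows the detected emotion itself, while the LED Strip shows only its valence, so the driver can read the mood at a glance and the detail only when needed.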

5.2 The Experimental Platform

For the evaluation of interaction design and user experience (UX), using laboratory equipment to dynamically capture and record the subjective performance of real users is particularly important and effective, which is the advantage of field and laboratory experiments [23]. However, it is unrealistic to field-test an early prototype, due to high cost and low efficiency, lack of conditional control, difficulties in making prototypes, high risk to participants, and so on. In experiments on automotive interaction technology especially, participants' safety issues are extraordinarily magnified. Therefore, such studies mainly rely on laboratory experiments [1].

We therefore built a driving simulator platform for product tests and user experience experiments in the laboratory (see Fig. 2). The platform is equipped with large screens and speakers to simulate different scenarios, and with placeholders for the steering wheel, touchscreen, and rear-view mirror to support multiple diversified tests.

Fig. 2. Driving simulator platform (Color figure online)

To simulate the main task of driving, it is also necessary to record the accuracy and response time of the users' driving tasks. The platform contains a driving task simulation system that implements the main task through a pedal and animations on the screen. If a red light (or any custom event) appears in these animations, the driver needs to step on the pedal, and the system records the reaction time. In this experiment the red light shows up randomly, and the driver needs to step on the pedal within 3 s after it appears.
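The main-task logging described above can be modeled as follows. This is a simplified sketch under our own assumptions (events and presses represented as plain timestamps), not the actual platform code:

```python
TIMEOUT = 3.0  # seconds allowed to press the pedal after a red light

def reaction_times(light_times, pedal_times):
    """Match each red-light event to the first pedal press within the
    3 s window; return a reaction time per event, or None for a miss."""
    results = []
    presses = sorted(pedal_times)
    for t_light in sorted(light_times):
        # first press at or after the light and within the timeout window
        hit = next((p for p in presses if t_light <= p <= t_light + TIMEOUT), None)
        if hit is None:
            results.append(None)           # missed event
        else:
            results.append(hit - t_light)  # reaction time in seconds
            presses.remove(hit)            # one press answers one event
    return results
```

For example, `reaction_times([0.0, 10.0], [0.5, 14.0])` yields `[0.5, None]`: the second event counts as a miss because the press came after the 3 s window.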

5.3 Introduction of Experiments

Research Through Design.

User-centered design aims to develop products that meet users' needs; the point is identifying those needs and providing solutions for them. We adopt the concept of research through design to explore users' needs, taking our design as the experimental subject.

Our experiment was conducted in Haidian District, Beijing, China. We ran 8 experimental groups, each consisting of 1 driver and 1 back-seat passenger: 16 participants in total, aged 19 to 30, including 10 females and 6 males.

Wizard of Oz (WoZ) is a technique for prototyping and experimenting dynamically with a system’s performance that uses a human in the design loop. It was originally developed by HCI researchers in the area of speech and natural language interfaces as a means to understand how to design systems before the underlying speech recognition or response generation systems were mature [13].

We use Wizard of Oz to understand the real-time characteristics of our in-car interaction design and capture responses during driving simulation. A human "wizard" played the driving-environment sounds, the back-seat passenger simulated his/her emotions vocally, and the driver participated in natural language dialogue for observation.

Each group of users had two free conversations of 5 min each. In every conversation, the back-seat passenger needed to act out 3 kinds of emotions according to the prompts, and the driver had to treat the simulated driving as the main task while communicating with the back-seat passenger. The difference between the two tests is that the first was run without Emo-view and the second with it.

6 Analysis

After the experiments, we analyzed the results through data analysis and video analysis, and conducted unstructured interviews with each group of subjects.

  1.

    First of all, we analyzed the overall response time of the main task with and without Emo-view. A single-factor analysis of variance showed that the response time of the main task with Emo-view (M = 180.89, SD = 55.51) was significantly shorter than that without Emo-view (M = 211.26, SD = 96.45), F(1,297) = 46.327, p < 0.001. Then, by observing the distribution of reaction times, we found that in most cases the reaction time of the main task was more stable in the early stage and fluctuated greatly in the later stage. We therefore compared the response times of the first 1/3 and the last 1/3 of the experiments: without Emo-view, the response time of the main task in the early stage (M = 182.47, SD = 58.52) was significantly shorter than in the later stage (M = 245.83, SD = 120.64), F(1,98) = 40.375, p < 0.001; with Emo-view, the response time in the early stage (M = 184.36, SD = 52.13) showed no significant difference (F(1,98) = 0.073, p = 0.787) from the later stage (M = 183.12, SD = 61.12). From these results we conclude that a fatigue effect appears as the experiment proceeds, leading to longer reaction times without Emo-view; in contrast, with the help of Emo-view the driver's cognitive processing of the back-seat passenger's emotion is easier, which helps relieve the fatigue effect.

  2.

    We took the driver as the main object of observation and recorded each group's performance on a timeline, including changes in the back-seat passenger's emotion, speaking, the points in time when the driver looked at Emo-view, and the reaction time used to complete the main task. Here are 3 typical timelines (Fig. 3).

    Fig. 3. Timeline of the experimental process

    We analyzed the timelines of each group, focusing on when drivers looked at Emo-view:

    a.

      Drivers' reaction time on the driving task showed no obvious relationship with mood changes, the presence of Emo-view, or dialogue.

    b.

      When drivers looked at Emo-view, 67.5% of these glances occurred while they were speaking and 27.3% while listening. It can be concluded that drivers need to pay more attention to the back-seat passenger's emotion when expressing their own views.

    c.

      78.8% of the cases in which drivers looked at Emo-view while talking occurred at the beginning (32.7%) or the end (46.1%) of a topic. We conclude that drivers need to confirm the effect of their conversation by observing the passenger's emotion when they start and end a topic.

    d.

      61.9% of the cases in which drivers looked at Emo-view while listening occurred in the middle period. It can be assumed that in most cases drivers feel they need to rely on Emo-view to judge the back-seat passenger's emotion while listening.

    e.

      Only 36.3% of drivers' glances at Emo-view occurred while Emo-view was switching emotions; that is, drivers did not often notice the switching itself, so it can be inferred that Emo-view does not cause excessive cognitive load.

  3.

    Our interviews focused on the cognitive differences between drivers and back-seat passengers regarding the emotions in the dialogue, and on the driver's actual user experience. Each group was interviewed for about 3–5 min after the experiment. The results can be summarized as follows:

    a.

      With the help of Emo-view, the driver's perception of the back-seat passenger's emotion became more accurate and reliable, and the emotional cognition of both sides became more consistent.

    b.

      Drivers felt that with the help of Emo-view they could change the topic according to the back-seat passenger's emotional state, steering the conversation toward a more positive one.

    c.

      Emo-view helps drivers understand the back-seat passenger more easily and reduces their distraction cost. Some drivers said that without Emo-view, considering the back-seat passenger's emotion would affect their driving task.

    d.

      Drivers generally believed that Emo-view would not cause too much psychological burden: they only need to look at it when necessary and can ignore it otherwise.
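The single-factor (one-way) analysis of variance used in point 1 of the analysis can be reproduced with a short pure-Python sketch. The numbers in the example are synthetic; the reported F and p values come from the actual experimental data:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic of a one-way (single-factor) ANOVA:
    between-group mean square over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)    # number of groups
    n = len(all_vals)  # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)  # df_between = k - 1
    ms_within = ss_within / (n - k)    # df_within = n - k
    return ms_between / ms_within
```

The resulting F value is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p value, e.g. F(1, 297) for the two response-time samples.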

The analysis showed that Emo-view neither disrupted the driver's attention allocation nor increased the driver's cognitive load. Users generally believed that Emo-view is helpful for in-car communication, and drivers relied on Emo-view more when expressing themselves.
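The phase percentages in point 2 of the analysis (beginning, middle, and end of an utterance) can be computed by classifying each glance timestamp against the utterance interval split into thirds. The helper below is a hypothetical sketch, not our actual analysis script:

```python
def glance_phase(glance_t, utt_start, utt_end):
    """Classify a glance at Emo-view into the beginning, middle,
    or end third of the utterance interval [utt_start, utt_end]."""
    frac = (glance_t - utt_start) / (utt_end - utt_start)
    if frac < 1 / 3:
        return "beginning"
    if frac < 2 / 3:
        return "middle"
    return "end"

def phase_percentages(glances, utt_start, utt_end):
    """Percentage of glances falling in each third of the utterance."""
    phases = [glance_phase(t, utt_start, utt_end) for t in glances]
    return {p: 100 * phases.count(p) / len(phases)
            for p in ("beginning", "middle", "end")}
```

Applied to the glance timestamps logged per utterance, such a helper yields the per-phase shares reported above.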

7 Discussion and Future Work

The tests with real users show that, without changing the spatial arrangement of the car, that is, with the driver still unable to face the back-seat passenger directly, Emo-view can be positive and effective. We considered comprehensively whether it helps the front and back seats understand each other better, in which cases the driver needs it more, and whether it affects driving safety.

First of all, the data lead us to conclude that Emo-view does not occupy the driver's attention allocation, nor does it cause cognitive load. Since in-car interaction design has always been accompanied by severe safety concerns, we condensed the driver's perception of emotions into a simple Emoji, based on attention span theory, and let the interaction happen in the driver's most comfortable parallel line of sight, the central rear-view mirror. The driver thus needs no additional interactive operation when looking at Emo-view; in normal driving situations he/she can glance at it and understand the emotion it expresses at the same time.

Secondly, on the basis of video analysis and interviews, we concluded that drivers need Emo-view. They generally believed that with its help they could communicate better with the back-seat passenger and steer the conversation toward positive topics, whereas without Emo-view some drivers might neglect the driving task because of deep thinking about the passenger's emotion. We also found that Emo-view can serve as a reference for drivers' in-car conversation: they subconsciously look at Emo-view to check whether they have misspoken when they want to start or end a topic.

However, Emo-view does not take into account the relationship between the interlocutors. We think the responses should differ for parents, friends, couples, and strangers, since not everyone wants the driver to observe their emotions, and some people prefer to show only positive emotions. Therefore, future work will focus on the following aspects:

  a.

    To make Emo-view applicable to people with different relationships, we will set up several Emo-view modes and invite participants with different relationships to experiments, in order to evaluate how interpersonal relationships should inform the settings of Emo-view modes.

  b.

    Back-seat passengers should be able to adjust Emo-view themselves, for example to avoid displaying emotions on certain topics. After several such operations, Emo-view could gradually learn the expression habits of back-seat passengers and be customized to any back-seat user.

  c.

    Particular emphasis should be placed on parent-child users. The car is the most frequent means of travel for children, and when the front and back seats are in a parent-child relationship, the driver pays much more attention to the back-seat passenger than to an ordinary one. Thus, we believe that parent-child users need Emo-view more than ordinary users do.

In view of the encouraging results of the laboratory experiments presented here, we will continue testing in a higher-fidelity environment, as well as in a car for real-world testing. Although laboratory testing has many advantages and is safer, we believe that the real driving environment can reveal new findings about the practical effects of Emo-view.

8 Implications for In-car Interaction

Our research explores and expands the possibilities of interaction between the front and back seats. We believe the primary issue of in-car interaction is how to make communication between the front and back seats smoother. To improve the situation, we conducted a survey and located the main problem in the emotional understanding among the passengers in the car. The experiments demonstrate that Emo-view is effective and needed, and that it does not distract but rather optimizes the driver's concentration on driving tasks. Our main design findings are as follows:

First, outputting the back-seat passenger's emotional expression based on the concept of AEIC can indeed lead the communication between the front and back seats to a positive state. Drivers become well accustomed to using Emo-view as a reference for conversation when expressing their views.

Second, minimizing the visual focal points and the amount of information, based on attention span theory, can almost eliminate the driver's extra cognitive allocation beyond driving tasks. The data show that drivers' observation of Emo-view can actually lighten their cognitive burden of tracking back-seat emotions.

Third, displaying information by modifying existing equipment can minimize the driver's scanning distance, reducing operational difficulty and the driver's visual burden.

Last, for in-car interaction experiments, laboratory testing is a better choice. An indoor driving simulator platform can not only guarantee participants' safety, but also allow researchers to customize many scenarios and variable conditions and to observe subjects' interactive data in more detail.

9 Conclusion

To adapt to the development of self-driving, we consider that the car, as a social space, will see notable changes in in-car interaction. We therefore designed Emo-view to assist communication between the front and back seats. With its help, the driver can easily observe the back-seat passenger's emotional state in the rear-view mirror and have a more active dialogue with him/her. In this paper, we used Arduino to build a simple prototype with 6 emotional characteristics, and built a driving simulator platform to run tests by means of Wizard of Oz.

The good results of Emo-view make us confident in the concept of AEIC. From the results obtained on the indoor driving simulator platform, we conclude that by attending to both cognitive load theory and human-machine interaction design theory, we can enhance in-car information output to drivers without increasing cognitive load, ensuring safe driving while improving the dialogue experience between the front and back seats.

Future work will focus on the comprehension of emotions and interaction between the front and back seats. We will not only study the applicability of Emo-view in various other situations, but also explore a range of more diverse interaction possibilities to gradually fill the gap in in-car interaction design for the front and back seats.