Abstract
This paper provides an overview of the opportunities for human-computer interaction design in autonomous vehicles offered by studying drivers’ attention distribution. We examined attention distribution from novice to experienced drivers in a visual experiment that used stickers and a camera to both quantify and qualify attention while participants experienced a simulated autonomous-driving scenario. From the sticker distribution and eye-tracking data, an attention heat map could be determined, which can inform the design and structure of interfaces in autonomous driving vehicles.
1 Introduction and Related Works
1.1 Autonomous Cars
Autonomous cars and their relevant technologies have developed greatly in the past few years, and they are expected to occupy a large share of the market, with massive demand and a promising future [1, 2]. Current automobile systems range from those that assist with driving tasks (e.g. automatic transmission) to those that replace manual operations [3]. The National Highway Traffic Safety Administration (NHTSA) defines five degrees of car autonomy, which vary in how far Advanced Driver Assistance Systems (ADAS) are integrated and in the extent to which the autonomous system takes over the car [4]. Autonomous and semiautonomous vehicles are currently being developed by over 14 companies [32], most of which are still working on navigation in complex environments. This paper and its experiment therefore focus on cars capable of driving automatically for a certain distance, i.e. level 4 (L4), at which the current study was conducted.
Different levels of driver engagement are involved in each degree of autonomous driving [5]. The term “attention distribution” [6] is used to describe what drivers focus on and their engagement. In autonomous cars, drivers are relieved of the cognitive load of monitoring traffic and executing driving tasks [7] – a clear contrast with traditional driving practice – and the driver’s attention is free to stray toward secondary tasks instead of focusing on the primary task of driving. Thus, the higher the degree of automation, the greater the tendency for the driver’s attention to shift from driving to secondary tasks.
This change in attention distribution (AD) raises new requirements and challenges for the Human-Machine Interface (HMI) of autonomous driving from two perspectives:
1. How can the non-driving-related interface (for entertainment and recreation) be designed to function without pulling attention away from the driving interface? For example, as an extension of the Internet of Things, autonomous cars should be able to access the internet, communicate with smart devices as well as other cars and road infrastructure, and collect and process data for drivers [8]. The innovation of the mobile phone changed the world, and similar changes in autonomous cars would do so again. This pivotal development would require these functions to be well organized according to user experience (UX) at varying levels of autonomy.
2. How can the driving interface transmit information with minimal attentional input, increasing the efficiency of information delivery so as to decrease the frequency of driver distraction? A related study investigated automated steering systems and found that they improve driver satisfaction and performance but also increase the time needed to recover from a system shutdown, demonstrating an out-of-the-loop problem [33]. Autonomous cars relieve drivers of the most stressful operations of driving [9]; however, a sense of control should still exist despite the decrease in driving tasks. How the driving-task interface conveys its reliability and safety during driving is therefore another important factor.
1.2 Attention and Cognition
Attention has been described as the allocation of limited cognitive processing resources [10]. Generally speaking, attention can be thought of as a highlighter [11]. For example, when you read through a section of text, the highlighted region you are aiming at stands out, causing you to focus your interest on that area. In other words, with attention, some sensory inputs are processed faster and more deeply [12, 13].
It has been pointed out that the pattern of eye fixations (visual attention) a given observer produces is influenced by properties of the scene as well as by the goals and interests of the perceiver [14]. At present, a major categorical boundary in visual attention is the distinction between bottom-up (stimulus-driven) and top-down (goal-directed) attention. The point is that attentional processing may depend on the properties of an image (e.g. a striking color or a sudden movement) as well as on goals (e.g. hungry people looking for food).
1.3 Top-Down Attention Review
That the distribution of attention can be affected by the observer’s intentions was first noted by Helmholtz [15], and its perceptual consequences were studied thoroughly by Mertens [16]. Eriksen and his colleagues began a seminal series of studies toward a quantitative understanding of the top-down deployment of attention [17,18,19,20,21,22]. Further examinations of top-down attentional control were conducted by Posner and his colleagues [23,24,25,26,27].
1.4 Bottom-Up Attention Review
Meanwhile, evidence and research concerning captured attention (bottom-up attention) are more recent and can be divided into two major categories based on stimulus properties: feature singletons and abrupt visual onsets [12]. Feature singletons are identified easily in visual search (Neisser found that curved letters distinguish themselves among straight letters [28]). Likewise, a dynamic, colorful road view is much more attractive than a stationary in-vehicle information system (S-IVIS) in a simulated autonomous-driving experiment. We therefore had participants allocate their attention points after the driving video ended, and instructed them to focus on the S-IVIS.
1.5 Driver’s Attention
Psychologists have confirmed that top-down and bottom-up attention processes work together [12], as they do under natural conditions. In a typical experiment [29], participants were asked to search for and identify a red target in a display containing several white distractors, while on other trials a “cue” display (containing only a red color singleton) preceded the search display. The cue singleton involuntarily captured attention in a bottom-up manner and affected participants’ responses to upcoming targets, even though participants were told to ignore the cue display and deploy top-down attention as much as possible.
In the driving process, regardless of the level of autonomy, drivers’ attention is distributed in both top-down and bottom-up ways. Drivers respond to constantly changing road views (bottom-up) and naturally allocate attention to the driving-task interface when they must operate it, or merely monitor it in a highly autonomous vehicle (top-down). Meanwhile, the S-IVIS tends to capture drivers’ attention through attractive visual elements (a shining screen, sudden movements, and sounds). The driving process thus involves both kinds of attention and their complex combination.
Given this, we identified opportunities and research points associated with each kind of attention from the two perspectives mentioned above.
In perspective 1, when drivers actively focus on non-driving-related tasks, the proper spatial location for the secondary-task interface, and how to guide attention shifts quickly and accurately, are points for top-down attention research, since drivers take up secondary tasks of their own initiative. On the other side, visual elements on the non-driving-task interface should be limited in case bottom-up attention plays a negative role (e.g. unnecessary messages distract drivers, while weak reminders may cause drivers to miss important messages).
In perspective 2, which concerns the driving-task interface, two questions matter for research on attention distribution: how to express driving information (speed, car condition, etc.) efficiently, since top-down attention deployment inevitably declines as automation develops; and how an emergency alert can capture the driver’s bottom-up attention immediately.
1.6 Research Approach
Conducting a user evaluation with a real self-driving car is difficult. Previous studies have shown that alternatives such as a fake self-driving car [30, 34], a VR simulation [35, 36], or a video experiment [31, 37] can be appropriate for such research. This study therefore adopted a video experiment to extract both quantitative and qualitative data on top-down and bottom-up attention distribution during the driving process. Both the driver’s view and the co-driver’s view were taken into consideration, because there will be no driver in a highly autonomous car; the passenger experiment serves both as a control and as a reference for fully autonomous driving experiments. These data were used to frame an attention heat map for the human-machine interface (HMI) in autonomous cars. The resulting heat map may act as a guideline for functionality planning and interaction disciplines in interactive interface design for autonomous automobiles.
2 Experiment
In the video-based visual experiment, 12 adults were selected and invited into a mock-up of a typical driving environment, according to the sample selection criteria below:
(1) Ages ranged from 22 to 46, a band that covers about 90% of drivers.
(2) Gender was taken into account: we recruited 6 male and 6 female drivers.
(3) Experience with autopilot also affects participants’ feedback, so we chose drivers with at least a low level of assisted-driving experience.
(4) Both novice and experienced drivers took part. We noted that participants’ driving experience would affect their feedback: novices would not be expected to show signs of distraction in these studies, of course, because an observer was with them at all times and they might be expected to behave as they thought they should [31]. Experts, naturally, are more proficient in driving behavior and find it easier to immerse themselves in driving experiments.
2.1 Driving Video
For our experiments, the video shown to participants was adapted from a driving recording. We edited the recording to exclude the original driving interface and to include several common driving scenarios (reversing, turning, sharp turns, changing lanes and overtaking, accelerating and braking). We edited the video for two reasons: first, it should be as inclusive as possible so that the experiment can faithfully replicate the real scene; second, different driving operations lead to different attention distributions. The fluency of the edited video was also considered, as we wanted to provide a complete driving experience for the subjects. The driving video was also slightly blurred according to perspective.
2.2 Participants
During the test, while watching the driving video projected onto the window area of a paper driving-interface mock-up, participants were asked to evaluate and rank their active (goal-directed) attention as if playing the role of a real driver, and then to place attention stickers (10 in total) on the relevant areas. In this way, we measured top-down attention distribution through the stickers participants added. At the same time, a camera above the participant’s head monitored where the participant was looking; the camera data served as evidence for framing bottom-up attention. Thus, both stimulus-driven and goal-directed attention on the driving interface could be captured in this experiment.
In addition, we introduced the passenger’s perspective as a control experiment: participants were asked to play a passenger’s role in the co-driver’s seat and to allocate their attention stickers to the same interface. Data from these passenger-view trials provide fundamental support for both passengers’ and drivers’ attention in an L5 autonomous-driving scenario.
2.3 Potential Error Source
As mentioned above, conducting a user evaluation with a real self-driving car is difficult, so we applied a video experiment as an alternative. The lack of a real driving environment may prevent participants from fully immersing themselves in the driving scenario. The biggest source of error in this test is therefore the fidelity of the simulated driving environment.
To strengthen the driving simulation, we used a projector that could provide a real-size driving interface instead of a limited-size digital screen; this, however, prevented us from using an eye tracker to monitor changes in visual attention precisely. To address the problem, a camera that monitored participants’ eye movements was introduced. The inaccurate positioning inherent in this manual approach introduces experimental error.
Although participants were repeatedly told not to let the dynamic video attract them too strongly, the mere presence of dynamic images tends to affect the subjective evaluation and ranking performed after the video.
3 Data Collecting and Analysis
Unlike previous visual experiments in psychology that gather eye-tracking data while participants view static pictures, web pages, or documents, every operation in a driving environment is rather complex, even though we refined the visual elements related to driving attention in this experiment. It is therefore hard to say which attention process (top-down or bottom-up) dominates a certain attention shift during a single driving action, so we decided to analyze each of them separately.
3.1 Top-Down Attention (Goal-Driven)
For drivers’ goal-driven attention, we rely on the points (stickers) to determine how much attention is applied and where. From the extracted data, we present a heat map overlaid on the graphic driving interface (see Fig. 1).
The passengers’ top-down attention is likewise presented as a heat map, as a complement to the drivers’ heat map (see Fig. 2).
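The sticker aggregation behind these heat maps can be sketched as follows. This is a minimal illustration, not the study’s actual pipeline: the normalized (x, y) sticker format, the grid resolution, and the Gaussian smoothing width are all assumptions made for the sketch.

```python
# Sketch: aggregating attention-sticker positions into a heat map.
# Assumed data format: each sticker is an (x, y) position on the interface
# plane, normalized to [0, 1]; grid size and smoothing are illustrative.
import math

GRID_W, GRID_H = 40, 20   # discretization of the interface plane (cols, rows)
SIGMA = 2.0               # Gaussian spread, in grid cells (assumed)

def heat_map(stickers):
    """Accumulate sticker positions into a Gaussian-smoothed grid."""
    grid = [[0.0] * GRID_W for _ in range(GRID_H)]
    for x, y in stickers:                      # x, y in [0, 1]
        cx, cy = x * (GRID_W - 1), y * (GRID_H - 1)
        for r in range(GRID_H):
            for c in range(GRID_W):
                d2 = (c - cx) ** 2 + (r - cy) ** 2
                grid[r][c] += math.exp(-d2 / (2 * SIGMA ** 2))
    return grid

# Hypothetical example: ten stickers, mostly near the steering-wheel area.
stickers = [(0.30, 0.60)] * 6 + [(0.55, 0.75)] * 4
hm = heat_map(stickers)
peak = max(max(row) for row in hm)
```

Stacking the grids of all participants (summing cell by cell) would yield the pooled map shown in the figures; the smoothing keeps isolated stickers from appearing as single-pixel spikes.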
As the two graphs show, top-down attention distribution is clearly biased depending on whether participants assume driving tasks: the driver’s attention is relatively concentrated on several points that together span almost the whole driving interface, while the passenger’s attention tends to be evenly dispersed across the core of the driver’s interface, with part of it concentrated in the middle of the lower half.
This result can be explained by drivers’ habits: drivers controlled the car by focusing on several control areas, just as they usually drive. Once liberated from the driving task, their attention still rests on the driving interface, but without specific concentration points. Our recommendation for the driving interface is therefore that it should be as flat and decentralized as possible, since users without driving tasks no longer pay top-down attention to a few points but to the whole interface, and perception and cognition develop better on a flat interface free of visual interference. (“Flat” here does not mean 2D, but that nothing distinguishes itself and the interface stays simple.)
Both drivers’ and passengers’ attention shows a relationship to spatial location: the steering wheel and its vicinity occupy most of the driver’s attention, being exactly the center of the driver’s normal sight, while the most attractive area for passengers is the side control panel, toward which passengers naturally turn their heads when communicating with drivers.
The results show that spatial location influences attention distribution. However, we proposed above that the driving interface should avoid protruding parts. We therefore believe the driving interface should be curved where conditions permit, reducing the visual differences between parts of the interface caused by changes in spatial distance.
We also overlaid the two attention heat maps to see how the attention of drivers and passengers transforms and conflicts (see Fig. 3).
Clearly, the attention of passengers and drivers barely coincides across the whole driving interface except at the side control panel in its middle, which means drivers and passengers still tend to behave differently in an autonomous-driving scenario, even though they were thought to be almost alike under high-level automated driving.
In autonomous cars, users’ attention redistributes when their identity transforms from driver to passenger. Therefore, the areas to which both drivers and passengers pay primarily top-down attention should be the places where information is expressed. The control panel, the only conflict area, is being fully developed and used by Tesla, which is consistent with the phenomenon observed in our experiments.
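One way the overlap (“conflict”) between the driver and passenger maps could be quantified is with histogram intersection over the normalized grids. The measure, the threshold, and the tiny 2×4 toy grids below are all assumptions for illustration, not values from the study.

```python
# Sketch: quantifying where driver and passenger attention maps overlap.
# Grids, threshold, and the histogram-intersection measure are assumed.

def normalize(grid):
    """Scale a grid so its cells sum to 1 (a probability map)."""
    total = sum(sum(row) for row in grid) or 1.0
    return [[v / total for v in row] for row in grid]

def intersection(a, b):
    """Histogram intersection: 1.0 = identical maps, 0.0 = disjoint."""
    return sum(min(x, y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def conflict_cells(a, b, tau=0.05):
    """Cells where both maps exceed a threshold - the 'conflict' areas."""
    return [(r, c) for r, row in enumerate(a) for c, _ in enumerate(row)
            if a[r][c] > tau and b[r][c] > tau]

# Toy maps: driver attention peaks at the left (steering wheel),
# passenger attention at the right; they meet at one central cell,
# standing in for the side control panel.
driver    = normalize([[4, 3, 0, 0],
                       [3, 2, 0, 0]])
passenger = normalize([[0, 0, 3, 2],
                       [0, 2, 3, 2]])
```

With these toy maps, `conflict_cells` returns only the single shared cell, mirroring the finding that the side control panel is the sole region both groups attend to.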
3.2 Bottom-Up Attention (Stimulus-Driven)
We also compiled a heat map of the distribution of participants’ (drivers’) eye-gaze areas from the video recordings (see Fig. 4).
However, significant error remains because the statistics were compiled manually, so the bottom-up heat map has only limited reference value.
Unlike top-down attention, the distribution of bottom-up attention is more concentrated on the areas above both sides of the interface. We maintain that this is caused by the dynamic video: drivers remain aware of the road view, so both sides of the window, which contain the most traffic information, are highlighted.
This result reveals two findings for autonomous driving: first, under autonomous driving, drivers’ attention is affected by the road view through the bottom-up process; second, interface design should avoid highlights or distinct textures on both sides of the interface, and important messages should not be presented at the sides, because they can easily be overshadowed by the dynamic road view.
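The manual coding behind the bottom-up heat map amounts to tallying, frame by frame, which interface region the participant’s gaze fell on. A minimal sketch of that tally is below; the region names, frame rate, and label sequence are hypothetical, since the study coded frames by hand from the overhead-camera footage.

```python
# Sketch: tallying manually coded gaze regions from the camera video.
# Region names, frame rate, and the label list are hypothetical.
from collections import Counter

FRAME_RATE = 25  # assumed frames per second of the recording

def gaze_shares(frame_labels):
    """Fraction of viewing time spent on each interface region."""
    counts = Counter(frame_labels)
    total = sum(counts.values()) or 1
    return {region: n / total for region, n in counts.items()}

def dwell_seconds(frame_labels, region):
    """Total dwell time on one region, in seconds."""
    return frame_labels.count(region) / FRAME_RATE

# Toy coding consistent with the observation above: gaze drawn to the
# window sides by the dynamic road view.
frames = (["window_left"] * 40 + ["window_right"] * 35 +
          ["center_panel"] * 15 + ["steering_wheel"] * 10)
shares = gaze_shares(frames)
```

Binning these per-region shares onto the interface layout gives the bottom-up heat map; the coarseness of the hand-coded regions is one source of the limited precision noted above.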
4 Conclusion
In conclusion, we conducted a visual experiment to simulate an autonomous-driving scenario. Top-down attention became our main research object, owing to the characteristics of the experiment and of autonomous driving. Based on the visualized results, we presented interface suggestions covering the interface frame, function-layout discipline, and visual guidelines. We also developed a new research approach for analyzing attention distribution.
Although studies on the spatial characteristics of attention distribution are available, detailed functional-division studies of the driving interface are still needed. This paper treated the driving interface roughly as several parts; the actual interface may need finer distinctions, as different driving approaches involve different driving actions and the attention distribution will therefore differ slightly. Future research may apply experiments to specific areas for certain driving actions.
References
McKerracher, C., et al.: An integrated perspective on the future of mobility. McKinsey & Company and Bloomberg New Energy Finance (2016)
Borojeni, S.S., Chuang, L., Heuten, W., Boll, S.: Assisting drivers with ambient take-over requests in highly automated driving. In: Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 237–244. ACM, October 2016
Rödel, C., Stadler, S., Meschtscherjakov, A., Tscheligi, M.: Towards autonomous cars: the effect of autonomy levels on acceptance and user experience. In: Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 1–8. ACM, September 2014
National Highway Traffic Safety Administration. Preliminary statement of policy concerning automated vehicles, Washington, DC, pp. 1–14 (2013)
Taxonomy SAE. Definitions for terms related to on-road motor vehicle automated driving systems. Technical report, SAE International (2014)
Hughes, P.K., Cole, B.L.: What attracts attention when driving? Ergonomics 29(3), 377–391 (1986)
Eckoldt, K., Knobel, M., Hassenzahl, M., Schumann, J.: An experiential perspective on advanced driver assistance systems. IT-Information Technology Methoden und innovative Anwendungen der Informatik und Informationstechnik, 54(4), 165–171 (2012)
Coppola, R., Morisio, M.: Connected car: technologies, issues, future trends. ACM Comput. Surv. (CSUR) 49(3), 46 (2016)
Damböck, D., Weißgerber, T., Kienle, M., Bengler, K.: Evaluation of a contact analog head-up display for highly automated driving. In: 4th International Conference on Applied Human Factors and Ergonomics, San Francisco, USA (2012)
Anderson, J.R.: Cognitive Psychology and Its Implications, 6th edn, p. 519. Worth Publishers, New York (2004). ISBN 978-0-7167-0110-1
James, W.: The Principles of Psychology. Read Books Ltd., Redditch (2013)
Egeth, H.E., Yantis, S.: Visual attention: control, representation, and time course. Ann. Rev. Psychol. 48(1), 269–297 (1997)
Posner, M.I.: Attention: the mechanisms of consciousness. Proc. Nat. Acad. Sci. 91(16), 7398–7403 (1994)
Yarbus, A.L.: Eye movements during perception of complex objects. In: Eye Movements and Vision, pp. 171–211. Springer, Boston (1967). http://doi.org/10.1007/978-1-4899-5379-7_8
Helmholtz, H.: Treatise on physiological optics. III. In: Southall, J.P.C. (ed.) The Perceptions of Vision. Optical Society of America, New York (1925)
Mertens, J.J.: Influence of knowledge of target location upon the probability of observation of peripherally observable test flashes. JOSA 46(12), 1069–1070 (1956)
Eriksen, B.A., Eriksen, C.W.: Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept. Psychophys. 16, 143–149 (1974)
Eriksen, C.W., Collins, J.F.: Temporal course of selective attention. J. Exp. Psychol. 80, 254–261 (1969)
Eriksen, C.W., Hoffman, J.E.: Temporal and spatial characteristics of selective encoding from visual displays. Percept. Psychophys. 12, 201–204 (1972)
Eriksen, C.W., Hoffman, J.E.: The extent of processing noise elements during selective encoding from visual displays. Percept. Psychophys. 14, 155–160 (1973)
Eriksen, C.W., Murphy, T.D.: Movement of attentional focus across the visual field: a critical look at the evidence. Percept. Psychophys. 42, 299–305 (1987)
Eriksen, C.W., Rohrbaugh, J.W.: Some factors determining efficiency of selective attention. Am. J. Psychol. 83, 330–342 (1970)
Posner, M.I.: Orienting of attention. Q. J. Exp. Psychol. 32, 3–25 (1980)
Posner, M.I., Cohen, Y.: Components of visual orienting. In: Bouma, H., Bouwhuis, D.G. (eds.) Attention and Performance, 10th edn, pp. 531–555. Erlbaum, Hillsdale (1984)
Posner, M.I., Marin, O. (eds.): Attention and Performance, 11th edn. Erlbaum, Hillsdale (1985)
Posner, M.I., Rafal, R.D., Choate, L., Vaughan, J.: Inhibition of return: neural basis and function. Cognit. Neuropsychol. 2, 211–228 (1985)
Posner, M.I., Snyder, C.R.R., Davidson, B.J.: Attention and the detection of signals. J. Exp. Psychol. Gen. 10, 160–174 (1980)
Neisser, U.: Cognitive Psychology, p. 351. Appleton-Century-Crofts, New York (1967)
Folk, C.L., Remington, R., Johnston, J.C.: Involuntary covert orienting is contingent on attentional control settings. J. Exp. Psychol.: Hum. Percept. Perform. 18, 1030–1044 (1992)
Rothenbücher, D., Li, J., Sirkin, D., Mok, B., Ju, W.: Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles. In: Adjunct Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 44–49 (2015)
Underwood, G.: Visual attention and the transition from novice to advanced driver. Ergonomics 50(8), 1235–1249 (2007)
BMWNEWS. Autonomous cars to be in production by 2021, July 2016. https://news.bmw.co.uk/article/autonomous-cars-to-be-in-production-by-2021/. Accessed Apr 2017
Lagstrom, T., Lundgren, V. M.: AVIP-Autonomous vehicles interaction with pedestrians. Doctoral Dissertation, Chalmers University of Technology, Gothenborg (2015)
Debargha, D., Martens, M., Eggen, B., Terken, J.: The impact of vehicle appearance and vehicle behavior on pedestrian interaction with autonomous vehicles. In: Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 158–162, ACM (2017)
Marc-Philipp, B., Brenden, A.P., Klingegård, M., Habibovic, A., Bout, M.: SAV2P: exploring the impact of an interface for shared automated vehicles on pedestrians’ experience. In: Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, pp. 136–140. ACM (2017)
Chang, C.-M., Toda, K., Sakamoto, D., Igarashi, T.: Eyes on a car: an interface design for communication between an autonomous car and a pedestrian. In: Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 65–73. ACM, September 2017
Matthias, B., Witzlack, C., Krems, J.F.: Gap acceptance and time-to-arrival estimates as basis for informal communication between pedestrians and vehicles. In: Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 50–57. ACM (2017)
© 2019 Springer Nature Switzerland AG

Chen, W., Liu, W. (2019). HMI Design for Autonomous Cars: Investigating on Driver’s Attention Distribution. In: Krömker, H. (ed.) HCI in Mobility, Transport, and Automotive Systems. HCII 2019. Lecture Notes in Computer Science, vol. 11596. Springer, Cham. https://doi.org/10.1007/978-3-030-22666-4_7

Print ISBN: 978-3-030-22665-7. Online ISBN: 978-3-030-22666-4