
Eye Fixation Location Recommendation in Advanced Driver Assistance System

  • Jiawei Xu
  • Kun Guo
  • Federica Menchinelli
  • Seop Hyeong Park (corresponding author)
Original Article

Abstract

Recent progress in visual attention modeling for mediated perception in advanced driver assistance systems (ADAS) has drawn the attention of both computer vision and human vision researchers. However, it remains debatable whether the actual driver’s eye fixation locations (EFLs) or the EFLs predicted by computational visual attention models (CVAMs) are more reliable for safe driving under real-life driving conditions. We analyzed the suitability of both types of EFL, the EFLs of human drivers and the EFLs predicted by CVAMs, using ten typical categories of natural driving video clips. In this analysis, EFLs confirmed by two experienced drivers served as the reference. We found that neither approach alone is suitable for safe driving and that the suitable EFL depends on the driving conditions. Based on this finding, we propose a novel strategy for recommending one of the EFLs to the driver in ADAS under the ten predefined real-life driving conditions. Specifically, we recommend one of three EFL modes depending on the driving condition: driver’s EFL only, CVAM’s EFL only, or interchangeable EFL, in which the driver’s EFL and the CVAM’s EFL are used interchangeably. Selecting between the two EFLs is a typical binary classification problem, so we apply support vector machines (SVMs) to solve it and provide a quantitative evaluation of the resulting classifiers. The performance evaluation of the proposed recommendation method indicates that it is potentially useful to ADAS for future safe driving.
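To make the classification step concrete, the sketch below shows how an SVM could be trained to recommend either the driver’s EFL or the CVAM’s EFL for a given driving clip. It is a minimal illustration, not the authors’ implementation: the feature names, dataset sizes, and random placeholder data are assumptions, and scikit-learn’s LIBSVM-backed SVC stands in for the LIBSVM toolkit.

```python
# Minimal sketch (assumptions, not the paper's code): train a binary SVM that,
# given per-clip features of a driving scene, recommends the driver's EFL
# (label 0) or the CVAM-predicted EFL (label 1) as the safer fixation source.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)

# Placeholder per-clip features (hypothetical): e.g. scene clutter, ego speed,
# traffic density, illumination. The real features depend on the paper's setup.
X = rng.random((200, 4))
# Placeholder labels standing in for the reference EFLs confirmed by the
# experienced drivers.
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # LIBSVM-backed binary classifier
clf.fit(X_tr, y_tr)

# Quantitative evaluation of the classifier on held-out clips.
p, r, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="binary")
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```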

Keywords

Perceptual blindness · Driver’s EFL · CVAM’s EFL · Safe driving


Copyright information

© The Korean Institute of Electrical Engineers 2019

Authors and Affiliations

  • Jiawei Xu (2)
  • Kun Guo (3)
  • Federica Menchinelli (3)
  • Seop Hyeong Park (1, corresponding author)
  1. School of Software, Hallym University, Chuncheon-si, Korea
  2. School of Computing, Newcastle University, Newcastle-upon-Tyne, UK
  3. School of Psychology, University of Lincoln, Lincoln, England, UK
