
Drowsy Driver Posture, Facial, and Eye Monitoring Methods

  • Jixu Chen
  • Qiang Ji

Abstract

This chapter presents a real-time computer vision system for monitoring driver drowsiness. The system uses a single remotely mounted charge-coupled device (CCD) camera to acquire video of the driver's face. From this video, several computer vision algorithms simultaneously, nonintrusively, and in real time recognize the facial behaviors that closely relate to the driver's level of vigilance. These behaviors include rigid head movement (characterized by 3D face pose), nonrigid facial muscular movement (characterized by facial expressions), and eye gaze movement. The system was tested with different subjects in a simulated driving environment and was found to be robust, reliable, and accurate in characterizing these facial behaviors.
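The pipeline the abstract describes (one camera, per-frame face analysis feeding pose, expression, and gaze estimation) can be sketched as below. This is a minimal illustrative skeleton, not the authors' implementation: the estimator functions are placeholders standing in for the chapter's algorithms, and OpenCV's stock Haar face detector stands in for the system's face tracker.

```python
import cv2

# Placeholder estimators standing in for the chapter's pose, expression,
# and gaze algorithms; here they only return dummy values.
def estimate_head_pose(face_roi):
    return (0.0, 0.0, 0.0)          # (yaw, pitch, roll), rigid head movement

def classify_expression(face_roi):
    return "neutral"                 # nonrigid facial muscular movement

def estimate_gaze(face_roi):
    return (0.0, 0.0)               # eye gaze direction (horizontal, vertical)

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def monitor(camera_index=0):
    cap = cv2.VideoCapture(camera_index)   # single remotely mounted camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]
            pose = estimate_head_pose(roi)
            expression = classify_expression(roi)
            gaze = estimate_gaze(roi)
            print(pose, expression, gaze)  # per-frame cues for vigilance analysis
    cap.release()

if __name__ == "__main__":
    monitor()
```

In a full system, the per-frame outputs would be fused over time, for example with a dynamic Bayesian network as the chapter keywords suggest, to infer the driver's vigilance level rather than being reported frame by frame.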

Keywords

Facial Expression · Expression Recognition · Facial Expression Recognition · Facial Activity · Dynamic Bayesian Network


Copyright information

© Springer-Verlag London Ltd. 2012

Authors and Affiliations

  1. Visualization and Computer Vision Lab, GE Global Research Center, Niskayuna, USA
  2. Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, USA
