Transportation Technologies for Sustainability

2013 Edition
| Editors: Mehrdad Ehsani, Fei-Yue Wang, Gary L. Brosch

Night Vision Pedestrian Warning in Intelligent Vehicles

Reference work entry

Definition of the Subject and Its Importance

Today’s second generation of automotive night vision systems uses near-infrared (NIR) or far-infrared (FIR) cameras combined with integrated pedestrian detection. These systems direct the driver’s attention by showing all detected pedestrians on a display inside the car. This lets the driver analyze an imminent situation earlier and react appropriately to prevent a potentially hazardous situation, but the driver still has to judge on their own whether a detected pedestrian poses a risk. The next generation of night vision systems therefore needs an additional warning component that signals to the driver only the relevant objects, that is, all objects on or near the road. Such a warning system must detect objects earlier and more reliably, and it has to provide information about each object’s position. Moreover, the system has to know the road course in front of the vehicle to do a threat analysis of the...
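The warning criterion described above — alert the driver only for pedestrians on or near the road course, based on their position and a threat analysis — can be sketched as a minimal decision function. This is an illustrative sketch, not the system described in the entry: the names (`Pedestrian`, `should_warn`), the corridor half-width, and the time-to-collision threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    # Hypothetical detection output: position relative to the
    # predicted road course ahead of the vehicle.
    lateral_offset_m: float   # signed distance from the road centerline
    longitudinal_m: float     # distance ahead of the ego vehicle

def should_warn(ped: Pedestrian,
                ego_speed_mps: float,
                corridor_half_width_m: float = 2.5,   # assumed road corridor
                ttc_threshold_s: float = 4.0) -> bool:
    """Warn only for pedestrians on or near the predicted road course
    whose time-to-collision (TTC) falls below a threshold."""
    if abs(ped.lateral_offset_m) > corridor_half_width_m:
        # Off the road: still shown on the display, but no warning.
        return False
    if ego_speed_mps <= 0.0:
        return False
    ttc = ped.longitudinal_m / ego_speed_mps
    return ttc < ttc_threshold_s
```

In this sketch the display still shows every detection, while the warning fires only when both conditions (on the road course, low TTC) hold — mirroring the distinction the entry draws between today’s display-only systems and a dedicated warning component.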




This research was supported by PROPEDES (Predictive Pedestrian Protection at Night), BMBF FKZ 13N9750-13N9754, Germany.



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Machine Perception Systems and Reliability, Environment Perception, Daimler AG, Ulm, Germany