
Night Vision Pedestrian Warning in Intelligent Vehicles

Reference work entry in Transportation Technologies for Sustainability

Definition of the Subject and Its Importance

Today’s second generation of automotive night vision systems uses near-infrared (NIR) or far-infrared (FIR) cameras combined with integrated pedestrian detection. These systems direct the driver’s attention by showing all detected pedestrians on a display inside the car, which lets the driver analyze an imminent situation earlier and react in time to avert a potentially hazardous situation. However, the driver must still decide alone whether a detected pedestrian is a potential risk or not. The next generation of night vision systems therefore needs an additional warning component that signals only the relevant objects to the driver, that is, all objects on or near the road. Such a warning system must detect objects earlier and more reliably and must provide information about each object’s position. Moreover, the system has to know the road course in front...


Abbreviations

ACC:

Adaptive Cruise Control – A cruise control system which controls the vehicle speed in order to keep a set distance to the vehicle in front.

AdaBoost:

Adaptive Boosting – A supervised machine learning algorithm which combines several weak learners (very simple classifiers) to construct a strong classifier.
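The weak-learner/re-weighting loop can be sketched concretely. The following is a toy illustration, not the chapter's detector: it uses 1-D threshold "stumps" as weak learners, and all names and the exhaustive stump search are illustrative.

```python
import math

def train_adaboost(xs, ys, rounds=10):
    """AdaBoost with 1-D threshold 'stumps' as weak learners (toy sketch).
    xs: feature values, ys: labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                             # uniform sample weights
    ensemble = []                                 # (alpha, threshold, polarity)
    for _ in range(rounds):
        # exhaustively pick the stump with the lowest weighted error
        best = None
        for thr in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)                     # guard against log(0)
        alpha = 0.5 * math.log((1.0 - err) / err) # weak learner's vote weight
        ensemble.append((alpha, thr, pol))
        # re-weight: misclassified samples gain influence in the next round
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong classifier: sign of the weighted vote of all weak learners."""
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in ensemble)
    return 1 if score >= 0 else -1
```

Real pedestrian detectors apply the same idea with thousands of image features in place of the single scalar used here.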

Baseline:

The baseline of a stereo camera system is the distance between the origins of the two camera coordinate systems; it directly influences the accuracy of the depth analysis.

Bayes classifier:

A simple probabilistic classifier based on applying Bayes’ theorem from Bayesian statistics.
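A minimal sketch for a single Gaussian-distributed feature; the "object height" feature and the class names below are invented for illustration only.

```python
import math

def gaussian_pdf(x, mean, var):
    """Likelihood of x under a 1-D Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_classify(x, classes):
    """Pick the label maximizing posterior, proportional to prior x likelihood.
    classes: dict label -> (prior, mean, var) for a single 1-D feature."""
    return max(classes,
               key=lambda c: classes[c][0] * gaussian_pdf(x, *classes[c][1:]))
```

For example, with a hypothetical height feature where "pedestrian" has mean 1.8 m and "pole" has mean 3.0 m, an observation of 1.7 m is assigned to the pedestrian class.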

CAN:

Controller Area Network – A standard vehicle bus system which is designed to allow microcontrollers and devices to communicate with each other within a vehicle.

CFAR:

Constant False Alarm Rate – An adaptive algorithm used especially in radar systems to detect targets in noisy measurements.

CMOS:

Complementary Metal Oxide Semiconductor – A technology for constructing integrated circuits.

CTRV:

Constant Turn Rate and Velocity – A motion model which assumes that, during each sample time T, an object moves with constant velocity and constant turn rate.
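The CTRV assumption yields a closed-form state update; a sketch of the standard prediction equations (function and parameter names are illustrative):

```python
import math

def ctrv_step(x, y, v, heading, omega, T):
    """One CTRV prediction step.
    v: speed [m/s], heading [rad], omega: turn rate [rad/s], T: sample time [s]."""
    if abs(omega) < 1e-9:                         # straight-line limiting case
        return (x + v * T * math.cos(heading),
                y + v * T * math.sin(heading),
                v, heading)
    x_new = x + (v / omega) * (math.sin(heading + omega * T) - math.sin(heading))
    y_new = y + (v / omega) * (math.cos(heading) - math.cos(heading + omega * T))
    return (x_new, y_new, v, heading + omega * T)
```

Driving a quarter circle of radius 1 m (v = omega, T = 1 s) moves the object from the origin to roughly (1, 1) with a 90° heading change.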

Disparity:

A term in multiple-view geometry in computer vision describing the offset between the positions of the same scene point in two images taken from different viewpoints.
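Disparity, focal length, and baseline are tied together by the standard rectified-stereo triangulation Z = f * B / d; a minimal sketch (the function name is hypothetical):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo triangulation: Z = f * B / d, with the focal length f
    in pixels, the baseline B in meters, and the disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("a finite depth requires a positive disparity")
    return focal_px * baseline_m / disparity_px
```

For instance, an 800 px focal length, a 0.25 m baseline, and a 10 px disparity give a depth of 20 m; for the same disparity, halving the baseline halves the triangulated depth, which is why the baseline entry above matters for depth accuracy.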

Digital map:

The knowledge database used in automotive navigation systems which provides information about the road network.

Doppler:

An effect that describes the change in frequency of a wave for an observer moving relative to the source of the wave.
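For automotive radar, the Doppler effect translates into a simple relation between the measured frequency shift and the target's radial speed; because the wave travels to the target and back, the shift is doubled. A sketch (names are illustrative):

```python
C = 299_792_458.0  # speed of light [m/s]

def radial_speed_from_doppler(f_shift_hz, carrier_hz):
    """Two-way (radar) Doppler relation: v = f_d * c / (2 * f_c).
    Returns the target's radial speed in m/s; the sign follows the sign
    convention of f_shift_hz."""
    return f_shift_hz * C / (2.0 * carrier_hz)
```

At a 77 GHz carrier, a 10 m/s closing speed corresponds to a shift of roughly 5.1 kHz.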

EU:

European Union

Epipolar line:

The line of positions in one sensor coordinate system that correspond to a single position in a second sensor coordinate system; a term typically used in stereo camera systems.

False alarm rate:

A failure measure for detection systems describing the number or rate of detections that do not correspond to a real object.

FIR (far-infrared):

A device analyzing infrared electromagnetic radiation with wavelengths between 0.7 and 300 μm.

Flat world assumption:

A simple model in the field of computer vision which assumes that all objects perceived by a sensor (e.g., a camera) are positioned in a flat world.

GPS (global positioning system):

A space-based global navigation satellite system that provides reliable location and time information of an object on or near the Earth.

Haar wavelets:

A basic wavelet method that decomposes a signal into pairwise averages (approximation) and pairwise differences (detail) at successive scales.
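One decomposition level of the Haar transform can be sketched in a few lines; this is a toy 1-D version (detectors of the kind cited here typically use 2-D Haar-like image features built on the same average/difference idea):

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages (approximation)
    and pairwise differences (detail). The length must be even."""
    assert len(signal) % 2 == 0
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the original signal from one decomposition level."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

The transform is exactly invertible: each pair (average, difference) carries the same information as the original two samples.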

INS:

Inertial Navigation System – A system that uses motion sensors and rotational sensors installed on a platform to continuously calculate via dead reckoning the position, orientation, and velocity of a moving object without any external references.
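The dead-reckoning idea behind an INS can be sketched in 2-D; real systems integrate 3-D accelerations and rotation rates, so the following is a deliberately simplified illustration with invented names:

```python
import math

def dead_reckon(pose, samples, dt):
    """2-D dead reckoning: integrate per-sample speed [m/s] and yaw rate
    [rad/s] with time step dt, without any external reference.
    pose: (x, y, heading)."""
    x, y, heading = pose
    for speed, yaw_rate in samples:
        x += speed * dt * math.cos(heading)   # advance along current heading
        y += speed * dt * math.sin(heading)
        heading += yaw_rate * dt              # integrate the turn rate
    return x, y, heading
```

Because each step builds on the previous estimate, small sensor errors accumulate over time; this drift is why INS data are usually fused with absolute references such as GPS.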

MCL:

Monte Carlo Localization – A method in robotics to determine the position of a robot, given a map of its environment, based on Markov localization.

NIR:

Near-Infrared – A device analyzing the electromagnetic spectrum from approximately 800 nm to 2,500 nm.

NIRWARN:

Near-Infrared Warning

Particle Filter:

A probabilistic method in the field of computer vision to track objects using a Monte Carlo approach.
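One predict/update/resample cycle captures the essence of the method; the 1-D toy sketch below uses invented names and a Gaussian measurement model for illustration:

```python
import math
import random

def particle_filter_step(particles, weights, move, measurement, noise_std):
    """One cycle of a 1-D particle filter (toy sketch)."""
    # predict: propagate each particle by the motion 'move' plus process noise
    particles = [p + move + random.gauss(0.0, noise_std) for p in particles]
    # update: weight each particle by its Gaussian measurement likelihood
    weights = [math.exp(-(p - measurement) ** 2 / (2 * noise_std ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resample: draw a new particle set proportionally to the weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

Repeated cycles concentrate the particle cloud around the true state; the weighted sample set approximates the full posterior rather than a single point estimate, which is what makes the approach robust for visual tracking.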

PReVENT:

Preventive and Active Safety Applications – a project funded by the European Commission.

ProFusion:

A subproject of the PReVENT project.

RADAR:

Radio Detection and Ranging – An object-detection system that uses electromagnetic waves to determine the range, direction (and velocity) of a target.

ROI:

Region of interest – A small region selected for further processing.

ROC:

Receiver operating characteristic – A graphical plot to show the fraction of true positives vs. the fraction of false positives for a binary classifier system as its discrimination threshold is varied.
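The curve is obtained by sweeping the threshold over the classifier scores; a minimal sketch (function name is illustrative):

```python
def roc_points(scores, labels):
    """ROC points (FPR, TPR) obtained by sweeping the decision threshold over
    the distinct classifier scores. labels: 1 = positive, 0 = negative."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):   # strictest threshold first
        tp = sum(1 for s, l in zip(scores, labels) if s >= thr and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= thr and l == 0)
        points.append((fp / neg, tp / pos))
    return points
```

For a warning system, the curve makes the central trade-off explicit: lowering the threshold catches more pedestrians but raises the false alarm rate.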

Stereo camera system:

A camera system that observes the environment through two lenses in order to mimic human binocular vision and thus reconstruct depth information about the environment.

Two-class problem:

A typical classification problem with an object class and a non-object class.

UTM:

Universal Transverse Mercator – A grid-based two-dimensional Cartesian coordinate system, which specifies locations on the surface of the Earth.
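As an illustration, the longitudinal zone number follows directly from the longitude; this sketch ignores the special zone exceptions around Norway and Svalbard:

```python
def utm_zone(lon_deg):
    """UTM longitudinal zone number (1..60) for a longitude in degrees,
    using the standard 6-degree-wide zones (zone exceptions ignored)."""
    return int((lon_deg + 180.0) // 6) % 60 + 1
```

Stuttgart (about 9.18° E), for example, falls in zone 32.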

VGA:

Video Graphics Array – A standard VGA camera has a resolution of 640 × 480 pixels.


Acknowledgments

This research was supported by PROPEDES (Predictive Pedestrian Protection at Night), BMBF FKZ 13N9750-13N9754, Germany.

Author information

Correspondence to Matthias Serfling or Otto Löhlein.


Copyright information

© 2013 Springer Science+Business Media New York

Cite this entry

Serfling, M., Löhlein, O. (2013). Night Vision Pedestrian Warning in Intelligent Vehicles. In: Ehsani, M., Wang, FY., Brosch, G.L. (eds) Transportation Technologies for Sustainability. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5844-9_782
