
Vehicle Ego-Localization by Matching In-Vehicle Camera Images to an Aerial Image

  • Masafumi Noda
  • Tomokazu Takahashi
  • Daisuke Deguchi
  • Ichiro Ide
  • Hiroshi Murase
  • Yoshiko Kojima
  • Takashi Naito
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6469)

Abstract

Obtaining an accurate vehicle position is important for intelligent vehicles in supporting driver safety and comfort. This paper proposes an accurate ego-localization method that matches in-vehicle camera images to an aerial image. There are two major problems in achieving an accurate matching: (1) image differences between the aerial image and the in-vehicle camera image due to viewpoint and illumination conditions, and (2) occlusions in the in-vehicle camera image. To solve the first problem, we use the SURF image descriptor, which achieves robust feature-point matching under these image differences. Additionally, we extract appropriate feature points from each road-marking region on the road plane in both images. For the second problem, we utilize multiple sequential in-vehicle camera frames in the matching. The experimental results demonstrate that the proposed method improves both ego-localization accuracy and stability.
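The robust feature-point matching described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: it assumes SURF-like descriptor vectors have already been extracted from road-marking regions in both images (in practice via an OpenCV detector; SURF is available only in contrib builds), and it shows only nearest-neighbour matching with Lowe's ratio test, which rejects ambiguous correspondences. The function and variable names are hypothetical.

```python
import numpy as np

def ratio_test_match(desc_vehicle, desc_aerial, ratio=0.8):
    """Pair each in-vehicle descriptor with its nearest aerial descriptor,
    keeping the pair only when the best distance is clearly smaller than
    the second-best (Lowe's ratio test rejects ambiguous matches)."""
    matches = []
    for i, d in enumerate(desc_vehicle):
        dists = np.linalg.norm(desc_aerial - d, axis=1)  # Euclidean distances
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy data: ten aerial descriptors, plus slightly perturbed copies of the
# first three standing in for the same road markings seen from the car.
rng = np.random.default_rng(0)
desc_aerial = rng.normal(size=(10, 64))
desc_vehicle = desc_aerial[:3] + 0.01 * rng.normal(size=(3, 64))
print(ratio_test_match(desc_vehicle, desc_aerial))  # → [(0, 0), (1, 1), (2, 2)]
```

In the full method, the surviving matches would then feed a robust geometric fit (e.g. a RANSAC homography) to register the camera frame onto the aerial image, with multiple sequential frames used to bridge occlusions.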

Keywords

Iterative Closest Point, Aerial Image, Vehicle Position, Multiple Frames, Accurate Matching


References

  1. Google Inc.: Google Maps (2005), http://maps.google.com/
  2. Brakatsoulas, S., Pfoser, D., Salas, R., Wenk, C.: On map-matching vehicle tracking data. In: Proc. 31st Conf. on Very Large Data Bases, pp. 853–864 (2005)
  3. Kawasaki, H., Miyamoto, A., Ohsawa, Y., Ono, S., Ikeuchi, K.: Multiple video camera calibration using EPI for city modeling. In: Proc. 6th Asian Conf. on Computer Vision, vol. 1, pp. 569–574 (2004)
  4. Ono, S., Mikami, T., Kawasaki, H., Ikeuchi, K.: Space-time analysis of spherical projection image. In: Proc. 18th Int. Conf. on Pattern Recognition, pp. 975–979 (2006)
  5. Uchiyama, H., Deguchi, D., Takahashi, T., Ide, I., Murase, H.: Ego-localization using streetscape image sequences from in-vehicle cameras. In: Proc. Intelligent Vehicles Symp. 2009, pp. 185–190 (2009)
  6. Lin, Y., Yu, Q., Medioni, G.: Map-enhanced UAV image sequence registration. In: Proc. 8th Workshop on Applications of Computer Vision, pp. 15–20 (2007)
  7. Caballero, F., Merino, L., Ferruz, J., Ollero, A.: Homography based Kalman filter for mosaic building. Applications to UAV position estimation. In: Proc. Int. Conf. on Robotics and Automation, pp. 2004–2009 (2007)
  8. Pink, O., Moosmann, F., Bachmann, A.: Visual features for vehicle localization and ego-motion estimation. In: Proc. Intelligent Vehicles Symp. 2009, pp. 254–260 (2009)
  9. Bay, H., Ess, A., Tuytelaars, T., Van Gool, L.: SURF: Speeded up robust features. Computer Vision and Image Understanding 110(3), 346–359 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Masafumi Noda (1)
  • Tomokazu Takahashi (1, 2)
  • Daisuke Deguchi (1)
  • Ichiro Ide (1)
  • Hiroshi Murase (1)
  • Yoshiko Kojima (3)
  • Takashi Naito (3)
  1. Nagoya University, Nagoya, Japan
  2. Gifu Shotoku Gakuen University, Gifu, Japan
  3. Toyota Central Research & Development Laboratories, Inc., Nagakute, Japan
