Research of Intelligent Car Dual-Navigation System Based on Complex Environment

  • Guang-bin Bao
  • Le Zhang
  • Hong Zhao
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 503)


Visual path recognition is one of the most important means of intelligent car navigation. However, realizing accurate visual navigation in a complex environment still faces many challenges. As a supplement, manual navigation offers high reliability and high flexibility. In this paper, the intelligent car was provided with two navigation modes which can be switched in real time. The first mode combined the MC9SXS128 Single Chip Microcomputer (SCM) as the core controller with a CMOS camera to achieve autonomous navigation. The second mode, manual navigation, was brought about by a Play Station 2 (PS2) handle connected to the above SCM indirectly. The combination of these two navigation modes is called the dual-navigation system. Finally, the experimental results show that autonomous navigation is competent for navigation tasks in a general complex environment and can effectively overcome interference from the external environment, such as the shadow generated by backlight and uneven lighting. Besides, manual navigation is well qualified for navigation tasks in a non-general complex environment, and the two navigation modes work well with each other.


Intelligent car · Complex environment · Recognition · Visual autonomous navigation · Manual navigation

With the rapid increase in car ownership, urban traffic problems have become increasingly prominent, and intelligent transportation systems (ITS) have received extensive attention and research [1]. The intelligent car is an important part of ITS, so intelligent traffic cannot be achieved without it [2, 3, 4]. The navigation system is one of the most important subsystems of the intelligent car, and its performance directly determines the car's stability and safety [5].

In recent years, with the rapid development of the intelligent car and ITS technology, the trend toward car intelligence has become unstoppable. However, because of the complexity and diversity of the road environment, many problems must still be solved on the way from intelligent driving to true all-weather unmanned driving. Song and others put forward an intelligent car aided navigation system based on machine vision, presenting a new algorithm for detecting the area of front target cars based on image entropy. Front target cars are detected by considering the space-time continuity, statistical characteristics, and texture character of sequential images. The detection method based on the area information entropy of the image not only improves the accuracy of target detection but also reduces the algorithmic complexity of the system and strengthens its real-time character and robustness; such an intelligent navigation cruise system has a significant effect on reducing traffic accidents [6]. Li and others proposed a dual-drive dual-control intelligent car bus system so that the intelligent car has two driving styles, manual driving and automatic driving, which can be switched flexibly by voice, touch, and pedal; however, the system was not implemented concretely [7].

In this paper, the 16-bit microcontroller MC9SXS128 was taken as the core controller of the intelligent car, and a dual-navigation system which can switch navigation modes in real time was realized. On this platform, we then researched the intelligent car dual-control system. The visual navigation system and the manual navigation system in a complex environment were studied, and some feasible solutions are proposed. Finally, the effectiveness of the navigation system in a complex environment was verified by experiments.

1 Hardware System of Intelligent Car

The hardware system of the intelligent car is mainly composed of the power module, the minimum system of the microcontroller (MCU), the motor drive module, and the path information acquisition sensor [8, 9]. The power module supplies power to the hardware system; the MC9SXS128 MCU from Freescale was chosen as the core controller; the L298N was used in the motor drive module, which can drive the four direct-current (DC) motors w1, w2, w3, and w4; the CMOS area-array camera OV7620 was selected as the path information acquisition sensor; and an Arduino UNO R3 core board serves as a communication link between the MC9SXS128 core controller and the PS2 remote control handle. The intelligent light compensation module turns the LEDs on or off by detecting the light intensity of the surrounding environment.

The core controller processes the path information collected from the camera and controls the four-channel DC motors by the designed algorithm, so that the intelligent car achieves visual autonomous navigation. For some special scenarios or tasks, manual navigation can be achieved through the remote control handle.

2 Research on Visual Autonomous Navigation System

2.1 Intelligent Lights

The camera is the core sensor of the intelligent car, and it directly determines whether the car can track accurately. Because the camera is a photosensitive device and sensitive to light, the shadow generated by backlighting of the car's body causes a large area of noise in its vision (shown in Fig. 8). Noise is also easily produced in an environment with insufficient exposure and uneven light. Usually, such noise seriously affects the path identification of the intelligent car.

Given the above problems, the basic solution is to adjust the light threshold according to the different light intensities of the environment, so the determination of the threshold is vital. Threshold methods are divided into the dynamic threshold method and the fixed threshold method. The fixed threshold method cannot adapt well to changes in ambient light intensity, which is not conducive to extracting the black line (path edge). Consequently, the general approach is to design an algorithm that sets the threshold dynamically, adapting to different ambient light intensities and effectively eliminating the interference of noise and pulses so as to protect the edge information of the black line in the image. But the dynamic threshold method has its own shortcomings. On the one hand, it requires a large number of operations, which increases the burden on the core controller: once we want to obtain more path information and increase the image size, the microcontroller's processing is easily delayed and the intelligent car can no longer track normally. On the other hand, it cannot solve the interference of the car's backlight shadow.

To overcome the defects of the dynamic threshold method, this paper proposed a solution combining hardware and software: under a fixed threshold, the photosensitive resistor of the intelligent light compensation module detects the surrounding ambient light and automatically opens or closes the light (composed of five white LED lamps). Once the ambient light intensity falls below the illumination intensity set in the algorithm, the light is turned on; otherwise, the lamps remain off.
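As a minimal illustration of this hardware-assisted approach, the on/off decision can be sketched as a simple comparison against a tuned brightness threshold. The threshold value and the ADC convention below are assumptions of this sketch, not values from the paper:

```c
#include <stdint.h>

/* Hypothetical ADC threshold: ambient readings below this mean "too dark".
 * The real value would be tuned on the actual photoresistor divider. */
#define LIGHT_ON_THRESHOLD 300u

/* Decide whether the five white LEDs should be on.
 * ambient_adc: ADC reading of the photoresistor divider; this sketch
 * assumes a larger value means a brighter environment. */
static uint8_t leds_should_be_on(uint16_t ambient_adc)
{
    return ambient_adc < LIGHT_ON_THRESHOLD ? 1u : 0u;
}
```

The same comparison would run periodically on the MCU, driving the LED output pin directly.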

2.2 Path Identification and Control Strategy

2.2.1 Image Preprocessing

First, the image acquired by the camera is binarized, that is, each pixel in the image is directly compared with the threshold: if the value of the pixel is no less than the threshold, the point is judged to be white; otherwise, the point is black. Then a median filter is used to process the image, which removes the noise caused by system noise, environmental interference, and other factors, and yields a good edge contour image for the later extraction of the path centerline.
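The binarization rule and the median filter described above can be sketched as follows. The 3 × 3 window size is an assumption (the paper does not specify the filter's window), and for binary data the median of nine samples reduces to a majority vote:

```c
#include <stdint.h>

/* Binarize one pixel against a fixed threshold: >= threshold -> white (1),
 * otherwise black (0), matching the rule described in the text. */
static uint8_t binarize_px(uint8_t px, uint8_t threshold)
{
    return px >= threshold ? 1u : 0u;
}

/* 3x3 median filter on a binary image (values 0/1); border pixels are
 * copied unchanged. For 0/1 data the median of 9 samples is simply a
 * majority vote over the window. */
static void median3x3(const uint8_t *src, uint8_t *dst, int h, int w)
{
    for (int r = 0; r < h; r++) {
        for (int c = 0; c < w; c++) {
            if (r == 0 || c == 0 || r == h - 1 || c == w - 1) {
                dst[r * w + c] = src[r * w + c];  /* copy borders */
                continue;
            }
            int ones = 0;
            for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++)
                    ones += src[(r + dr) * w + (c + dc)];
            dst[r * w + c] = ones >= 5 ? 1u : 0u; /* median of 9 binary values */
        }
    }
}
```

An isolated noise pixel flipped inside a uniform region is removed by the vote, while a continuous black edge line (several dark pixels in a row) survives.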

2.2.2 Path Centerline Extraction and Path Pattern Recognition

In this paper, the test track shown in Fig. 1 was used to conduct this research, where A and B are the track exit and entrance. Because the path does not have edge lines on both sides everywhere, and the track contains both complex and special paths, this paper proposed a horizontal-scan and longitudinal-scan method to achieve track centerline extraction and path pattern recognition, so that the intelligent car can travel along the track centerline (namely the path centerline mentioned above).
Fig. 1

Test track

The specific ideas for horizontal and vertical scanning are as follows:

  (1) Black-line extraction and visualization of the general track center

The information collected by the camera is a two-dimensional image (the image collected in this paper is 120 × 40 pixels), where the coordinates of the upper-left and lower-right corners of the image are (0, 0) and (39, 119) respectively, as shown in Fig. 2 (left). A two-dimensional matrix can be constructed according to the size of the captured image to store these pixels. The image region nearest the front of the car is given to the MCU to deal with first, while the pixels farther ahead can be used for path anticipation and path pattern recognition. So the MCU first scans the pixels near the front of the intelligent car: starting from pixel C (the center of the abscissa of the image), it scans toward the two path edges simultaneously until it reaches edge pixels A and B, and records their vertical and horizontal coordinates. This is horizontal scanning, as shown in Fig. 2 (right). Accordingly, the actual track centerline abscissa is \({\text{Center}} = ({\text{A}} + {\text{B}})/2\).
Fig. 2

Image capture

In order to eliminate the interference of random noise, a better solution is to take the average of the centerline abscissas of the n rows nearest the car as the track center abscissa. The specific calculation method is shown in the equation
$${\text{Center}}\_{\text{now}} = \frac{1}{{{\text{row}} - k}}\sum\limits_{i = k}^{{{\text{row}} - 1}} {\frac{{\left( {A_{i} + B_{i} } \right)}}{2}} ,\quad k = 0,1, \ldots ,39.$$

Among them, row is the total number of image rows, and k is chosen so that the number of averaged rows, row − k, is generally 3–6.
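A minimal sketch of this horizontal scan and row averaging might look like the following, assuming a binary image in which the track surface is white (1) and the edge lines are black (0); the scanning details beyond what the text states are illustrative:

```c
#include <stdint.h>

#define IMG_W 120  /* columns, abscissa 0..119 */
#define IMG_H 40   /* rows,    ordinate 0..39  */

/* For each of the last (row - k) image rows (those nearest the car), start
 * at the center column C and walk left/right until the black track edges A
 * and B are met; then average (A + B) / 2 over those rows to get the track
 * centerline abscissa Center_now. */
static int center_now(const uint8_t img[IMG_H][IMG_W], int k)
{
    int sum = 0, n = 0;
    for (int r = k; r < IMG_H; r++) {
        int a = IMG_W / 2, b = IMG_W / 2;  /* start from center column C */
        while (a > 0 && img[r][a] != 0) a--;          /* left edge A  */
        while (b < IMG_W - 1 && img[r][b] != 0) b++;  /* right edge B */
        sum += (a + b) / 2;
        n++;
    }
    return n ? sum / n : IMG_W / 2;  /* fall back to image center */
}
```

With edges symmetric about column 60 the function returns 60, i.e., the car sitting exactly on the track centerline.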

In order to facilitate observation, the center line can be visualized through the upper computer, as shown in Fig. 3.
Fig. 3

Extraction and visualization of track centerline

  (2) Pattern recognition on special track paths

For the track path shown on the left in Fig. 4, a longitudinal scan must be performed on the basis of the lateral scan. Since the image information in the area closer to the intelligent car has no value for longitudinal scanning, it is only necessary to scan along the single column through pixel D, farther ahead, to identify the path information in front, as shown on the right of Fig. 4. When point D is reached, the coordinates of the D pixel are recorded.
Fig. 4

The special track (which is more complex than the normal tracks)

The results of horizontal scanning and longitudinal scanning can be combined to identify any path mode in front of the intelligent car, and the car then makes the appropriate driving action according to the current path mode. The program flow chart of the horizontal-vertical scan is shown in Fig. 5.
Fig. 5

Program flow chart of horizontal-vertical scan

Among them, \({\text{Row}}\_{\text{on}} = 0\) means that the scanned point is black; \({\text{Column}}\_{\text{left}} = 0\) or 1 indicates that the point on the left side of the scan is black or white, respectively.

In order to keep the intelligent car moving near the centerline of the track, the car needs a reference, and the track centerline is usually chosen as the reference line. The ideal running state is that the perpendicular bisector of the car axle coincides with the track centerline; in the image, this reference \({\text{Image}}\_{\text{center}}\_{\text{line}}\) is half the image width, i.e., 60. In fact, however, this ideal situation usually lasts only a moment, after which a deviation appears between the two lines. The deviation is defined as \({\text{Column}}\_{\text{center}}\_{\text{Dev}} = {\text{Image}}\_{\text{center}}\_{\text{line}} - {\text{Center}}\_{\text{now}}\); obviously, its value can be positive or negative.

The deviation value can be used to control the DC motors, and the moving route of the intelligent car can be adjusted toward the desired state. This method ensures that the car travels near the centerline and does not run off the track; however, it still has a flaw. From the driving trajectory, it is not difficult to find that the swing amplitude of the car is very large when it runs near the track centerline, which causes the car to jitter intermittently, further affecting the camera's collection of path information and increasing the probability of noise.

The correction to the driving direction of the intelligent car is therefore varied dynamically through PWM (Pulse Width Modulation): when \({\text{Column}}\_{\text{center}}\_{\text{Dev}}\) is small, the car steers slightly; on the contrary, when it is large, the car steers sharply. By dynamically selecting the correction range according to the deviation value, the real intelligent car can be driven smoothly. The improved algorithm flow chart is shown in Fig. 6.
Fig. 6

Improved algorithm flow chart
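The idea of scaling the steering correction with the deviation can be sketched as a clamped proportional rule. The gain and clamp values here are hypothetical, since the paper gives no concrete numbers:

```c
#include <stdint.h>

#define IMAGE_CENTER_LINE  60  /* half the 120-pixel image width      */
#define KP                  8  /* hypothetical proportional gain       */
#define PWM_CORR_MAX      400  /* hypothetical clamp on the correction */

/* Signed correction to the PWM duty, added to one side's motors and
 * subtracted from the other: a small deviation yields slight steering,
 * a large deviation yields sharp steering, and the clamp keeps the duty
 * within the driver's valid range. */
static int pwm_correction(int center_now)
{
    int dev = IMAGE_CENTER_LINE - center_now;  /* Column_center_Dev */
    int corr = KP * dev;
    if (corr >  PWM_CORR_MAX) corr =  PWM_CORR_MAX;
    if (corr < -PWM_CORR_MAX) corr = -PWM_CORR_MAX;
    return corr;
}
```

The sign of the returned value encodes the turn direction, mirroring the positive/negative deviation defined in the text.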

Likewise, in order to remove the interference of random noise, the intelligent car should scan the n points adjacent to a detected black spot, where n generally takes 2–5. Only if these pixels are also determined to be black does the car confirm that the line ahead has really been scanned.
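This confirmation step might be sketched as follows, where a candidate black pixel is accepted only if its n following neighbors in the scan direction are also black:

```c
#include <stdint.h>

/* Confirm a detected black pixel is part of the edge line rather than
 * random noise: re-check the n neighboring pixels (n = 2..5 per the text)
 * and accept the detection only if they are all black (0). */
static uint8_t confirm_black(const uint8_t *row_px, int len, int col, int n)
{
    for (int i = col; i < col + n && i < len; i++)
        if (row_px[i] != 0)   /* any white neighbor -> treat as noise */
            return 0u;
    return 1u;
}
```

A lone dark pixel produced by shadow or sensor noise fails the check, while a genuine edge line several pixels wide passes it.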

3 Research on Manual Navigation System

The moving process of the intelligent car is affected by the working environment and its own structure. The single-drive structure of the traditional intelligent car is not flexible enough to meet the real-time control requirements of complex conditions [10]. A handle is a common way for people to interact with cars, and controlling the intelligent car through a handle achieves real-time control. Usually, the intelligent car and the remote control handle need a special communication protocol, which is difficult to transplant across platforms. In this paper, a simple communication method is proposed: by analyzing the PS2 instructions with the MC9SXS128 core controller, control of the intelligent car is realized easily. The communication block diagram of the handle and the MC9SXS128 core controller is shown in Fig. 7.
Fig. 7

Communication principle frame diagram

On the premise that the Arduino can communicate with the PS2 handle properly, the Arduino UNO R3 controller is treated as part of the handle receiver (the dotted box in Fig. 7). By modifying the communication code between the Arduino UNO R3 and the PS2 handle, the handle instructions are represented on external-interrupt I/O pins as high levels, low levels, or level transitions. These signals are transmitted over wires to the external interrupts of the MC9SXS128 core controller, which analyzes the corresponding signals. Thus the manual navigation system realizes indirect communication between the MC9SXS128 core controller and the remote control handle, greatly reducing the difficulty of cross-platform code transplantation.
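On the MC9SXS128 side, resolving the interrupt-line states into driving commands could look like the sketch below. The line assignments, priority order, and command names are hypothetical and depend on the actual wiring:

```c
#include <stdint.h>

enum drive_cmd { CMD_STOP, CMD_FORWARD, CMD_BACKWARD, CMD_LEFT, CMD_RIGHT };

/* Each handle button is assumed to drive one dedicated interrupt line from
 * the Arduino to the MC9SXS128; each pair of interrupts resolves one PS2
 * instruction. The core controller samples the latched line states and
 * decodes them into a driving command, stopping when no line is asserted. */
static enum drive_cmd decode_cmd(uint8_t fwd, uint8_t back,
                                 uint8_t left, uint8_t right)
{
    if (fwd)   return CMD_FORWARD;
    if (back)  return CMD_BACKWARD;
    if (left)  return CMD_LEFT;
    if (right) return CMD_RIGHT;
    return CMD_STOP;   /* no line asserted -> stop the car */
}
```

Because the decoding depends only on plain I/O levels, the same scheme would port to any controller with a few external interrupts, which is the cross-platform benefit claimed above.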

4 Results Analysis and Comparison

The comparison experiment, shown in Figs. 8 and 9, indicates that the intelligent light compensation method is easier to realize: it solved the problems of backlight shadow and underexposure, so images with less noise were obtained and the burden on the SCM was reduced. Overall, its performance is better than that of the pure fixed threshold method.
Fig. 8

Pure fixed threshold method centerline

Fig. 9

Intelligent light compensation method

From the comparison experiment it is not difficult to find that the improved algorithm makes the swing of the intelligent car relative to the centerline smaller and the trajectory smoother, and also makes the car run much more steadily, so the images it collects contain less noise. Comparing the improved algorithm with the previous one, the trajectory of the intelligent car running on the straight track is shown in Fig. 10.
Fig. 10

Comparison of trajectories

The comparison shows that the indirect communication method proposed in this paper can greatly reduce the difficulty of cross-platform code migration, simplify the operation, and save a lot of work. Of course, this method also has a disadvantage: it occupies the external interrupt resources of the MC9SXS128 and the Arduino UNO R3, and each pair of interrupts can only resolve one PS2 remote control instruction; for a small instruction set, however, remote control operation is fully feasible. It is worth noting that the ground wires of the MC9SXS128 and the Arduino UNO R3 must be connected together.

5 Conclusions

Taking the MC9SXS128 as the core controller, the intelligent car navigation system was designed and implemented. In this paper, an intelligent light compensation method was proposed to solve the problems that lighting may be uneven and that a dynamic threshold algorithm aggravates the load of the MCU, and an ideal result was obtained. Through the improvement of the path recognition algorithm, special path modes can be identified accurately. Through the improvement of the control strategy, the swing of the intelligent car away from the track centerline was reduced obviously. In addition, this paper also proposed a simple communication method which realized communication between the core controller and the remote control handle, greatly reducing the difficulty of the work and saving time.

The experiments show that the intelligent car can be switched freely between the automatic tracking mode and the remote-control handle mode, that it can complete all necessary tasks, and that the system has good anti-interference ability and robustness. How to further improve the intelligence, stability, and safety of the system will be the focus of the next step.


  1. Wang J, Chao Z, Shan Y et al (2010) Research on key technologies for urban unmanned intelligent car. In: Intelligent systems, IEEE, pp 51–54
  2. Sun DX, Zhang WB, Liu XY (2012) The intelligent car navigation system based on photoelectric sensor. Adv Mater Res 510:835–841
  3. Babu DS, Joseph P et al (2012) Intelligent turning system for a smart car. ICETT 4:45–47
  4. Hasan N, Didaralalam SM, Rezwanul Huq S (2011) Intelligent car control for a smart car. Int J Comput Appl 14(3):15–19
  5. Zhang YZ, Shi EY, Wu CD et al (2009) On the navigation system based on CCD for smart car. J Northeast Univ 30(2):162–165
  6. Song G, Pan Y (2007) The research of intelligent cars aided navigation system based on machine vision. In: International conference on mechanical engineering and mechanics 2007
  7. Li D, Zhang X, Han W et al (2014) Double-drive dual-control smart car bus system. CN: CN104079669A
  8. Gao YB, Cong JI, Han PW (2013) Design of smart car system with camera-based path recognition. J Lanzhou Univ Technol 39(06):97–102
  9. Zhang HT, Zhao SS, Han JH (2009) Design of intelligent car based on CMOS image sensor. J Henan Univ Sci Technol 30(01):18–21
  10. Dai S, Chen B, Fan S (2011) Control and study of wireless control system of intelligent car. Comput Meas Control 19(9):2125–2127

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. School of Computer and Communication, Lanzhou University of Technology, Lanzhou, China
