An End-to-End Practical System for Road Marking Detection

  • Chaonan Gu
  • Xiaoyu Wu
  • He Ma
  • Lei Yang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

Road markings are a special kind of symbol painted on the road surface, used to regulate the behavior of traffic participants. According to our survey, no paper has yet proposed a mature, highly practical method to detect and classify these important fine-grained markings. Deep learning techniques, especially deep neural networks, have proven effective on a wide variety of computer vision tasks, so building a road marking detection system on deep neural networks is a practical solution.

In this paper, we present an accurate and efficient road marking detection system that handles seven common road markings. Our model is based on the R-FCN detection framework with ResNet-18 as the backbone. SE blocks and a data balancing strategy are used to further improve the accuracy of the detection model. The resulting model achieves a good trade-off between accuracy and speed, and obtains strong results on our self-built road marking dataset.
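The two accuracy levers named above, SE channel recalibration and data balancing via median frequency weighting, can be illustrated with a minimal numpy sketch. The function names, tensor shapes, and signatures below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation gate over one (C, H, W) feature map.

    w1: (C//r, C) bottleneck weights, w2: (C, C//r) expansion weights,
    where r is the channel-reduction ratio.
    """
    z = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z + b1, 0.0)           # excitation: bottleneck FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))   # expansion FC + sigmoid -> per-channel gate
    return x * s[:, None, None]                # recalibrate each channel by its gate

def median_frequency_weights(class_counts):
    """Per-class loss weights: weight(c) = median(freq) / freq(c).

    Rare classes receive weights above 1, frequent classes below 1,
    which counteracts class imbalance in the training loss.
    """
    counts = np.asarray(class_counts, dtype=np.float64)
    freqs = counts / counts.sum()
    return np.median(freqs) / freqs
```

For example, class counts of `[10, 20, 40]` yield weights `[2.0, 1.0, 0.5]`, up-weighting the rarest marking class by 2x relative to the median-frequency class.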

Keywords

Road marking detection · R-FCN · SE block · Median frequency balancing

Notes

Acknowledgement

This work was supported by the National Natural Science Foundation of China (61801441) and the Fundamental Research Funds for the Central Universities.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Communication University of China, Beijing, China