An Attention Bi-box Regression Network for Traffic Light Detection

  • Juncai Ma
  • Yao Zhao (corresponding author)
  • Ming Luo
  • Xiang Jiang
  • Ting Liu
  • Shikui Wei
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11935)

Abstract

Recently, object detection has made significant progress due to the development of deep learning. However, traffic lights are extremely small objects, and directly applying off-the-shelf methods based on deep convolutional neural networks leads to unsatisfactory performance. To deal with this problem, we propose an improved detection network based on the Faster R-CNN framework. By introducing an attention module on top of the network, the network can focus better on small object regions. At the same time, features from shallow layers are leveraged for classification and bounding box regression, so that the features of small objects can be captured better. In addition, we design a two-branch network that detects the traffic light box and the bulb box at the same time. In this manner, the performance of traffic light detection is noticeably improved. Compared with other detection algorithms, our model achieves competitive results on the VIVA traffic light challenge dataset.
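The abstract describes the architecture only at a high level. Purely as an illustration, the PyTorch sketch below shows what an attention-gated feature map and a two-branch (traffic-light box / bulb box) detection head could look like; all module names, channel sizes, and tensor shapes are assumptions for the sketch, and this is not the authors' implementation.

    # Minimal sketch (not the authors' code) of the ideas in the abstract:
    # a spatial attention gate over backbone features and a two-branch head
    # that regresses both a traffic-light box and a bulb box per RoI.
    # Channel counts, pooled size, and class count are assumptions.
    import torch
    import torch.nn as nn


    class AttentionGate(nn.Module):
        """Spatial attention: re-weights features so small-object regions stand out."""

        def __init__(self, channels: int):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location score

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            attn = torch.sigmoid(self.score(x))  # (N, 1, H, W) in [0, 1]
            return x * attn                      # gate the feature map


    class BiBoxHead(nn.Module):
        """Two branches: one for the traffic-light box, one for the bulb box."""

        def __init__(self, in_dim: int, num_classes: int):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(inplace=True))
            self.cls = nn.Linear(1024, num_classes)            # classification scores
            self.light_box = nn.Linear(1024, 4 * num_classes)  # traffic-light box deltas
            self.bulb_box = nn.Linear(1024, 4 * num_classes)   # bulb box deltas

        def forward(self, roi_feats: torch.Tensor):
            h = self.shared(roi_feats.flatten(start_dim=1))
            return self.cls(h), self.light_box(h), self.bulb_box(h)


    if __name__ == "__main__":
        # Toy forward pass with assumed shapes: 512-channel features,
        # 7x7 RoI-pooled features, and 4 traffic-light classes.
        gate = AttentionGate(channels=512)
        head = BiBoxHead(in_dim=512 * 7 * 7, num_classes=4)
        feats = gate(torch.randn(2, 512, 56, 56))   # attended feature map
        rois = torch.randn(8, 512, 7, 7)            # stand-in RoI features
        scores, light_deltas, bulb_deltas = head(rois)
        print(scores.shape, light_deltas.shape, bulb_deltas.shape)

In such a design, the two regression branches share the same RoI features but are supervised with separate box targets, so the bulb localization can benefit from the surrounding housing context captured by the traffic-light branch.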

Keywords

Traffic light detection · Two-branch structure · Attention · Convolutional neural networks

Acknowledgements

This work is supported by the Fundamental Research Funds for the Central Universities (No. 2018JBZ001).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Juncai Ma (1, 2)
  • Yao Zhao (1, 2) (corresponding author)
  • Ming Luo (1, 2)
  • Xiang Jiang (1, 2)
  • Ting Liu (1, 2)
  • Shikui Wei (1, 2)

  1. School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
  2. The National Engineering Laboratory of Urban Rail Transit Communication and Operation Control, Beijing, China