
Computational Visual Media, Volume 4, Issue 3, pp. 253–266

Traffic signal detection and classification in street views using an attention model

  • Yifan Lu
  • Jiaming Lu
  • Songhai Zhang
  • Peter Hall
Open Access
Research Article

Abstract

Detecting small objects is a challenging task. We focus on a special case: the detection and classification of traffic signals in street views. We present a novel framework that uses a visual attention model to make detection more efficient without loss of accuracy, and which generalizes well. The attention model is designed to generate a small set of candidate regions at a suitable scale, so that small targets can be better located and classified. To evaluate our method in the context of traffic signal detection, we have built a traffic light benchmark with over 15,000 traffic light instances, based on Tencent street view panoramas. We have tested our method both on the dataset we built and on the Tsinghua–Tencent 100K (TT100K) traffic sign benchmark. Experiments show that our method has superior detection performance and is quicker than the general-purpose Faster R-CNN object detection framework on both datasets. It is competitive with state-of-the-art specialist traffic sign detectors on TT100K, while being an order of magnitude faster. To demonstrate generality, we tested it on the LISA dataset without tuning and obtained an average precision in excess of 90%.
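
The abstract describes a two-stage idea: an attention model first proposes a small number of candidate regions at a suitable scale, and only those regions are then examined for small targets. The following is a minimal sketch of that idea, not the authors' architecture or code: it assumes PyTorch, and the AttentionProposer network, the top_k_regions helper, the layer sizes, the 256-pixel crop window, and the top-k selection rule are all illustrative assumptions.

```python
# Sketch of an attention-then-detect pipeline (illustrative only, not the paper's code).
# A coarse "attention" network scores a downsampled street-view image; the strongest
# cells are mapped back to full-resolution crops, which a small-object detector or
# classifier would then process.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionProposer(nn.Module):
    """Predicts a 1-channel objectness heat map on a downsampled input (assumed design)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),  # per-cell objectness score
        )

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))  # (N, 1, H/4, W/4)


def top_k_regions(heatmap, k=8, window=256, image_size=2048):
    """Map the k strongest heat-map cells back to square crops in the full image."""
    n, _, h, w = heatmap.shape
    scores, idx = heatmap.view(n, -1).topk(k, dim=1)
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    # Scale cell coordinates up to full-resolution pixel centers.
    cy = (ys.float() + 0.5) * (image_size / h)
    cx = (xs.float() + 0.5) * (image_size / w)
    half = window / 2
    boxes = torch.stack([cx - half, cy - half, cx + half, cy + half], dim=-1)
    return boxes.clamp(0, image_size), scores


if __name__ == "__main__":
    # Toy usage: a 2048x2048 panorama is downsampled before the attention pass,
    # and the returned crops would be fed to a detector/classifier at full resolution.
    panorama = torch.rand(1, 3, 2048, 2048)
    small = F.interpolate(panorama, size=(512, 512), mode="bilinear", align_corners=False)
    heat = AttentionProposer()(small)
    boxes, scores = top_k_regions(heat, k=8, image_size=2048)
    print(boxes.shape)  # (1, 8, 4) candidate regions
```

The point of the sketch is the cost structure: the expensive full-resolution work is restricted to a handful of crops chosen by a cheap pass over a downsampled image, which is how an attention stage can speed up small-object detection without discarding small targets.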

Keywords

traffic light detection; traffic light benchmark; small object detection; CNN

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61772298), Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.

Copyright information

© The Author(s) 2018

Authors and Affiliations

  • Yifan Lu (1)
  • Jiaming Lu (1)
  • Songhai Zhang (1)
  • Peter Hall (2)
  1. TNList, Tsinghua University, Beijing, China
  2. Department of Computer Science, University of Bath, Bath, UK
