
Adaptive Traffic Signal Control Methods Based on Deep Reinforcement Learning

Chapter in Intelligent Transport Systems for Everyone’s Mobility

Abstract

Smart cities are characterized by their use of intelligent transportation systems (ITS), which employ advanced traffic signal control methods to achieve effective and efficient traffic operations. Recently, owing to significant progress in artificial intelligence, research has focused on machine learning-based frameworks for adaptive traffic signal control (ATSC). In particular, deep reinforcement learning (DRL) can be formulated as a model-free technique and applied to optimal action selection problems. We propose a DRL-based ATSC method with two kinds of neural network models: deep neural networks (DNN) and convolutional neural networks (CNN). During training, the microscopic simulator Vissim builds a virtual intersection in which the agent observes traffic states and makes interactive control decisions. For each testing scenario, five random experiments are generated, and the average system total delay (ASTD) over the five experiments is compared for the two proposed neural network models and for a fixed timing plan derived from the Webster delay formulas. In these preliminary tests, the DNN-model (CNN-model) signal control agent achieved the lowest ASTD in the unsaturated (oversaturated) cases. Moreover, the CNN-model shows better feature extraction capability than the DNN-model, particularly in the oversaturated cases. We found that situations involving specific traffic maneuvers, such as a spillback of a protected left-turn bay, are learned well by the proposed CNN-model, even though the training scenarios were only unsaturated.
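The fixed-timing baseline mentioned above rests on Webster's classical results: an optimal cycle length C0 = (1.5L + 5)/(1 − Y) and a per-vehicle delay estimate combining a uniform and a random (overflow) term. As a rough illustration only (this is not the chapter's implementation; function names and parameter values are our own), a minimal Python sketch:

```python
def webster_optimal_cycle(lost_time_s, critical_flow_ratios):
    """Webster's optimal cycle length C0 = (1.5*L + 5) / (1 - Y), in seconds.

    lost_time_s: total lost time L per cycle (s).
    critical_flow_ratios: flow ratio q/s of the critical lane group per phase;
    their sum Y must be < 1, otherwise the intersection is oversaturated.
    """
    Y = sum(critical_flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection oversaturated (Y >= 1)")
    return (1.5 * lost_time_s + 5.0) / (1.0 - Y)


def webster_delay(cycle_s, green_ratio, flow_vps, saturation_flow_vps):
    """Approximate average delay per vehicle (s) from Webster's formula.

    cycle_s: cycle length C (s); green_ratio: lambda = g/C;
    flow_vps: arrival flow q (veh/s); saturation_flow_vps: saturation flow s.
    Uses the common 0.9*(d1 + d2) approximation of Webster's correction term.
    """
    x = flow_vps / (green_ratio * saturation_flow_vps)  # degree of saturation
    d1 = cycle_s * (1.0 - green_ratio) ** 2 / (2.0 * (1.0 - green_ratio * x))  # uniform delay
    d2 = x ** 2 / (2.0 * flow_vps * (1.0 - x))  # random (overflow) delay
    return 0.9 * (d1 + d2)
```

For example, with 12 s of lost time and critical flow ratios of 0.30 and 0.35, the optimal cycle is (1.5·12 + 5)/(1 − 0.65) ≈ 65.7 s. Comparing such a fixed plan's delay against the simulated delay under the DRL agent is what the ASTD comparison in the abstract amounts to.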




Corresponding author: Chia-Hao Wan


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Wan, CH., Hwang, MC. (2019). Adaptive Traffic Signal Control Methods Based on Deep Reinforcement Learning. In: Mine, T., Fukuda, A., Ishida, S. (eds) Intelligent Transport Systems for Everyone’s Mobility. Springer, Singapore. https://doi.org/10.1007/978-981-13-7434-0_11
