Data Extraction from Traffic Videos Using Machine Learning Approach

  • Conference paper

Soft Computing for Problem Solving

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 816)

Abstract

Traffic safety has become a major concern in countries with extensive road networks. With ever-increasing traffic volumes and a growing mix of vehicle types, it has become increasingly difficult to check whether a road network can sustain the surge. Evaluating the efficiency and safety of a network requires several factors such as speed, vehicular composition, and traffic volume, and collecting data for each of these factors is time-consuming. Most current work in intelligent transportation systems involves collecting data from sources such as surveillance cameras, but collection alone is not sufficient: processing the data to determine the safety level of the road network takes considerable time, and the effort multiplies for a country with a network as vast as India's. Expanding the use of intelligent transportation systems for data processing is therefore a necessity, and traffic flow data is required as a first step. High-resolution video cameras were accordingly placed at vantage points approximately 100–150 m from the centers of two intersections selected in the National Capital Region (NCR) of India. Traffic flow data was recorded from 10 am to 4 pm under good weather conditions. The recorded videos were then processed to segregate different types of vehicles; the proposed algorithm handles vehicles that are up to 70% occluded. A CNN-LSTM model (Krizhevsky et al. in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012 [6]) is trained for vehicle recognition. A minimal cover volume algorithm is then developed using bi-grid mapping to classify vehicles and estimate parameters such as base center and orientation while minimizing error due to occlusion. The proposed approach is based on machine learning; it estimates the required parameters with minimal human assistance and achieves an object-detection accuracy of 95.6% on the test video and 87.6% on CIFAR-100.
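The paper's implementation is not reproduced on this page. As a rough illustration of the kind of CNN-LSTM classifier the abstract describes (a per-frame convolutional feature extractor followed by an LSTM that aggregates features over a short clip of a tracked vehicle), the sketch below assumes a tensorflow.keras stack; the clip length, crop size, and the number of vehicle categories are illustrative assumptions rather than details from the paper, and the bi-grid minimal cover volume step is not shown.

    # Minimal CNN-LSTM sketch (assumptions: tensorflow.keras, 8-frame clips of
    # 64x64 RGB vehicle crops, 5 vehicle classes). Not the authors' implementation.
    from tensorflow.keras import layers, models

    FRAMES, H, W, C = 8, 64, 64, 3   # assumed clip length and crop size
    NUM_CLASSES = 5                  # assumed number of vehicle categories

    def build_cnn_lstm(num_classes=NUM_CLASSES):
        # Per-frame CNN feature extractor, applied to every frame of the clip.
        frame_cnn = models.Sequential([
            layers.Conv2D(32, 3, activation="relu", padding="same",
                          input_shape=(H, W, C)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.GlobalAveragePooling2D(),
        ])
        # The LSTM aggregates per-frame features over time; a softmax layer
        # then predicts the vehicle class for the tracked object.
        model = models.Sequential([
            layers.TimeDistributed(frame_cnn, input_shape=(FRAMES, H, W, C)),
            layers.LSTM(64),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Usage: model = build_cnn_lstm(); model.fit(clips, labels, ...) where
    # clips has shape (batch, FRAMES, H, W, C) and labels are integer class ids.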


References

  1. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR (2015)

  2. Goodfellow, I.J., Warde-Farley, D., Mirza, M., Courville, A., Bengio, Y.: Maxout networks. arXiv:1302.4389 (2013)

  3. Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: OverFeat: integrated recognition, localization and detection using convolutional networks. In: ICLR (2014)

  4. Erhan, D., Szegedy, C., Toshev, A., Anguelov, D.: Scalable object detection using deep neural networks. In: CVPR (2014)

  5. Szegedy, C., Reed, S., Erhan, D., Anguelov, D.: Scalable, high-quality object detection. arXiv:1412.1441v2 (2015)

  6. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

  7. Sainath, T.N., Vinyals, O., Senior, A., Sak, H.: Convolutional, long short-term memory, fully connected deep neural networks. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4580–4584. IEEE (2015)

  8. Nokandeh, M.M., Ghosh, I., Chandra, S.: Determination of passenger-car units on two-lane intercity highways under heterogeneous traffic conditions. J. Transp. Eng. 142(2), 04015040 (2015)

  9. World Health Organization (WHO): World Report on Road Traffic Injury Prevention, WHO, Geneva, http://www.who.int/violence_injury_prevention/publications/road_traffic/world_report/en/ (2004)

  10. NTDPC: Vol. 02, Part 1, Chap. 02. http://planningcommission.nic.in/sectors/NTDPC/volume2_p1/trends_v2_p1.pdf

Author information

Corresponding author

Correspondence to Anshul Mittal.

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Mittal, A., Gupta, M., Ghosh, I. (2019). Data Extraction from Traffic Videos Using Machine Learning Approach. In: Bansal, J., Das, K., Nagar, A., Deep, K., Ojha, A. (eds) Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, vol 816. Springer, Singapore. https://doi.org/10.1007/978-981-13-1592-3_16
