An End-to-End LSTM-MDN Network for Projectile Trajectory Prediction

  • Li-he Hou
  • Hua-jun Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11936)

Abstract

Trajectory prediction from radar data is a signal processing problem that is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. This paper investigates a data-driven LSTM-MDN end-to-end network that predicts a projectile trajectory from incomplete and noisy radar measurements. Traditional prediction algorithms usually use a Kalman Filter (KF) or a similar estimator to obtain the target's position and velocity, and then extrapolate the launch point or impact point with numerical integration (e.g., Runge-Kutta or Adams methods), an approach that relies mainly on the accuracy of the dynamic model. In our method, a Long Short-Term Memory (LSTM) network estimates the true positions from the sampled, noisy radar measurement series, and a Mixture Density Network (MDN) extrapolates the trajectory and predicts the projectile's launch point. The two subnetworks are integrated into an end-to-end network that is trained on radar measurement samples of a projectile together with the ground truth of its launch point. Extensive experiments show that the proposed method outperforms traditional model-based methods, and its adaptability to the range of initial launch angles is significantly better than that of the traditional method.
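
For concreteness, the sketch below gives one plausible PyTorch reading of the architecture described above: an LSTM encodes the noisy radar measurement sequence, and a Mixture Density Network head outputs a Gaussian mixture over the launch point, trained with the mixture negative log-likelihood. The layer sizes, number of mixture components, diagonal-covariance mixture, and input/output dimensions (as well as the names LSTMMDN and mdn_nll) are illustrative assumptions, not the authors' exact configuration.

# Minimal LSTM-MDN sketch (assumptions noted in the lead-in above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMMDN(nn.Module):
    def __init__(self, in_dim=3, hidden=64, n_mix=5, out_dim=2):
        super().__init__()
        # LSTM encoder over the radar measurement sequence.
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        # MDN head: mixture weights, means, and standard deviations.
        self.pi = nn.Linear(hidden, n_mix)
        self.mu = nn.Linear(hidden, n_mix * out_dim)
        self.sigma = nn.Linear(hidden, n_mix * out_dim)
        self.n_mix, self.out_dim = n_mix, out_dim

    def forward(self, x):
        # x: (batch, time, in_dim) noisy radar measurements.
        _, (h, _) = self.lstm(x)
        h = h[-1]                                    # final hidden state of the last layer
        log_pi = F.log_softmax(self.pi(h), dim=-1)   # (batch, n_mix)
        mu = self.mu(h).view(-1, self.n_mix, self.out_dim)
        sigma = F.softplus(self.sigma(h)).view(-1, self.n_mix, self.out_dim) + 1e-6
        return log_pi, mu, sigma


def mdn_nll(log_pi, mu, sigma, y):
    # Negative log-likelihood of y under a diagonal Gaussian mixture.
    y = y.unsqueeze(1)                               # (batch, 1, out_dim)
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(y).sum(-1)              # (batch, n_mix)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()


if __name__ == "__main__":
    model = LSTMMDN()
    meas = torch.randn(8, 50, 3)     # 8 tracks, 50 samples, e.g. (range, azimuth, elevation)
    target = torch.randn(8, 2)       # ground-truth launch point coordinates
    loss = mdn_nll(*model(meas), target)
    loss.backward()
    print(loss.item())

In such a setup the MDN's most probable mixture component (or the mixture mean) would serve as the predicted launch point, while the mixture itself expresses the uncertainty of the extrapolation.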

Keywords

Trajectory prediction · Mixture Density Network · Long Short-Term Memory Network

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China