Delving Deeper with Dual-Stream CNN for Activity Recognition
Video-based human activity recognition has fascinated researchers in the computer vision community because of its critical challenges and its wide range of applications in the surveillance domain. As a result, the development of activity recognition techniques has accelerated, and there is now a trend toward deep learning-based systems owing to their performance improvements and automatic feature learning capabilities. This paper implements a fusion-based dual-stream deep model for activity recognition, with emphasis on minimizing the amount of pre-processing required and on fine-tuning a pre-trained model. The architecture is trained and evaluated on the standard UCF101 video action benchmark. The proposed approach not only yields results comparable with state-of-the-art methods but also exploits the pre-trained model and image data more effectively.
Keywords: Activity recognition · Deep learning · Spatio-temporal features · Convolutional neural network
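The fusion step of a dual-stream model can be illustrated with a minimal sketch: the spatial (RGB) stream and the temporal (optical-flow) stream each produce class scores, which are combined by weighted averaging of their softmax outputs. This is a generic late-fusion sketch, not the paper's exact scheme; the function names and the temporal weight `w_temporal=2.0` (a heuristic popularized by two-stream networks) are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_streams(spatial_logits, temporal_logits, w_temporal=2.0):
    """Late fusion for a dual-stream model (illustrative sketch).

    Averages the per-class softmax scores of the spatial and temporal
    streams. w_temporal > 1 gives the motion stream more influence; the
    value 2.0 is an assumed heuristic, not a value from the paper.
    """
    s = softmax(np.asarray(spatial_logits, dtype=float))
    t = softmax(np.asarray(temporal_logits, dtype=float))
    fused = (s + w_temporal * t) / (1.0 + w_temporal)
    return int(np.argmax(fused)), fused

# Toy example with 4 action classes.
spatial = np.array([2.0, 0.5, 0.1, 0.0])   # spatial stream favours class 0
temporal = np.array([0.1, 3.0, 0.2, 0.0])  # temporal stream favours class 1
pred, scores = fuse_streams(spatial, temporal)
```

With the temporal stream weighted twice as heavily, the fused prediction follows the motion cue (class 1) even though the appearance stream disagrees; setting `w_temporal=1.0` recovers a plain average of the two streams.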
- 4. Soomro, K., Roshan Zamir, A., & Shah, M. (2012). UCF101: A dataset of 101 human action classes from videos in the wild. CRCV-TR-12-01.
- 6. Wang, P., Zhang, J., & Ogunbona, P. O. (2015). Action recognition from depth maps using deep convolutional neural networks. IEEE Transactions on Human-Machine Systems.
- 7. Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Fei-Fei, L. (2014). Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1725–1732).
- 8. Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. In Proceedings of the Advances in Neural Information Processing Systems (NIPS) (pp. 568–576).
- 9. Feichtenhofer, C., Pinz, A., & Zisserman, A. (2016). Convolutional two-stream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1933–1941).
- 10. Zolfaghari, M., Oliveira, G. L., Sedaghat, N., & Brox, T. (2017). Chained multi-stream networks exploiting pose, motion, and appearance for action classification and detection. https://arxiv.org/abs/1704.00616.
- 11. Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015). Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
- 12. Ji, S., Xu, W., Yang, M., & Yu, K. (2010). 3D convolutional neural networks for human action recognition. In Proceedings of the International Conference on Machine Learning (ICML).
- 14. Baccouche, M., Mamalet, F., Wolf, C., Garcia, C., & Baskurt, A. (2011). Sequential deep learning for human action recognition. In A. A. Salah & B. Lepri (Eds.), HBU, LNCS 7065 (pp. 29–39).
- 15. Varol, G., Laptev, I., & Schmid, C. (2016). Long-term temporal convolutions for action recognition. arXiv:1604.04494.
- 16. Deng, J., Dong, W., Socher, R., Li, L., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In CVPR (pp. 248–255).
- 17. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 770–778).
- 18. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). Going deeper with convolutions. In CVPR (pp. 1–9).