An Improved Human Action Recognition Method Based on 3D Convolutional Neural Network
To address problems in traditional human action recognition algorithms, such as complex feature extraction, low recognition rates, and poor robustness, an improved 3D convolutional neural network method for human action recognition is proposed. The network uses only grayscale images and a fixed number of frames as input. To compensate for the small number of convolutional layers and convolution kernels in the original network, two additional nonlinear convolutional layers are added; this both increases the number of convolution kernels in the network and gives the network stronger abstraction ability. To prevent overfitting, dropout regularization is added to the network. Experiments on the UCF101 dataset achieve an accuracy of 96%. The experimental results show that the improved 3D convolutional neural network model attains higher recognition accuracy in human action recognition.
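The core operations the abstract describes, convolving a stack of grayscale frames with a spatiotemporal (3D) kernel, applying a nonlinearity, and regularizing with dropout, can be illustrated with a minimal NumPy sketch. This is not the paper's network: the frame count, kernel size, and dropout rate below are illustrative assumptions, and a real implementation would use many kernels per layer and train the weights.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D convolution of a frame stack (T, H, W) with a kernel (t, h, w)."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value summarizes a small spatiotemporal neighborhood.
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

def dropout(x, p, rng):
    """Inverted dropout: zero each activation with probability p, rescale the rest."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
frames = rng.random((9, 60, 40))   # 9 grayscale frames; size is illustrative
kernel = rng.random((3, 7, 7))     # 3x7x7 spatiotemporal kernel (assumed size)
feat = np.maximum(conv3d(frames, kernel), 0.0)  # nonlinearity (ReLU)
feat = dropout(feat, 0.5, rng)                  # dropout regularization
```

In a deep-learning framework the same pattern corresponds to stacking 3D convolutional layers with nonlinear activations, with dropout applied during training only.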
Keywords: Human body motion recognition · 3D convolutional neural network · Dropout
This work was supported by the National Key Research and Development Plan of China under Grant No. 2016YFB0801004.
- 2. Sheikh, Y., Sheikh, M., Shah, M.: Exploring the space of a human action. In: Tenth IEEE International Conference on Computer Vision, vol. 1, pp. 144–149. IEEE (2005)
- 3. Yamato, J., Ohya, J., Ishii, K.: Recognizing human action in time-sequential images using hidden Markov model. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 379–385. IEEE Computer Society (1992)
- 4. Brand, M., Oliver, N., Pentland, A.: Coupled hidden Markov models for complex action recognition. In: Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, p. 994. IEEE (1997)
- 7. Ji, S., Xu, W., Yang, M., et al.: 3D convolutional neural networks for automatic human action recognition. US Patent 8,345,984 (2013)
- 8. Soomro, K., Zamir, A.R., Shah, M.: UCF101: a dataset of 101 human actions classes from videos in the wild. Computer Science (2012)
- 10. Wang, L., Qiao, Y., Tang, X.: Action recognition with trajectory-pooled deep-convolutional descriptors. In: Computer Vision and Pattern Recognition, pp. 4305–4314. IEEE (2015)
- 11. Liu, J., Huang, Y., Peng, X., et al.: Multi-view descriptor mining via codeword net for action recognition. In: IEEE International Conference on Image Processing, pp. 793–797. IEEE (2015)