Video-Based Pig Recognition with Feature-Integrated Transfer Learning

  • Jianzong Wang
  • Aozhi Liu
  • Jing Xiao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10996)


Automatic detection and recognition of animals has long been a popular topic, with applications in ecosystem protection, the farming industry, the insurance industry, and other areas. There is still no robust and efficient method for this problem. Deep neural networks, a rapidly developing technology, have shown great power in image processing but suffer from slow training. Transfer learning has recently become popular because it avoids training a network from scratch, which significantly shortens training time. In this paper, we focus on the pig recognition contest organized by a Chinese finance company. Training a VGG-19 network on all video frames yields a prediction accuracy below 60%. Our experiments show that a key to improving accuracy in this video-based pig recognition task is to select the frames carefully with an appropriate algorithm. To combine the strengths of different network architectures, we integrate features from the DPN131, InceptionV3 and Xception networks. We then use the integrated features to train on a labelled dataset of frames extracted from videos of 30 pigs. The resulting model achieves a prediction accuracy of 96.41%. Experiments show that the best of our proposed methods outperforms all classic deep neural networks trained from scratch.
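The two steps the abstract highlights, frame selection and multi-network feature integration, can be sketched as follows. The paper's actual frame-selection algorithm is not described here, so a common variance-of-Laplacian sharpness filter stands in for it as an assumption. Likewise, the feature dimensions (2688 for DPN131, 2048 each for InceptionV3 and Xception, their standard pooled output sizes) and the random "features" are illustrative placeholders; a real pipeline would extract pooled features from the pretrained backbones.

```python
import numpy as np

def laplacian_var(img):
    """Sharpness score: variance of the 4-neighbour Laplacian of a 2-D grayscale array.
    Low values suggest a blurry frame that could be discarded (stand-in selection rule)."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
n_frames, n_classes = 8, 30  # 30 pigs in the contest dataset

# Placeholder per-frame descriptors; in practice these come from the
# globally pooled outputs of the three pretrained backbones.
feat_dpn      = rng.standard_normal((n_frames, 2688))  # DPN131
feat_incv3    = rng.standard_normal((n_frames, 2048))  # InceptionV3
feat_xception = rng.standard_normal((n_frames, 2048))  # Xception

# Feature integration: concatenate the per-network descriptors per frame,
# then train any classifier on top (a random linear head here, for shape checks only).
integrated = np.concatenate([feat_dpn, feat_incv3, feat_xception], axis=1)
W = rng.standard_normal((integrated.shape[1], n_classes))
pred = (integrated @ W).argmax(axis=1)

print(integrated.shape)  # (8, 6784)
print(pred.shape)        # (8,)
```

A sharp frame scores strictly higher than a flat one under `laplacian_var`, so thresholding that score is one plausible way to filter frames before feature extraction.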


Keywords: Pig recognition · Automatic detection · Feature integration · Video analysis



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
