
Detecting Animals in Infrared Images from Camera-Traps

  • Proceedings of the 6th International Workshop
  • Published in: Pattern Recognition and Image Analysis

Abstract

Camera traps mounted on highway bridges capture millions of images that allow investigating animal populations and their behavior. As the manual analysis of such an amount of data is not feasible, automatic systems are of high interest. We present two such approaches: one for automatic outlier classification, and another for the automatic detection of different objects and species within these images. Utilizing modern deep learning algorithms, we can dramatically reduce the engineering effort compared to a classical hand-crafted approach. The results, achieved within one day of work, are very promising and easily reproducible, even without specific computer vision knowledge.
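As a rough, hypothetical illustration of the outlier-classification idea (this is not the trained classifier the paper describes), unusable infrared frames could be screened with a simple batch statistic: flag images whose mean brightness deviates strongly from the rest of the batch, e.g. frames washed out by a triggered flash.

```python
def mean_intensity(image):
    """Mean pixel value of a grayscale image given as a 2D list of ints."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def flag_outliers(images, z_thresh=2.0):
    """Return indices of images whose mean intensity is a statistical
    outlier relative to the batch (e.g., flash-washed frames)."""
    means = [mean_intensity(img) for img in images]
    mu = sum(means) / len(means)
    var = sum((m - mu) ** 2 for m in means) / len(means)
    sigma = var ** 0.5 or 1.0  # guard against a zero-variance batch
    return [i for i, m in enumerate(means) if abs(m - mu) / sigma > z_thresh]

# Nine dark frames and one washed-out frame: the bright one is flagged.
batch = [[[10, 10], [10, 10]] for _ in range(9)] + [[[250, 250], [250, 250]]]
print(flag_outliers(batch))  # → [9]
```

A learned classifier, as the authors use, is far more robust than such a global statistic, but the sketch captures the intent: filter out unusable frames before running detection.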



Author information

Corresponding author

Correspondence to P. Follmann.

Additional information

The article is published in the original.

With permission of the Autobahndirektion Nordbayern, Sachbereich Landschaftsplanung.

Patrick Follmann studied Mathematics in Bioscience at the Technische Universität München (TUM) and received his MSc degree in 2015. He is currently working toward the PhD degree at the research department of MVTec Software GmbH. His research interests span the areas of machine learning and computer vision, with a special focus on image classification, object detection, and instance-aware semantic segmentation.

Bernd Radig is professor emeritus at the Technical University of Munich (TUM), Department of Informatics. His research area is Artificial Intelligence, especially image and image sequence understanding. His PhD thesis (1978) was on the tracking of cars in traffic scenes. Further foci included the analysis of football matches from television broadcasts, recognition of the course of human emotions from image sequences of the face, human-robot communication, analysis of the actions of persons in interior spaces, and the extension of driver assistance systems for road tracking and automatic emergency braking. He currently works on a large German infrastructure project to classify and count animals in the wild, where he leads a team from six universities setting up the visual part of monitoring biodiversity via fully automated monitoring stations to be distributed in representative areas all over Germany.

He studied Physics at the University of Bonn and received his PhD and Venia Legendi from the University of Hamburg, where he was an associate professor and acting head of the chair Cognitive Systems until he obtained a chair for Image Understanding and Knowledge-Based Systems at TUM. Among other external positions, he was the founder and chairman of the Bavarian Research Center for Knowledge-Based Systems and served as a member of the board of the Excellence Cluster Cognition for Technical Systems. He is a lifetime member of the privileged TUM group Emeriti of Excellence.


Cite this article

Follmann, P., Radig, B. Detecting Animals in Infrared Images from Camera-Traps. Pattern Recognit. Image Anal. 28, 605–611 (2018). https://doi.org/10.1134/S1054661818040107
