Abstract
Camera traps are a vital tool that enables ecologists to monitor wildlife over large areas in order to determine population changes, habitat use, and behaviour. As a result, camera-trap datasets are rapidly growing in size. Recent advances in Artificial Neural Networks (ANNs) for image recognition and detection tasks are now being applied to automate camera-trap labelling. An ANN designed for species detection outputs a set of activations representing the observation of a particular species (an individual class) at a particular location and time; these outputs are often used to estimate population sizes in different regions. Here we go one step further and explore how ANNs can be combined with probabilistic graphical models to reason about animal behaviour from the ANN outputs across different geographical locations. By using the output activations from ANNs as data, together with each trap's associated spatial coordinates, we build spatial Bayesian networks to explore species behaviours (how they move and distribute themselves) and interactions (how they distribute themselves in relation to other species). This combination of probabilistic reasoning and deep learning offers many advantages for large camera-trap projects, as well as potential for other remote-sensing datasets that require automated labelling.
Benjamin C. Evans work is funded by NERC (The Natural Environment Research Council).
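To make the pipeline concrete, the following is a minimal, self-contained sketch of the idea described in the abstract: per-image ANN activations are aggregated into trap-level presence/absence states, which can then serve as the discrete variables of a spatial Bayesian network. The example data, column names, species, and the 0.5 activation threshold are all hypothetical, and the final conditional-probability query is a simplified stand-in for full Bayesian network structure learning; it is not the authors' implementation.

# A minimal sketch (not the authors' implementation) of turning per-image ANN
# activations into trap-level species presence states and estimating a simple
# species-interaction probability. All data and thresholds are hypothetical.

import pandas as pd

# Hypothetical per-image classifier outputs: one row per image, with the
# trap's coordinates (which a spatial model would use to link neighbouring
# traps) and the softmax activation for two species.
detections = pd.DataFrame({
    "trap_id":   ["T1", "T1", "T2", "T2", "T3", "T3", "T3"],
    "x":         [0.0,  0.0,  1.0,  1.0,  2.0,  2.0,  2.0],
    "y":         [0.0,  0.0,  0.5,  0.5,  1.0,  1.0,  1.0],
    "day":       [1,    2,    1,    2,    1,    2,    3],
    "p_macaque": [0.92, 0.10, 0.80, 0.75, 0.05, 0.60, 0.20],
    "p_leopard": [0.03, 0.85, 0.10, 0.40, 0.90, 0.02, 0.70],
})

# Aggregate activations to a trap-day level and threshold into binary
# presence/absence states: the discrete variables a Bayesian network would use.
THRESHOLD = 0.5  # assumed activation cut-off, chosen for illustration only
daily = (detections
         .groupby(["trap_id", "day"])[["p_macaque", "p_leopard"]]
         .max()
         .gt(THRESHOLD)
         .rename(columns={"p_macaque": "macaque", "p_leopard": "leopard"})
         .reset_index())

# A crude interaction query: P(leopard present | macaque present/absent),
# estimated from co-occurrence counts on the same trap-day. A full spatial
# Bayesian network would instead learn a graph over traps and species, but
# the underlying counting of discretised activations is conceptually similar.
cond = daily.groupby("macaque")["leopard"].mean()
print("P(leopard | macaque absent)  ~", round(cond.get(False, float("nan")), 2))
print("P(leopard | macaque present) ~", round(cond.get(True, float("nan")), 2))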
Cite this paper
Evans, B.C., Tucker, A., Wearn, O.R., Carbone, C. (2020). Reasoning About Neural Network Activations: An Application in Spatial Animal Behaviour from Camera Trap Classifications. In: Koprinska, I., et al. ECML PKDD 2020 Workshops. ECML PKDD 2020. Communications in Computer and Information Science, vol 1323. Springer, Cham. https://doi.org/10.1007/978-3-030-65965-3_2