Deep Distance Transform to Segment Visually Indistinguishable Merged Objects

  • Sören Klemm
  • Xiaoyi Jiang
  • Benjamin Risse
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11269)

Abstract

We design a two-stage image segmentation method comprising a neural network that estimates a distance transform, followed by watershed segmentation. It allows segmentation and tracking of colliding objects without any assumptions about object behavior or global object appearance, since the machine learning stage is trained on contour information only. Our method is also capable of segmenting partially vanishing contact surfaces of visually merged objects. The evaluation is performed on a dataset of collisions of Drosophila melanogaster larvae, manually labeled with pixel accuracy. The proposed pipeline requires no manual parameter tuning and operates at high frame rates. We provide a detailed evaluation of the neural network design, including 1200 trained networks.
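The two-stage idea from the abstract — estimate a distance transform on the foreground, then split touching objects by flooding from its maxima — can be illustrated with classical operations. The sketch below is not the authors' method: it substitutes an exact Euclidean distance transform for the learned prediction, and a nearest-seed (Voronoi) assignment of foreground pixels for a full watershed, using only NumPy/SciPy. The function name `split_merged_objects` and the `peak_frac` threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def split_merged_objects(mask, peak_frac=0.7):
    """Split touching objects in a binary mask (2D bool array).

    Stage 1 stand-in: an exact Euclidean distance transform
    (the paper instead predicts this map with a neural network).
    Stage 2 stand-in: seeds are the connected high-distance cores;
    every foreground pixel is assigned to its nearest seed, which
    approximates a marker-based watershed on the distance map.
    """
    dist = ndi.distance_transform_edt(mask)
    # Object cores: regions where the distance is close to its maximum.
    seeds, _ = ndi.label(dist > peak_frac * dist.max())
    # For every pixel, find the coordinates of the nearest seed pixel.
    idx = ndi.distance_transform_edt(
        seeds == 0, return_distances=False, return_indices=True
    )
    # Propagate seed labels to the whole image, then keep only foreground.
    return seeds[tuple(idx)] * mask
```

For two equally sized objects touching along a narrow contact surface, the distance map has one maximum per object and a saddle at the contact, so the two cores stay separate and the merged blob is split into two labels.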

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Faculty of Mathematics and Computer Science, University of Münster, Münster, Germany