
Triplet Feature Learning on Endoscopic Video Manifold for Online GastroIntestinal Image Retargeting

  • Yun Gu
  • Benjamin Walter
  • Jie Yang
  • Alexander Meining
  • Guang-Zhong Yang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Optical biopsy is a widely used technique for gastrointestinal oncological analysis. Due to practical constraints on tissue handling, biopsy is limited to a few target sites, so retargeting of optical biopsy sites is fundamental to examination of the gastrointestinal tract. Cast as an online object tracking problem, learning intrinsic features is critical for robust retargeting. In this paper, we propose an online retargeting framework for gastrointestinal biopsy. During offline training, an endoscopic video manifold is built to mine latent triplets, which are used to train a SiamFC tracker; during online tracking, both short-term and long-term templates are used to locate the biopsy site in the candidate image. To handle out-of-view cases, reliability measurement and re-detection modules are introduced. Experiments on in vivo gastrointestinal videos demonstrate the effectiveness of the proposed method and its robustness to visual variations.
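
A minimal sketch of the triplet-based embedding learning described above, assuming a PyTorch implementation; the backbone, margin, and the way anchor/positive/negative patches are mined from the endoscopic video manifold are illustrative placeholders, not the authors' implementation.

    # Sketch: train a small Siamese embedding with a triplet loss,
    # as one would for a SiamFC-style tracker (placeholders throughout).
    import torch
    import torch.nn as nn

    class EmbeddingNet(nn.Module):
        """Small convolutional backbone producing unit-norm patch embeddings."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )

        def forward(self, x):
            z = self.features(x).flatten(1)
            return nn.functional.normalize(z, dim=1)

    net = EmbeddingNet()
    triplet_loss = nn.TripletMarginLoss(margin=0.2)  # margin value is a placeholder
    optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

    # Anchor/positive/negative patches would come from triplets mined on the
    # video manifold; random tensors stand in for them in this sketch.
    anchor = torch.randn(8, 3, 127, 127)
    positive = torch.randn(8, 3, 127, 127)
    negative = torch.randn(8, 3, 127, 127)

    loss = triplet_loss(net(anchor), net(positive), net(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The same embedding network would then be shared by the short-term and long-term template branches at tracking time, with the re-detection module invoked when the reliability measure flags an out-of-view case.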

Keywords

Gastrointestinal images · Biopsy site retargeting · Siamese neural networks · Triplet mining

Notes

Acknowledgement

This research is partly supported by NSFC (No. 61572315), the Committee of Science and Technology, Shanghai, China (No. 17JC1403000), and the 973 Plan, China (No. 2015CB856004).

References

  1. Kalal, Z., Mikolajczyk, K., Matas, J.: Tracking-learning-detection. IEEE TPAMI 34(7), 1409–1422 (2012)
  2. Ma, C., Yang, X., Zhang, C., Yang, M.H.: Long-term correlation tracking. In: CVPR, pp. 5388–5396 (2015)
  3. Bolme, D.S., Beveridge, J.R., Draper, B.A., Lui, Y.M.: Visual object tracking using adaptive correlation filters. In: CVPR, pp. 2544–2550. IEEE (2010)
  4. Bertinetto, L., Valmadre, J., Golodetz, S., Miksik, O., Torr, P.H.: Staple: complementary learners for real-time tracking. In: CVPR, pp. 1401–1409 (2016)
  5. Henriques, J.F., Caseiro, R., Martins, P., Batista, J.: High-speed tracking with kernelized correlation filters. IEEE TPAMI 37(3), 583–596 (2015)
  6. Danelljan, M., Bhat, G., Shahbaz Khan, F., Felsberg, M.: ECO: efficient convolution operators for tracking. In: CVPR, pp. 6638–6646 (2017)
  7. Ye, M., Giannarou, S., Meining, A., Yang, G.Z.: Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations. Med. Image Anal. 30, 144–157 (2016)
  8. Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A., Torr, P.H.S.: Fully-convolutional siamese networks for object tracking. In: Hua, G., Jégou, H. (eds.) ECCV 2016. LNCS, vol. 9914, pp. 850–865. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48881-3_56
  9. Valmadre, J., Bertinetto, L., Henriques, J., Vedaldi, A., Torr, P.H.: End-to-end representation learning for correlation filter based tracking. In: CVPR, pp. 2805–2813 (2017)
  10. Yun, S., Choi, J., Yoo, Y., Yun, K., Young Choi, J.: Action-decision networks for visual tracking with deep reinforcement learning. In: CVPR, pp. 2711–2720 (2017)
  11. Dong, X., Shen, J.: Triplet loss in siamese network for object tracking. In: ECCV, pp. 459–474 (2018)
  12. Atasoy, S., Mateus, D., Meining, A., Yang, G.Z., Navab, N.: Endoscopic video manifolds for targeted optical biopsy. IEEE TMI 31(3), 637–653 (2012)
  13. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numerische Mathematik 1(1), 269–271 (1959)
  14. Lee, H., Choi, S., Kim, C.: A memory model based on the siamese network for long-term tracking. In: Leal-Taixé, L., Roth, S. (eds.) ECCV 2018. LNCS, vol. 11129, pp. 100–115. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11009-3_5
  15. Kiani Galoogahi, H., Fagg, A., Lucey, S.: Learning background-aware correlation filters for visual tracking. In: ICCV, pp. 1135–1143 (2017)
  16. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS, pp. 1097–1105 (2012)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
  2. Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
  3. Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK
  4. Ulm University, Ulm, Germany
