Unsupervised Temporal Video Segmentation as an Auxiliary Task for Predicting the Remaining Surgery Duration

  • Dominik Rivoir (corresponding author)
  • Sebastian Bodenstedt
  • Felix von Bechtolsheim
  • Marius Distler
  • Jürgen Weitz
  • Stefanie Speidel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11796)

Abstract

Estimating the remaining surgery duration (RSD) during surgical procedures can be useful for OR planning and anesthesia dose estimation. With the recent success of deep learning-based methods in computer vision, several neural network approaches have been proposed for fully automatic RSD prediction based solely on visual data from the endoscopic camera. We investigate whether RSD prediction can be improved using unsupervised temporal video segmentation as an auxiliary learning task. In contrast to previous work, which used supervised surgical phase recognition as an auxiliary task, we avoid the need for manual annotations by proposing a similar but unsupervised learning objective which clusters video sequences into temporally coherent segments. In multiple experimental setups, results obtained by learning the auxiliary task are incorporated into a deep RSD model through feature extraction, pretraining, or regularization. Further, we propose a novel loss function for RSD training which attempts to counteract unfavorable characteristics of the RSD ground truth. Using our unsupervised method as an auxiliary task for RSD training, we outperform other self-supervised methods and are comparable to the supervised state-of-the-art. Combined with the novel RSD loss, we slightly outperform the supervised approach.
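The paper itself provides no code in this abstract; the following is only a minimal sketch of the general idea of producing temporally coherent pseudo-segments from per-frame embeddings, under the assumption of a simple k-means clustering with the normalized frame index appended as an extra feature. The function name temporal_pseudo_labels, the parameter time_weight, and the choice of k-means are illustrative assumptions, not the authors' method.

# Minimal sketch (not the authors' implementation): cluster per-frame
# embeddings into temporally coherent pseudo-segments by appending the
# normalized frame index to each feature vector before k-means.
# `temporal_pseudo_labels`, `n_segments`, and `time_weight` are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans


def temporal_pseudo_labels(frame_features: np.ndarray,
                           n_segments: int = 8,
                           time_weight: float = 1.0) -> np.ndarray:
    """Assign each frame of one video to one of `n_segments` clusters.

    frame_features: array of shape (T, D), one embedding per frame.
    Returns an array of shape (T,) with cluster indices that could serve
    as pseudo-labels for an auxiliary classification head.
    """
    t = frame_features.shape[0]
    # Normalized temporal position in [0, 1]; weighting it pushes clusters
    # toward contiguous temporal blocks rather than scattered frame sets.
    time_feature = time_weight * np.linspace(0.0, 1.0, t)[:, None]
    augmented = np.concatenate([frame_features, time_feature], axis=1)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(augmented)
    return labels


if __name__ == "__main__":
    # Toy example: 300 frames with 128-dimensional embeddings.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(300, 128))
    print(temporal_pseudo_labels(feats, n_segments=5)[:20])

In such a setup, the resulting cluster indices would play the role that manual phase labels play in supervised auxiliary-task training, e.g., as targets for pretraining or regularizing an RSD model, without requiring annotations.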

Keywords

Unsupervised learning · Representation learning · Remaining surgery duration · Temporal segmentation · Computer-assisted surgery

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Dominik Rivoir (corresponding author) 1, 3
  • Sebastian Bodenstedt 1
  • Felix von Bechtolsheim 2
  • Marius Distler 2
  • Jürgen Weitz 2, 3
  • Stefanie Speidel 1, 3

  1. Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany
  2. Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
  3. Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany