A markerless automatic deformable registration framework for augmented reality navigation of laparoscopic partial nephrectomy

  • Xiaohui Zhang
  • Junchen Wang
  • Tianmiao Wang
  • Xuquan Ji
  • Yu Shen
  • Zhen Sun
  • Xuebin Zhang
Original Article


Purpose Video see-through augmented reality (VST-AR) navigation for laparoscopic partial nephrectomy (LPN) can enhance surgeons' intraoperative perception by visualizing surgical targets and critical structures of the kidney. Image registration is the main challenge in this procedure. Existing registration methods in laparoscopic navigation systems suffer from limitations such as manual alignment, invasive fixation of external markers, reliance on external tracking devices with bulky sensors, and a lack of deformation compensation. To address these issues, we present a markerless automatic deformable registration framework for VST-AR navigation of LPN.


Methods Dense stereo matching with 3D reconstruction, automatic segmentation, and surface stitching are combined to obtain a large, dense intraoperative point cloud of the renal surface. A coarse-to-fine deformable registration then aligns the intraoperative point cloud with the preoperative model automatically and precisely, using the iterative closest point algorithm followed by the coherent point drift algorithm. Kidney phantom experiments and in vivo experiments were performed to evaluate the accuracy and effectiveness of our approach.
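The coarse (rigid) stage of the coarse-to-fine registration can be sketched as a standard iterative closest point loop: alternate nearest-neighbour matching with a least-squares (SVD/Kabsch) pose estimate. This is a minimal illustrative NumPy sketch, not the authors' implementation; the fine non-rigid stage (coherent point drift) is omitted, and brute-force matching stands in for an accelerated neighbour search.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired points
    src onto dst (Kabsch/SVD solution)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    """Rigidly align src to dst by alternating closest-point matching
    and least-squares pose estimation; returns the total transform and
    the aligned copy of src."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        err = np.sqrt(d2.min(1)).mean()
        if abs(prev_err - err) < tol:   # matching error has converged
            break
        prev_err = err
    # recover the single transform mapping the original src onto dst
    R_tot, t_tot = best_rigid_transform(src, cur)
    return R_tot, t_tot, cur
```

In the framework described above, the rigid result of this stage would then initialize the non-rigid coherent point drift refinement that compensates for tissue deformation.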


Results The automatic segmentation achieved an average accuracy of 94.9%. The mean target registration error in the phantom experiments was 1.28 ± 0.68 mm (root mean square error). In vivo experiments showed that the tumor location was identified successfully by superimposing the tumor model on the laparoscopic view.
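The accuracy figure above is a target registration error summarised as a root mean square over fiducial targets. A minimal sketch of that metric (the point values in the test are illustrative, not the paper's data):

```python
import numpy as np

def rms_tre(registered_targets, ground_truth_targets):
    """Root-mean-square Euclidean distance (e.g. in mm) between
    registered target positions and their ground-truth locations,
    both given as N x 3 arrays."""
    d = np.linalg.norm(np.asarray(registered_targets)
                       - np.asarray(ground_truth_targets), axis=1)
    return float(np.sqrt((d ** 2).mean()))
```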


Conclusion Experimental results demonstrate that the proposed framework can automatically and accurately overlay comprehensive preoperative models on deformable soft organs in a VST-AR manner, without extra intraoperative imaging modalities or external tracking devices, and indicate its potential for clinical use.


Keywords Surgical navigation · Augmented reality · Video see-through · Dense 3D reconstruction · Deformable registration



This work was supported by the National Natural Science Foundation of China (Grant No. 61701014).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.



Copyright information

© CARS 2019

Authors and Affiliations

  1. School of Mechanical Engineering and Automation, Beihang University, Beijing, China
  2. Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
  3. Department of Urology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
