Abstract
In this paper we present an autonomous camera-holder robotic system for minimally invasive surgery (MIS). The proposed system is composed of a 7-DoF collaborative robot, i.e. a Franka robot, holding the surgical camera, and a motion capture system, i.e. a Qualisys system, tracking the surgical tool movements online. The robot adapts its movements to continuously monitor the surgical gestures, based on the tool-tip coordinates provided by the Qualisys system. The surgical camera is inserted into the patient’s body through a surgical device, i.e. a trocar, generating a kinematic constraint commonly known as the Remote Center of Motion (RCM) constraint. To preserve patient safety, the RCM constraint is guaranteed by the control approach. Moreover, a compliance control law is implemented to smooth the robot movements and to reduce the efforts generated by human-robot interactions. The Robot Operating System (ROS) framework has been used to establish the communication between the robot and the Qualisys system, with the UDP protocol for data exchange.
1 Introduction
The use of robotized assistant systems for medical applications has grown rapidly in recent years [1]. In the context of minimally invasive surgery (MIS), researchers and companies are developing new robotic systems to enter a market dominated by, among others, the well-known Da Vinci surgical system [2].
Besides the functionalities provided by a fully teleoperated system, such as the Da Vinci system or other recently proposed systems [3], surgeons have evidenced other needs for which a robotic assistant could also provide a solution. For instance, in classical minimally invasive surgical procedures, the surgeon usually uses both hands to manipulate the surgical instruments (scissors, forceps, needle holders, etc.) during the task execution. These instruments are inserted into the patient’s body through trocar devices placed at the desired incision points. To obtain visual feedback of the surgical gestures, an endoscopic camera is also inserted into the patient’s body and is held by a medical staff assistant or a medical student [4]. During the procedure, the surgeon continuously gives orders to the assistant to move the camera so that it correctly follows the surgical instrument tips. However, this method does not filter the assistant’s hand tremor, and it generates a lack of precision, time delays in the surgical task execution, and increased stress for the surgeon. Moreover, these difficulties can be significantly aggravated by the assistant’s lack of expertise. To overcome this problem, robotic solutions have been proposed over the past years. The Automated Endoscopic System for Optimal Positioning (AESOP) was a robotic camera holder developed by Computer Motion [5]. This robot provided different control modes, i.e. manual motion, joystick/foot pedal control, or voice control [6], and was tested in different laparoscopic surgeries [7]. Another proposed solution to replace the assistant is the EndoAssist camera-holder robotic device, whose movements are directly controlled by the head movements of the surgeon, using an infrared device attached to the surgeon’s head [8]. It is worth mentioning that both the AESOP and the EndoAssist systems were discontinued due to a lack of commercial interest.
Similarly, FreeHand is a lightweight robotic camera-holder system designed with ergonomic features. The direction of the camera movements is chosen by the surgeon through a head movement (using a headset) and the movement is activated using a footswitch [9]. More recently, the EndoControl company produced ViKY, a 3-DoF robotized camera holder [10]. The robot is manually positioned over the trocar and fixed through a poly-articulated support. The surgeon can control the camera movements either by voice commands or through a footswitch. In [11], a 7-DoF commercial robot is proposed as a camera holder, where the surgeon’s gaze gestures are used to control the camera movements.
Although the solutions presented above replace the human assistant, the surgeon is still required to continuously send commands to the robot according to the desired camera movements. To cope with this limitation, an autonomous robot-assistant camera-holder system for MIS is proposed in this paper, using a 7-DoF collaborative robot as the camera holder. Due to the kinematic constraint generated by the trocar device, creating a Remote Center of Motion (RCM) constraint [12], the robot orients the camera around the RCM point according to the movements of the instrument tips. Most studies on automatic instrument position identification rely on image recognition algorithms [13,14,15,16]. Unlike these methods, the presented work proposes to use a Motion Capture (MoCap) system composed of a set of 8 high-resolution cameras, i.e. a Qualisys system (https://www.qualisys.com), for online instrument pose identification.
Several approaches have been proposed to consider the RCM constraint in the control of serial redundant robots, for instance by exploiting the robot’s null space [17] or by defining the task-space coordinates in terms of the RCM coordinates [18]. In this work, a Cartesian compliance control approach has been implemented [19], where the origin of the task-space coordinates is defined along the camera axis, so that it delimits the segment of the camera that is inserted into the patient’s body. Thus, a fixed desired position is defined so as to coincide with the RCM constraint, whereas the desired orientation is automatically calculated to focus the camera towards the instrument tips. The zoom functionality is directly managed through the camera system. The main advantage of using a compliance control approach is to provide smooth robot movements, avoiding sudden changes of velocity, and to reduce the contact forces generated along the trocar.
This paper is organized as follows. The description of the robot-assistant platform is presented in Sect. 2. Then, the overall control approach, which determines the desired instrument-tip position with respect to the robot reference frame, as well as the torque control approach implemented in the robot, is presented in Sect. 3. The last section presents the conclusions of this work.
2 Robot-Assistant Platform
The proposed robot-assistant platform is mainly composed of a 7-DoF Franka Emika robot and a Qualisys MoCap system. The platform is presented in Fig. 1 in a surgical training environment. A trainee surgeon executes training tasks with a pelvic trainer, whereas the 7-DoF collaborative robot, i.e. the Franka Emika, holds and orients the camera according to the instrument movements. During the task execution, the surgeon receives visual feedback from the surgical camera, keeping the task execution visible at all times.
The MoCap system is composed of 8 high-resolution IR cameras that identify the positions of 4 markers fixed on each surgical instrument. These markers allow the overall position and orientation of each instrument to be reconstructed as a rigid body with respect to the Qualisys reference frame, which is fixed to the base of the robot. Knowing the relative distances between the markers and the tip of each instrument, it is possible to identify in real time the position coordinates of each instrument tip. Figure 2 shows a screenshot of the instrument bodies reconstructed in real time by the MoCap system, where the relative reference frame of each instrument has been placed at its tip.
3 Control Approach
Communication between the Qualisys system and the Franka robot has been established using the Robot Operating System (ROS) framework, with the User Datagram Protocol (UDP) for data exchange; the Franka controller and the Qualisys laptop run at 1 kHz and 0.1 kHz, respectively.
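As an illustration of such a UDP exchange only (the actual Qualisys packet layout and the ROS node structure are not detailed in this paper; the format below, three little-endian doubles per tip position, is hypothetical), a minimal receiver could look like:

```python
import socket
import struct

# Hypothetical packet layout: one tip position as three little-endian doubles (x, y, z).
TIP_FORMAT = "<3d"

def parse_tip_packet(data: bytes) -> tuple:
    """Unpack a single instrument-tip position from a raw UDP datagram."""
    return struct.unpack(TIP_FORMAT, data[:struct.calcsize(TIP_FORMAT)])

def receive_tip(port: int = 22222, timeout: float = 1.0) -> tuple:
    """Block until one datagram arrives on the given local port, then parse it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", port))
        sock.settimeout(timeout)
        data, _ = sock.recvfrom(1024)
    return parse_tip_packet(data)
```

In the platform described here, such a receiver would run on the robot side at the lower MoCap rate, with the 1 kHz controller reusing the latest received sample between updates.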
In the following, the mapping between the MoCap and Franka coordinates is described. Then, the compliance control approach implemented to guarantee smooth movements is presented.
3.1 Coordinates Transformation
Figure 3 shows the reference frames fixed to each component of the platform. Concerning the camera-holder robot, \( \left\{ F \right\} \) has been defined as the fixed reference frame attached to the base of the robot. Reference frame \( \left\{ C \right\} \) is fixed to the camera and its origin is located along the camera axis. Before starting the surgical procedure, a gravity compensation law is activated in the robot controller to allow a member of the medical staff to manually move the robot to the trocar and insert the camera into the patient’s body (represented by the pelvic trainer in Fig. 1), ensuring that the origin of \( \left\{ C \right\} \) coincides with the RCM generated by the trocar. In the next section, a Cartesian control law for the robot is defined, where the task-space coordinates are related to the fixed reference frame \( \left\{ F \right\} \) and defined by the position coordinates of the origin of \( \left\{ C \right\} \) and its orientation.
The reference frame of the MoCap system, denoted by \( \left\{ Q \right\} \), has been fixed to the base of the robot. The homogeneous transformation matrix of \( \left\{ Q \right\} \) with respect to \( \left\{ F \right\} \) is defined as follows,
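assuming the axes of \( \left\{ Q \right\} \) and \( \left\{ F \right\} \) are aligned (a reconstruction, since only a constant translation between the two frames is mentioned):

\[ T_{Q}^{F} = \left[ {\begin{array}{*{20}c} {I_{3} } & {P_{Q}^{F} } \\ {0_{1 \times 3} } & 1 \\ \end{array} } \right] \]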
where \( P_{Q}^{F} \) is the constant position of the origin of \( \left\{ Q \right\} \) with respect to \( \left\{ F \right\} \).
The reference frames \( \left\{ R \right\} \) and \( \left\{ L \right\} \) are attached to the right-handed and left-handed instruments, respectively, with the origin located at each instrument tip. In order to simultaneously focus the camera towards the two instrument tips, a virtual focus target point \( P_{T}^{Q} \) has been calculated as the midpoint between the two tips with respect to \( \left\{ Q \right\} \),
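i.e. the average of the two tip positions:

\[ P_{T}^{Q} = \frac{1}{2}\left( {P_{R}^{Q} + P_{L}^{Q} } \right) \]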
where \( P_{R}^{Q} \) and \( P_{L}^{Q} \) are the position coordinates of the origin of \( \left\{ R \right\} \) and \( \left\{ L \right\} \) with respect to \( \left\{ Q \right\} \), respectively. Then, the target point can be mapped to the robot reference frame \( \left\{ F \right\} \) through the transformation matrix \( T_{Q}^{F} \),
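using homogeneous coordinates:

\[ \left[ {\begin{array}{*{20}c} {P_{T}^{F} } \\ 1 \\ \end{array} } \right] = T_{Q}^{F} \left[ {\begin{array}{*{20}c} {P_{T}^{Q} } \\ 1 \\ \end{array} } \right] \]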
The desired camera orientation is such that the z-axis of \( \left\{ C \right\} \) is oriented towards the target point \( P_{T}^{F} \). Defining the target unit vector \( \hat{V}_{T}^{F} = \frac{{P_{T}^{F} - P_{C}^{F} }}{{\left\| {P_{T}^{F} - P_{C}^{F} } \right\|}} \), the error between the current and the desired camera orientation can be represented as an angle error \( \theta_{e} \) measured around an axis \( e_{axis}^{F} \), based on the axis-angle representation. Thus, the orientation error \( e_{o}^{F} \in {\Re }^{3} \) can be written as follows,
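following the axis-angle construction (a reconstruction consistent with the definitions above):

\[ \theta_{e} = \arccos \left( {\hat{z}_{C}^{F} \cdot \hat{V}_{T}^{F} } \right), \quad e_{axis}^{F} = \frac{{\hat{z}_{C}^{F} \times \hat{V}_{T}^{F} }}{{\left\| {\hat{z}_{C}^{F} \times \hat{V}_{T}^{F} } \right\|}}, \quad e_{o}^{F} = \theta_{e} \, e_{axis}^{F} \]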
where \( \hat{z}_{C}^{F} \) is the unit vector of the z-axis in \( \left\{ C \right\} \), with respect to \( \left\{ F \right\} \).
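A numerical sketch of this axis-angle error computation (using NumPy; function and variable names are illustrative, not taken from the implementation described in the paper):

```python
import numpy as np

def orientation_error(z_c: np.ndarray, p_c: np.ndarray, p_t: np.ndarray) -> np.ndarray:
    """Axis-angle error steering the camera z-axis towards the target point.

    z_c : unit z-axis of {C}, expressed in {F}
    p_c : origin of {C} in {F}
    p_t : target point P_T^F
    """
    v_t = (p_t - p_c) / np.linalg.norm(p_t - p_c)            # target unit vector V_T^F
    cross = np.cross(z_c, v_t)
    norm = np.linalg.norm(cross)
    if norm < 1e-9:                                          # axes already aligned
        return np.zeros(3)
    axis = cross / norm                                      # rotation axis e_axis^F
    angle = np.arccos(np.clip(np.dot(z_c, v_t), -1.0, 1.0))  # angle error theta_e
    return angle * axis                                      # e_o^F = theta_e * axis
```

Clipping the dot product guards against round-off pushing its value marginally outside \([-1, 1]\) before the arccosine.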
Finally, the position error \( e_{P}^{F} \in {\Re }^{3} \) can be defined as the difference between the origin position of \( \left\{ C \right\} \) with respect to \( \left\{ F \right\} \), \( P_{C}^{F} \), and the RCM position \( P_{RCM}^{F} \) generated by the trocar,
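that is:

\[ e_{P}^{F} = P_{C}^{F} - P_{RCM}^{F} \]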
3.2 Robot Control
The proposed camera-holder robot, i.e. the Franka Emika, is a 7-DoF (\( n = 7 \)) collaborative robot designed to coexist and share a common workspace with humans. Since the robot is torque-controlled, a compliant control strategy [19] can be implemented. As explained before, the compliance feature smooths the robot movements, avoiding sudden gestures, and reduces the intensity of the interaction forces at the insertion position of the camera into the patient’s body, i.e. the trocar position. Thus, it is possible to define the torque control input \( T_{i} \) as follows:
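a standard Cartesian compliance law consistent with the terms detailed below, where \( J(q_{c}) \) denotes the robot Jacobian:

\[ T_{i} = J^{T} (q_{c})\left( { - K_{p_{x} } e_{X} - K_{d_{x} } \dot{X}_{c} } \right) + \hat{C}\left( {q_{c} ,\dot{q}_{c} } \right) + \hat{g}\left( {q_{c} } \right) \]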
The Cartesian error \( e_{X} \in {\Re }^{6} \) is composed of the orientation and position errors defined in (5) and (6), i.e. \( e_{X} = \left[ {\begin{array}{*{20}c} {e_{p}^{F} } & {e_{o}^{F} } \\ \end{array} } \right]^{T} \). The current Cartesian velocity is represented by \( \dot{X}_{c} \in {\Re }^{6} \). The torque input compensates for the inertial and dynamic effects by including an estimation of the Coriolis and centrifugal torques \( \hat{C}\left( {q_{c} ,\dot{q}_{c} } \right) \in {\Re }^{n} \), as well as the gravity compensation torques \( \hat{g}\left( {q_{c} } \right) \in {\Re }^{n} \), calculated according to the current joint position vector \( q_{c} \in {\Re }^{n} \).
The degree of compliance is regulated along each axis (in position and orientation) by the choice of the Cartesian stiffness diagonal matrix \( K_{{p_{x} }} \) and the Cartesian damping diagonal matrix \( K_{{d_{x} }} \).
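A sketch of this control law in NumPy (the Jacobian, velocities, and dynamics estimates would come from the robot model and state; here they are passed in as plain arrays, and the function name is illustrative):

```python
import numpy as np

def compliance_torque(J, e_x, xdot_c, C_hat, g_hat, kp, kd):
    """Cartesian compliance control torque.

    J      : 6 x n geometric Jacobian
    e_x    : 6-vector Cartesian error [e_p; e_o]
    xdot_c : 6-vector current Cartesian velocity
    C_hat  : n-vector Coriolis/centrifugal torque estimate
    g_hat  : n-vector gravity torque estimate
    kp, kd : 6-vectors of per-axis stiffness and damping gains
    """
    Kp = np.diag(kp)                      # Cartesian stiffness diagonal matrix
    Kd = np.diag(kd)                      # Cartesian damping diagonal matrix
    wrench = -Kp @ e_x - Kd @ xdot_c      # restoring Cartesian wrench
    return J.T @ wrench + C_hat + g_hat   # joint torque command
```

Lowering the diagonal entries of `kp` along a given axis makes the robot softer along that axis, which is the mechanism used here to limit interaction forces at the trocar.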
Preliminary experiments were carried out on a pelvic trainer during simple pick-and-place exercises, where the practitioner had to move objects and insert them onto a metallic stem. Some screenshots of the proposed tasks are provided in Fig. 4. As can be observed, the robot continuously orients the camera so that the two instrument tips always remain in view.
4 Conclusion
In this paper, an autonomous robot-assistant camera holder for minimally invasive surgery was presented. The proposed system is based on real-time tracking of the surgical tool movements, avoiding the continuous request of surgeon instructions, contrary to existing systems. An additional advantage lies in the use of a collaborative robot, allowing the smooth motion provided by the implemented compliant control. The presented solution has been preliminarily validated through experimental tests performed on a pelvic trainer during simple pick-and-place exercises. Future work will focus on the validation of the autonomous robot-assistant camera holder during real suturing tasks on cadavers.
References
Hoeckelmann, M., Rudas, I.J., Fiorini, P., Kirchner, F., Haidegger, T.: Current capabilities and development potential in surgical robotics. Int. J. Adv. Robot. Syst. 12, 16 (2015)
Intuitive Surgical Inc. History – da Vinci surgery is born August 2012. http://www.intuitivesurgical.com/company/history/is_born.html. Accessed 10 Oct 2014
Ateş, G., Majani, R., Dede, Mİ.C.: Design of a teleoperation scheme with a wearable master for minimally invasive surgery. In: Carbone, G., Ceccarelli, M., Pisla, D. (eds.) New Trends in Medical and Service Robotics: Advances in Theory and Practice, pp. 45–53. Springer, Cham (2019)
Abbas, P., Holder-Haynes, J., Taylor, D.J., et al.: More than a camera holder: teaching surgical skills to medical students. J. Surg. Res. 195(2), 385–389 (2015)
Taylor, R.H., Menciassi, A., Fichtinger, G., Dario, P.: Medical robotics in computer-integrated surgery. In: Siciliano, B., Khatib, O. (eds.) Handbook of Robotics, Ch. 66, 2nd edn., pp. 1199–1222. Springer (2008)
Allaf, M.E., et al.: Laparoscopic visual field: Voice versus foot pedal interfaces for control of the aesop robot. Surg. Endosc. 12, 1415–1418 (1998)
Kraft, B.M., Jäger, C., Kraft, K., Leibl, B.J., Bittner, R.: The AESOP robot system in laparoscopic surgery: Increased risk or advantage for surgeon and patient? Surg. Endosc. Other Int. Tech. 18, 1216–1223 (2004)
Gilbert, J.M.: The endoassist robotic camera holder as an aid to the introduction of laparoscopic colorectal surgery. Ann. Roy. Coll. Surg. Engl. 91(5), 389–393 (2009)
Stolzenburg, J., Franz, T., Kallidonis, P., Minh, D., Dietel, A., Hicks, J., Nicolaus, M., Al-Aown, A., Liatsikos, E.: Comparison of the FreeHand® robotic camera holder with human assistants during endoscopic extraperitoneal radical prostatectomy. BJU Int. 107, 970–974 (2011)
Voros, S., Haber, G.P., Menudet, J.F., Long, J.A., Cinquin, P.: ViKY robotic scope holder: initial clinical experience and preliminary results using instrument tracking. IEEE/ASME Trans. Mechatron. 6, 879–886 (2010)
Fujii, K., Salerno, A., Sriskandarajah, K., Kwok, K., Shetty, K., Yang, G.: Gaze contingent Cartesian control of a robotic arm for laparoscopic surgery. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, pp. 3582–3589 (2013)
Kuo, C.-H., Dai, J.S., Dasgupta, P.: Kinematic design considerations for minimally invasive surgical robots: an overview. Int. J. Med. Rob. Comput. Assist. Surg. 8(2), 127–145 (2012)
Nageotte, F., Zanne, P., Doignon, C., De Mathelin, M.: Visual servoing-based endoscopic path following for robot-assisted laparoscopic surgery. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2364–2369 (2006)
Wei, G.-Q., Arbter, K., Hirzinger, G.: Real-time visual servoing for laparoscopic surgery. IEEE Eng. Med. Biol. Mag. 16, 40–45 (1997)
Voros, S., Long, J., Cinquin, P.: Automatic localization of laparoscopic instruments for the visual servoing of an endoscopic camera holder. Med. Image Comput. Comput. Assist. Interv. 4190, 535–542 (2006)
Krupa, A., Gangloff, J., Doignon, C., Mathelin, M., Morel, G., Leroy, J., Soler, L., Marescaux, J.: Autonomous 3D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing. IEEE Trans. Robot. Autom. 19(5), 842–853 (2003)
Sandoval, J., Poisson, G., Vieyres, P.: Improved dynamic formulation for decoupled cartesian admittance control and RCM constraint. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1124–1129. IEEE (2016)
Sandoval, J., Poisson, G., Vieyres, P.: A new kinematic formulation of the RCM constraint for redundant torque-controlled robots. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE (2017)
Ott, C.: Cartesian impedance control of redundant and flexible-joint robots. Springer-Verlag, Heidelberg (2008)
Acknowledgements
This research was funded by the region “Nouvelle-Aquitaine” (program HABISAN 2015-2020) with the financial participation of the European Union (FEDER/ERDF, European Regional Development Fund). This work was also sponsored by the French government research program Investissements d’avenir through the Robotex Equipment of Excellence (ANR-10-EQPX-44).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Sandoval, J., Laribi, M.A., Zeghloul, S. (2020). Autonomous Robot-Assistant Camera Holder for Minimally Invasive Surgery. In: Kuo, CH., Lin, PC., Essomba, T., Chen, GC. (eds) Robotics and Mechatronics. ISRM 2019. Mechanisms and Machine Science, vol 78. Springer, Cham. https://doi.org/10.1007/978-3-030-30036-4_42
Print ISBN: 978-3-030-30035-7
Online ISBN: 978-3-030-30036-4