Abstract
In the last year, we developed a multiple-camera system in our laboratory to measure a surgical area and obtained several capturing results. In this paper, to evaluate the multiple-camera system, we first describe how to calibrate three RGB/depth cameras based on many landmarks. Then, because a doctor raises or lowers the microscope during microsurgery, we evaluate how the depth changes at each pixel, and the average over all pixels of each camera's depth image, as the distance varies. In the evaluation, the depth values are measured at five different distances. Through this evaluation, we study the performance of our capturing system.
1 Introduction
In neurosurgery, it is common to image the head with CT/MRI before the operation in order to grasp the condition of the diseased part in advance. The image is shown on a display, such as an LCD, to confirm the surgical plan during surgery. However, it is often difficult to map the image onto the real condition of the diseased part. Therefore, surgery needs to be supported by mapping the image onto the real surgical area, as in a navigation system.
There are many methods for capturing a surgical area that includes multiple kinds of organs and/or several medical tools in abdominal or laparoscopic surgery, using different surgical procedures [1,2,3]. In past years, we designed and constructed several prototypes to capture the surgical area in support of neurosurgery [4,5,6,7]. One of them used one robot and one camera to capture a wider visible surgical area. The unique characteristic of this prototype is that the camera and robotic slider are connected to the surgical bed. Because of this connection, even when a surgeon rotates or translates the surgical bed, the relative position and orientation between the camera and the surgical area remain completely fixed, and therefore our proposed position, orientation, and shape transcription algorithms [4,5,6] can be used directly in real surgeries. Another prototype used multiple cameras and two robots to capture a more widely visible surgical area than before [7]. The characteristic of this prototype is that it can eliminate many types of occlusion of the surgical area by a surgeon’s hand, head, and/or microscope. However, its detailed performance has not yet been evaluated.
In this paper, we present a performance evaluation of this multiple-camera system. To evaluate the system, we first describe how to calibrate the three RGB/depth cameras based on many landmarks. Then, because a doctor raises or lowers the microscope during microsurgery, we evaluate the depth change at each pixel, and the average over all pixels of each camera's depth image, as the distance varies. In the evaluation, the depth values are measured at five different distances. Through this evaluation, we study the performance of our capturing system.
2 Prototype Capturing System with Multiple Cameras
In this section, we introduce our prototype system, which captures the visible surgical area using multiple cameras and two robotic sliders. Figure 1 shows the system overview.
Our prototype system places the cameras around and above the surgical operation area on a circular ring mount. Moreover, their vertical position can be adjusted by two robotic sliders. With this type of multiple-camera system, if a diseased part cannot be caught in the initial camera images, it can be captured by the other cameras. As a result, our proposed position/orientation/shape transcription algorithms remain usable with the support of the captured data.
The cameras used in our system are Intel RealSense SR300 units [8] (Fig. 2). Each camera captures RGB and depth images. The depth-sensing principle of this camera is to measure the reflection of a structured pattern projected onto an object by its infrared light projector. The position of the reflected light pattern depends on the distance to the reflecting surface and is determined through simple geometry. Hence, with a bit of trigonometry, it is possible to reconstruct a three-dimensional (3D) scene. As this explanation shows, a structured pattern of infrared light is projected onto a target object (in our study, a human organ). If two or more cameras project structured infrared patterns simultaneously, correspondence confusion can occur, and no depth images can be obtained by any of the cameras. To overcome this obstacle, we use multiple cameras whose depth capture is controlled by time sharing to avoid any interference [7]. Figure 3 shows the time-sharing schedule model of our system. It uses the number of capturing frames (Nc) and the number of skipping frames (Ns).
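The Nc/Ns schedule above can be sketched as a round-robin loop. This is a hypothetical illustration of our understanding of the scheme (the function name and frame bookkeeping are ours, not from the paper): each camera captures Nc frames while the others stay dark, followed by Ns guard frames in which no projector emits, so the infrared patterns never overlap.

```python
def time_sharing_schedule(num_cameras, nc, ns, total_frames):
    """Return, for each frame slot, the index of the camera allowed to
    capture, or None during the skipping (guard) frames.

    nc: number of capturing frames per turn (Nc in the paper).
    ns: number of skipping frames between turns (Ns in the paper).
    """
    schedule = []
    frame = 0
    cam = 0
    while frame < total_frames:
        for _ in range(nc):            # camera `cam` projects and captures
            if frame >= total_frames:
                break
            schedule.append(cam)
            frame += 1
        for _ in range(ns):            # guard interval: no projector active
            if frame >= total_frames:
                break
            schedule.append(None)
            frame += 1
        cam = (cam + 1) % num_cameras  # pass the turn to the next camera
    return schedule

# Example: 3 cameras, Nc = 2, Ns = 1, over 9 frame slots
print(time_sharing_schedule(3, 2, 1, 9))
# -> [0, 0, None, 1, 1, None, 2, 2, None]
```

Larger Ns trades frame rate for safety against residual pattern overlap, which is why Sect. 4 compares two different schedule settings.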
Our prototype system for capturing an object is shown in Fig. 4. It moves the ring, on which the three cameras are attached, up or down using the two robotic sliders. It then captures depth values at 30 points around the object. These points can be defined arbitrarily.
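Each of those per-point depth readings corresponds to a 3D position via the standard pinhole model. The sketch below is illustrative only (the intrinsic values are made up, not SR300 calibration data); it shows how a depth pixel is deprojected into the camera frame.

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) with measured depth (e.g. in mm) to a 3D point
    (X, Y, Z) in the camera coordinate frame, using pinhole intrinsics:
    focal lengths fx, fy and principal point (cx, cy) in pixels."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Example with made-up intrinsics: a pixel at the principal point lies on
# the optical axis, so X = Y = 0 and Z equals the measured depth.
print(deproject(320, 240, 500.0, 475.0, 475.0, 320.0, 240.0))
# -> (0.0, 0.0, 500.0)
```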
To measure the depth value from the ring adequately, our system includes a calibration method that matches the positions of the three cameras in advance. Figure 5 shows the calibration markers. First, marker board A or B (Fig. 5(a) and (b)) is used to measure the position of each camera in the global coordinate system. Then, point markers (Fig. 5(c)) are used to align each camera in the global coordinate system. In this paper, we use the two types of marker board, A and B, to evaluate the measurement precision.
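Landmark-based alignment of this kind is commonly solved as a least-squares rigid registration between matched marker positions. The following is a minimal sketch under that assumption (the function name is ours, and the paper does not state which solver it uses); it recovers the rotation and translation mapping points seen in one camera's frame onto the global ring frame via the SVD-based Kabsch method.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of matched marker positions (N >= 3,
    non-collinear), e.g. board corners seen by one camera (src) and
    their known positions in the global coordinate system (dst).
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T # proper rotation, det = +1
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With one such transform per camera, points from all three cameras can be expressed in the shared global frame before the depth comparison in Sect. 3.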
3 Comparative Experiments on Differences with/without Calibration
We conducted experiments to clarify the effect of the calibration procedure. In these experiments, we compare the depth values from each camera with and without calibration. The distance between a brain model, placed on our system, and the cameras is measured. The value evaluated in each experiment is the average of the 30 measurement points acquired by the system, measured at five different ring positions placed at regular intervals of 100 mm.
Figure 6 shows the results obtained without executing the calibration procedure before measuring; Fig. 7 shows the results obtained with the calibration procedure executed. In this calibration procedure, we use marker board A (Fig. 5(a)). Figures 6 and 7 each show the results of the three cameras. The results with calibration in Fig. 7 capture more points, and more stably, than those in Fig. 6. Especially for camera 1 (Figs. 6(b) and 7(b)), it is clear that the value at the farthest position between the model and the camera is acquired stably.
4 Experiments by Using Small Calibration Board
For use in actual surgery, it is necessary to consider the size of the marker attached to the surgical field. Therefore, another experiment was conducted to determine whether a difference in precision arises when using the small marker, marker board B in Fig. 5(b). The experimental procedure is the same as before: the distance between the brain model, placed on our system, and the cameras is measured; the evaluated value is the average of the 30 measurement points acquired by the system; and it is measured at five different ring positions placed at regular intervals of 100 mm.
The results are shown in Fig. 8. They indicate that a substantially constant distance was acquired. However, for the measurement values of camera 1, fluctuation and a difference from the actual distance were observed.
Figure 9 shows another result of this experiment. Here, the time-sharing schedule, which controls the capturing and skipping frames of the cameras, was set to values different from those used for Fig. 8. Again, a substantially constant distance was acquired, but for camera 1, fluctuation and a difference from the actual distance were observed.
5 Conclusions
In this paper, we evaluated our capturing system, developed for the neurosurgical navigation system presented last year. A marker-based calibration method was implemented on the system, and its performance was evaluated. The results show that the performance is good for capturing an object; we therefore confirmed that our method can be used in a real operating room.
As future work, we will conduct experiments in a real operating room with a shadowless lamp.
References
Logan, W.C., Prashanth, D., William, C.C., Benoit, M.D., Robert, L.G., Michael, I.M.: Organ surface deformation measurement and analysis in open hepatic surgery: method and preliminary results from 12 clinical cases. IEEE Trans. Biomed. Eng. 58(8), 2280–2289 (2011). https://doi.org/10.1109/TBME.2011.2146782
Xu, A., Zhu, J.F., Zhang, D.: Development of a measurement system for laparoendoscopic single-site surgery: reliability and repeatability of digital image correlation for measurement of surface deformations in SILS port. JSLS 18(3) (2014). https://doi.org/10.4293/JSLS.2014.00267, PMCID: PMC4154418
Kang, N., Lee, M.W., Rhee, T.: Simulating liver deformation during respiration using sparse local features. IEEE Comput. Graphics Appl. 32(5), 29–38 (2012). https://doi.org/10.1109/MCG.2012.65
Watanabe, K., Kayaki, M., Mizushino, K., Nonaka, M., Noborio, H.: A mechanical system directly attaching beside a surgical bed for measuring surgical area precisely by depth camera. In: Proceedings of the 10th MedViz Conference and the 6th Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM), Bergen, Norway, pp. 105–108, 7–9 September 2016 (2016). ISBN 978-82-998920-7-0 (Printed), ISBN 978-82-998920-8-7 (Electronic)
Watanabe, K., Kayaki, M., Mizushino, K., Nonaka, M., Noborio, H.: Brain shift simulation controlled by directly captured surface points. In: Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2016), Category: Late Breaking Research Posters, Theme: BioMedical Imaging and Image Processing, Sessions: Ignite_Theme 2_Fr2, Poster Session III, Orlando Florida USA, 16–20 August 2016 (2016)
Watanabe, K., Kayaki, M., Mizushino, K., Nonaka, M., Noborio, H.: Capturing a brain shift directly by the depth camera Kinect v2. In: Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2016), Category: Late Breaking Research Posters, Theme: Computational Systems & Synthetic Biology; Multiscale Modeling, Sessions: Ignite_Theme 4_Fr1, Poster Session II, Orlando Florida USA, 16–20 August 2016 (2016)
Nonaka, M., Watanabe, K., Noborio, H., Kayaki, M., Mizushino, K.: Capturing a surgical area using multiple depth cameras mounted on a robotic mechanical system. In: Marcus, A., Wang, W. (eds.) DUXU 2017. LNCS, vol. 10289, pp. 540–555. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58637-3_42
Intel RealSense SR300. https://software.intel.com/en-us/realsense/sr300
Acknowledgement
This work was supported by JSPS KAKENHI Grant Number JP17K00420.
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Onishi, K., Tanaka, Y., Mizushino, K., Tachibana, K., Watanabe, K., Noborio, H.: Calibration Experiences of Multiple RGB/Depth Visions for Capturing a Surgical Area. In: Kurosu, M. (ed.) Human-Computer Interaction. Interaction in Context. HCI 2018. LNCS, vol. 10902. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91244-8_6
Print ISBN: 978-3-319-91243-1
Online ISBN: 978-3-319-91244-8