1 Introduction

In neurosurgery, it is common to take CT/MRI images of the head before surgery in order to grasp the condition of the diseased part in advance. During surgery, the images are shown on a display, such as an LCD monitor, to confirm the surgical plan. However, it is often difficult to map the images onto the actual condition of the diseased part. Therefore, surgery needs to be supported by mapping the images onto the real surgical area, as in a navigation system.

There are many methods for capturing a surgical area that includes multiple kinds of organs and/or several medical tools in abdominal or laparoscopic surgery using different types of operative procedures [1,2,3]. In past years, we designed and constructed several prototypes that capture the surgical area to support neurosurgery [4,5,6,7]. One of them used one robot and one camera to capture a wider visible surgical area. The unique characteristic of this prototype is that the camera and robotic slider are connected to the surgical bed. Because of this connection, even when a surgeon rotates or translates the surgical bed, the relative position and orientation between the camera and the surgical area remain completely fixed, and therefore our proposed position, orientation, and shape transcription algorithms [4,5,6] can be used directly in real surgeries. Another prototype used multiple cameras and two robots to capture an even wider visible surgical area [7]. The characteristic of this prototype is that it can eliminate many types of occlusion of the surgical area caused by a surgeon's hand, head, and/or microscope. However, its detailed performance has not been evaluated yet.

In this paper, we present a performance evaluation of this multiple-camera system. To evaluate the system, we first examine how to calibrate three RGB/depth cameras based on many landmarks. Then, since a doctor raises or lowers the cameras when using a microscope for microsurgery, we evaluate how the depth values change with distance, both at each pixel and as the average over all pixels within the depth image of each camera. In the evaluation, the depth values are measured at five different distances. Through this evaluation, we study the performance of our capturing system.

2 Prototype Capturing System with Multiple Cameras

In this section, we introduce our prototype system, which captures a visible surgical area by using multiple cameras and two robotic sliders. Figure 1 shows the system overview.

Fig. 1. (a) Three cameras are steadily controlled by two robotic sliders in an up-and-down manner. (b) Three smaller cameras are located on a ring between the two robotic sliders.

Our prototype system places the cameras above the surgical operation area by using a circular ring mount. Moreover, their vertical position can be adjusted by the two robotic sliders. With this type of multiple-camera system, if a diseased part cannot be caught by the initial camera images, it can be captured by the other cameras. As a result, our proposed position/orientation/shape transcription algorithms remain usable with the support of the captured data.

The cameras used in our system are Intel RealSense SR300 devices [8] (Fig. 2), which capture RGB and depth images. The camera measures depth by projecting a structured infrared light pattern onto an object and observing its reflection. The position of the reflected pattern depends on the distance to the reflecting surface, determined through simple geometry; hence, with a bit of trigonometry, it is possible to reconstruct the three-dimensional (3D) scene. As this explanation shows, a structured pattern of infrared light is projected onto the target object (in our study, a human organ). If two or more cameras project structured patterns of infrared light simultaneously, the patterns interfere with each other, and none of the cameras can obtain a depth image. To overcome this obstacle, we control the depth capture of the multiple cameras by time sharing to avoid any interference [7]. Figure 3 shows the time-sharing schedule model of our system. It is parameterized by the number of capturing frames (Nc) and the number of skipping frames (Ns).
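To make the schedule concrete, the following is a minimal sketch of one plausible round-robin interpretation of the model in Fig. 3: each camera projects and captures for Nc frames and then stays idle for Ns frames while the next camera takes its turn. The actual window layout of our scheduler is defined by Fig. 3; the function below is only an illustration, and its names are hypothetical.

```python
# Hypothetical round-robin time-sharing schedule for multiple
# structured-light depth cameras: each camera owns Nc capture frames
# followed by Ns skip frames within one cycle.

def active_camera(frame: int, n_cameras: int, nc: int, ns: int):
    """Return the index of the camera allowed to project/capture at
    `frame`, or None if the frame falls into a skipping interval."""
    slot = nc + ns                       # frames owned by one camera
    cycle = n_cameras * slot             # full schedule length
    cam, offset = divmod(frame % cycle, slot)
    return cam if offset < nc else None  # capture window vs. skip window

# Example: 3 cameras, Nc = 4 capture frames, Ns = 2 skip frames.
if __name__ == "__main__":
    for f in range(18):
        print(f, active_camera(f, n_cameras=3, nc=4, ns=2))
```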

Fig. 2. Intel RealSense SR300.

Fig. 3. Time-sharing schedule model.

Our prototype system for capturing an object is shown in Fig. 4. The two robotic sliders move the ring, on which the three cameras are mounted, up and down. The system then captures depth values at 30 points around the object. These points can be defined arbitrarily.
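As an illustration of how such measurement points might be defined, the sketch below distributes 30 points evenly on a circle around the object in the ring plane. The paper only states that the points can be defined arbitrarily, so this particular layout, and all names in the snippet, are assumptions.

```python
# Hypothetical layout of the 30 measurement points: evenly spaced on a
# circle of a given radius around the object in the ring's (x, y) plane.
import math

def ring_points(n: int = 30, radius_mm: float = 50.0,
                center=(0.0, 0.0)) -> list[tuple[float, float]]:
    """Return n points evenly distributed on a circle of the given
    radius (in mm) around `center`."""
    cx, cy = center
    return [(cx + radius_mm * math.cos(2 * math.pi * k / n),
             cy + radius_mm * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```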

Fig. 4. Capturing environment.

To measure the depth value from the ring adequately, our system uses a calibration method that matches the positions of the three cameras in advance. Figure 5 shows the calibration markers. First, marker board A or B (Fig. 5(a) and (b)) is used to measure the position of each camera in the global coordinate system. Then, point markers (Fig. 5(c)) are used to align each camera in the global coordinate system. In this paper, we use the two marker boards, A and B, to evaluate the measurement precision.
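As a sketch of the alignment step, the snippet below estimates the rigid transform that maps a camera's coordinate frame to the global coordinate system from corresponding marker positions, using the standard SVD-based (Kabsch) method. The paper does not specify the actual solver, so this is only one plausible realization under that assumption.

```python
# Minimal sketch of marker-based extrinsic alignment, assuming each
# camera observes point markers whose coordinates are known in the
# global (ring) frame. Solver choice (Kabsch/SVD) is an assumption.
import numpy as np

def rigid_transform(cam_pts: np.ndarray, global_pts: np.ndarray):
    """Estimate R, t such that R @ cam_pts[i] + t ~= global_pts[i].
    Both arrays have shape (N, 3), with N >= 3 non-collinear markers."""
    c_cam, c_glb = cam_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (cam_pts - c_cam).T @ (global_pts - c_glb)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_glb - R @ c_cam
    return R, t
```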

Fig. 5. Calibration markers.

3 Comparative Experiments with and without Calibration

We conducted experiments to clarify the effect of the calibration. In these experiments, we compare the depth values obtained from each camera with and without calibration. The system measures the distance between a brain model, placed in the system, and the cameras. The value reported for each measurement is the average of the 30 measurement points acquired by the system, measured at five different ring positions placed at regular intervals of 100 mm.
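For reference, a minimal sketch of this evaluation metric follows, assuming the depth samples of one camera are stored as a 5 (ring positions) x 30 (measurement points) array in millimetres, with dropouts encoded as 0 as is common for depth sensors; all names here are hypothetical.

```python
# Hypothetical computation of the reported metric: the mean of the 30
# measurement points at each of the 5 ring positions, ignoring
# zero-valued (missing) depth samples.
import numpy as np

def mean_distance_per_position(depth_mm: np.ndarray) -> np.ndarray:
    """depth_mm has shape (5, 30); returns one mean distance per ring
    position, or NaN where no valid sample was acquired."""
    valid = depth_mm > 0
    sums = np.where(valid, depth_mm, 0.0).sum(axis=1)
    counts = valid.sum(axis=1)
    return np.divide(sums, counts,
                     out=np.full(len(sums), np.nan),
                     where=counts > 0)
```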

Figure 6 shows the results obtained without executing the calibration procedure before measurement, and Fig. 7 shows the results obtained with calibration. In the calibration procedure, we used marker board A (Fig. 5(a)). Figures 6 and 7 show the results of all three cameras. They confirm that with calibration (Fig. 7) the system captures more points, and captures them more stably, than without calibration (Fig. 6). In particular, for camera 1 (Figs. 6(b) and 7(b)), it is clear that the depth value at the position farthest from the model is acquired stably.

Fig. 6. Experimental results without calibration.

Fig. 7. Experimental results with calibration.

4 Experiments Using a Small Calibration Board

For use in actual surgery, it is necessary to consider the size of the marker attached to the surgical area. Therefore, further experiments were conducted to determine whether the same precision can be obtained with the smaller marker, marker board B in Fig. 5(b). The experimental procedure is the same as before: the system measures the distance between the brain model, placed in the system, and the cameras; the reported value is the average of the 30 measurement points acquired by the system; and measurements are taken at five different ring positions placed at regular intervals of 100 mm.

The results are shown in Fig. 8. They indicate that a substantially constant distance was acquired. However, the measurement values of camera 1 fluctuated and differed from the actual distance.

Fig. 8. Experimental results using the small calibration board (marker board B).

Figure 9 shows another result of these experiments, in which the time-sharing schedule that controls the capturing and skipping frames of the cameras was set to different values from those used for Fig. 8. Again, a substantially constant distance was acquired; however, the measurement values of camera 1 fluctuated and differed from the actual distance.

Fig. 9. Another result using the small calibration board (marker board B).

5 Conclusions

In this paper, we evaluated the capturing system that we developed for the neurosurgical navigation system presented last year. A marker-based calibration method was implemented in the system, and its performance was evaluated. The results show that the system performs well in capturing the target object, confirming that our method can be used in a real operating room.

As future work, we will conduct experiments in a real operating room with a shadowless lamp.