
1 Introduction

Advances in image-scanning technology have led to vast improvements in fetal evaluation. The use of magnetic resonance imaging (MRI) as an adjunct to ultrasound (US) in fetal imaging is becoming widespread in clinical practice.

This paper presents a proposal for an interactive, bidirectionally actuated human-machine interface that combines a haptic device (force-feedback technology) with non-invasive medical imaging technologies such as MRI and/or US, allowing a pregnant patient to actively interact with a virtual 3D model of her own fetus during pregnancy.

This proposal is part of Fetus 3D, a project under development in Brazil through the collaboration of the radiology sector of a private clinic with university research laboratories, in which several technologies such as virtual reality (VR), 3D printers and, more recently, 3D haptic devices are combined with the non-invasive imaging technologies used for diagnostic purposes in fetal medicine, such as ultrasound, MRI and, in some special cases, computerized tomography.

For this proposal, the haptic device adopted was the Touch 3D stylus from the American company 3D Systems. To give the user a virtual reference on screen, a simple cursor was included on the interface. The 3D models of the fetuses tested in the experiment were modeled through the superimposition of the several slices obtained from the US and MRI equipment in which the patient was scanned.

Once the 3D model was obtained, the virtual characteristics and the womb ambience were created (coloring, shading, positioning, and other characteristics). Some physical responses were also developed using the OpenHaptics library and Unity 3D. These responses include the heartbeat, skin pressure, textures, and the collision of the cursor with the fetus.

Some tests were done with US and MRI exams using our proposal. We built the cases treating the pathologies as our regions of interest, in order to obtain a better visualization of, and interaction with, each of them.

The structure of this document is described as follows. In Sect. 2, we describe some related work relevant for our proposal. Since our proposal consists of two main steps (reconstruction and environment set up), in Sect. 3 we describe in detail how the 3D model is obtained, and in Sect. 4 how to set up the environment considering the visualization and the haptic device interaction. Our test cases are presented and described in Sect. 5, considering a medical point of view and the processing of the data. In Sect. 6, we present our conclusions and possible future works.

2 Related Work

There are many applications of three-dimensional imaging techniques in fetal medicine. In general, two types of examinations are used to obtain images of the uterine cavity during pregnancy: US and MRI. For example, Werner et al. proposed in [1] the generation of 3D printed models for medical analysis. In the case of Virtual Reality (VR), there exists a large number of methods which include different types of interaction [2]. In [3], dos Santos proposed an immersive fetal 3D visualization using the mentioned acquisition techniques. Combining these techniques is helpful to obtain a more accurate morphological analysis of the fetus. Werner et al. use in [4] a correlation between them to detect typical lesions in the fetal nervous system, and in [5], the authors proposed a 3D model reconstruction using the same acquisition techniques to detect a malformation.

Haptics is a relatively new research field which is growing rapidly. The involved brain processes are partially understood and new sensory illusions were discovered recently [6]. In the field of medicine, several applications using this kind of technology were proposed such as clinical skill acquisition [7, 8]. Haptics technology was also used in fetal medicine to allow mothers to interact with the virtual representation of the fetus. Prattichizzo et al. proposed in [9] an application to allow this interaction using ultrasound imaging. They process the ultrasound exam, reconstruct a 3D surface (mesh) and then interact with the fetus using the haptic device. Severi et al. proposed the usage of this proposal to decrease maternal anxiety and salivary cortisol [10]. In fact, the last two works are the main references for our proposal.

3 Fetus 3D Model Reconstruction

Our proposal allows the user to interact with the surface of a virtual fetus generated from medical image acquisition, i.e. US and MRI. Generating a surface from this kind of data involves several topics of image processing and geometry processing, which are areas of computer science. The closer the generated surface is to the real fetal surface, the better the interaction with the user. For this reason, we opted to combine the mentioned acquisition techniques to obtain a more robust representation. In this section we explain how we generate the fetus surface, represented by a polygon mesh (triangular mesh), following a semiautomatic process.

We receive as input two volumes (stacks of contiguous slices) with intensity values generated by US and MRI imaging. These values represent a physical response that depends on the acquisition technique. In our case, we obtained these volumes in the DICOM or NRRD file formats, which are supported by most MRI and US machines. We read these files using the loader and DICOM module of 3D Slicer [11], an open source software package for medical image processing and visualization. In fact, we use this software for the entire model reconstruction pipeline. It is worth noting that 3D Slicer was developed using the Insight Segmentation and Registration Toolkit (ITK) [12], a library used in several works in the medical area. Both platforms are maintained by the Kitware company.
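To illustrate the input format, the text header of an NRRD file can be parsed with a few lines of Python. This is only a minimal sketch of what the loader does; in practice the loading is handled entirely by 3D Slicer, and the helper name below is ours:

```python
def parse_nrrd_header(text):
    """Parse the text header of an NRRD file into a dict.

    NRRD files start with a magic line ("NRRD0004"), followed by
    "field: value" lines; a blank line separates header from raw data.
    """
    lines = text.splitlines()
    if not lines or not lines[0].startswith("NRRD"):
        raise ValueError("not an NRRD header")
    fields = {}
    for line in lines[1:]:
        if line.strip() == "":      # blank line ends the header
            break
        if line.startswith("#"):    # comment line
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

# hypothetical header for a fetal MRI volume
header = "NRRD0004\n# fetal MRI volume\ntype: short\ndimension: 3\nsizes: 512 512 120\n\n"
info = parse_nrrd_header(header)
sizes = [int(s) for s in info["sizes"].split()]
```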

Images generated using MRI are usually well defined, without the presence of undesired noise. In contrast, US images present speckle noise, which is a challenging problem for the segmentation. In our proposal, we can apply an anisotropic diffusion denoising algorithm included in the filtering module of 3D Slicer (e.g. curvature anisotropic diffusion) to preserve details while removing noise.
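To illustrate the idea behind such filters, the sketch below implements one explicit step of the classic Perona-Malik anisotropic diffusion in 2D. This is a simpler relative of the curvature anisotropic diffusion filter we actually use in 3D Slicer; the function name and parameter values are ours:

```python
import math

def perona_malik_step(img, k=10.0, dt=0.2):
    """One explicit Perona-Malik diffusion step on a 2D image (list of lists).

    The conductance g = exp(-(|grad|/k)^2) is close to 1 in flat regions
    (strong smoothing) and close to 0 across edges (details preserved).
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            flux = 0.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                d = img[ny][nx] - c          # directional difference
                g = math.exp(-(d / k) ** 2)  # edge-stopping conductance
                flux += g * d
            out[y][x] = c + dt * flux
    return out
```

Iterating this step smooths speckle-like bumps while leaving strong intensity edges (such as the fetal surface) almost untouched.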

To reduce the domain of the segmentation and obtain a less complex input, we can define a region of interest delimited by a 3D bounding box, such that the final result is also restricted to it.
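Conceptually, this restriction is just a crop of the voxel grid. A sketch with a hypothetical helper (in practice the box is drawn interactively in 3D Slicer):

```python
def crop_volume(vol, bbox):
    """Crop a volume (nested lists indexed [z][y][x]) to an axis-aligned
    bounding box. bbox = (z0, z1, y0, y1, x0, x1), end-exclusive, mirroring
    the 3D region-of-interest box drawn around the fetus before segmenting.
    """
    z0, z1, y0, y1, x0, x1 = bbox
    return [[row[x0:x1] for row in sl[y0:y1]] for sl in vol[z0:z1]]

# toy 4x4x4 volume with voxel value z*100 + y*10 + x, so values encode position
vol = [[[z * 100 + y * 10 + x for x in range(4)] for y in range(4)] for z in range(4)]
roi = crop_volume(vol, (1, 3, 0, 2, 2, 4))
```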

In the case of MRI images, it is possible to recognize different structures defined by the voxel intensity values. For example, the skin and the bones of the fetus have different values. It is therefore important to segment all components that form the fetus body and differentiate them from external structures like the placenta, since the image also contains components of the mother's body.

In the case of US images, it is harder to recognize internal structures but it is possible to recognize the external shape of the fetus. We are interested in binarizing the volume, such that the shape is preserved and non-fetus components are excluded.

In both cases, to perform the segmentation, we use thresholding and component removal tools together with manual/assisted selections. This task is usually addressed by a designer with experience in medical imaging, because this expertise is helpful to obtain a segmentation with high fidelity.
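The core of the thresholding step can be sketched as follows; the manual and assisted selections then correct this initial mask (`binarize` is an illustrative helper, not a 3D Slicer API):

```python
def binarize(vol, lo, hi):
    """Binarize a volume (nested lists [z][y][x]): voxels with intensity in
    [lo, hi] become 1 (fetus candidate), everything else becomes 0."""
    return [[[1 if lo <= v <= hi else 0 for v in row] for row in sl] for sl in vol]

# one 2x2 slice with mixed intensities
vol = [[[50, 200], [120, 90]]]
mask = binarize(vol, 100, 255)
```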

Once we have a segmented (binarized) image in which we can recognize the surface of the fetus, we reconstruct a polygon mesh (triangular mesh) from it. Specifically, we use the marching cubes algorithm [13] to generate the mesh, and the λ-μ Taubin smoothing algorithm [14] to remove the noise of the reconstruction while avoiding mesh shrinkage. These algorithms are implemented in the model maker module of 3D Slicer.
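The anti-shrinkage property of Taubin smoothing can be illustrated on a closed polyline: the method alternates a shrinking Laplacian step (factor λ > 0) with an inflating one (factor μ < 0, with |μ| slightly larger than λ). This is a 2D sketch under our own naming; the real filter runs on the triangular mesh inside 3D Slicer:

```python
import math

def laplacian_step(pts, factor):
    """One uniform Laplacian step on a closed polyline: move each point
    toward the average of its two neighbours, scaled by `factor`."""
    n = len(pts)
    out = []
    for i, (x, y) in enumerate(pts):
        ax = (pts[i - 1][0] + pts[(i + 1) % n][0]) / 2.0
        ay = (pts[i - 1][1] + pts[(i + 1) % n][1]) / 2.0
        out.append((x + factor * (ax - x), y + factor * (ay - y)))
    return out

def taubin_smooth(pts, iters=10, lam=0.5, mu=-0.53):
    """Taubin lambda-mu smoothing: each iteration shrinks (lam > 0) and then
    inflates (mu < -lam), removing noise with far less shrinkage than plain
    Laplacian smoothing."""
    for _ in range(iters):
        pts = laplacian_step(pts, lam)
        pts = laplacian_step(pts, mu)
    return pts

# regular octagon of radius 1 as a toy "surface"
octagon = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8)) for i in range(8)]
taubin = taubin_smooth(octagon, iters=10)
```

Running the same number of plain Laplacian steps collapses the octagon toward its centroid, while the Taubin result keeps nearly its original size.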

Then, for each image acquisition technique, we have a rough mesh that needs to be processed. Using mesh processing algorithms, we can obtain a better one which is closer to the ideal fetus surface. For these tasks we used MeshLab [15], which is an open source software for processing and editing 3D triangular meshes. As a first step of this subprocess, we perform a remeshing over the initial mesh, preserving the shape and generating a new regular mesh (regular triangles) which will be helpful to avoid distortion in the subsequent algorithms. Since in the reconstruction step we generate a rough mesh with probably millions of vertices, we perform a mesh decimation to reduce the complexity and avoid heavy processes in the next steps. It is important to take care in the decimation criteria because we do not want to remove important details for the later analysis. As the last step, we can smooth the mesh again in an interactive manner to obtain a better representation for visualization.
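The decimation idea can be sketched with the simplest scheme, vertex clustering. MeshLab's quadric-based simplification is far more careful about preserving details, but the effect on vertex count is the same; the helper below is ours:

```python
from collections import defaultdict

def cluster_decimate(points, cell):
    """Vertex-clustering decimation sketch: snap vertices into a uniform
    grid of size `cell` and replace each cluster by its average, so the
    vertex count drops while the coarse shape survives."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)   # grid cell index per axis
        bins[key].append(p)
    # average every cluster into a single representative vertex
    return [tuple(sum(axis) / len(axis) for axis in zip(*cluster))
            for cluster in bins.values()]

# a dense flat patch of 400 vertices collapses to one vertex per grid cell
points = [(i / 10, j / 10, 0.0) for i in range(20) for j in range(20)]
coarse = cluster_decimate(points, cell=0.5)
```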

Finally, we have to align the clean meshes (one from US, the other from MRI) and merge them to obtain the final result. We use the align tool of MeshLab, which supports manual and automatic alignment using reference points. Both alignment techniques allow us to find the linear transformation that best fits one mesh to the other. Once the two meshes are aligned, we manually remove regions without coherence and merge close vertices, generating a point cloud formed by the set of vertices of both meshes. Then we compute a mesh based on the point cloud and start a mesh processing pipeline similar to the previous one. This resulting mesh is then used in the virtual environment.
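The point-based alignment solves a small least-squares problem. The 2D closed form below illustrates the principle; the 3D case used for the meshes is analogous (the Kabsch algorithm), and MeshLab handles it for us:

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form least-squares rigid alignment of matched 2D point pairs.
    Returns (angle, tx, ty) such that rotating src by angle and translating
    by (tx, ty) best fits dst."""
    n = len(src)
    scx = sum(p[0] for p in src) / n
    scy = sum(p[1] for p in src) / n
    dcx = sum(p[0] for p in dst) / n
    dcy = sum(p[1] for p in dst) / n
    s_num = s_den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - scx, sy - scy      # centered source point
        bx, by = dx - dcx, dy - dcy      # centered target point
        s_num += ax * by - ay * bx       # cross terms -> sine
        s_den += ax * bx + ay * by       # dot terms   -> cosine
    angle = math.atan2(s_num, s_den)
    c, s = math.cos(angle), math.sin(angle)
    tx = dcx - (c * scx - s * scy)
    ty = dcy - (s * scx + c * scy)
    return angle, tx, ty

# three reference points rotated by 90 degrees and shifted by (2, 3)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]
angle, tx, ty = rigid_align_2d(src, dst)
```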

In the case that we have only one type of image acquisition we can avoid the alignment step and just use a single reconstructed mesh.

Figure 1 summarizes the 3D model reconstruction pipeline of a fetus.

Fig. 1.

3D fetal model (28 weeks) reconstruction pipeline.

4 Virtual Environment and Interaction

In this section we will explain how we set up the environment and how the user will interact with the virtual fetus representation, visualizing and touching it.

Once we have a 3D polygon mesh representation of the fetus surface, we preprocess it before including it in the environment. The first thing we do is an adaptive decimation, such that we greatly reduce the number of polygons while maintaining a high number of them in the regions of interest. Otherwise, the visualization and the collision detection for the haptic device would require more computational time, which can disrupt the interaction (the user's comfort with the application). The regions of interest that keep a higher number of polygons are, for example, surface areas with a fetal pathology, which need a more accurate and softer interaction than other areas (higher sensitivity and fidelity in the touch). We perform this adaptive decimation using Blender [16].

After this, we have a model with far fewer triangles, which is easier to handle but has too low a resolution for visualization. For this reason, using the original model, we generate a normal map (represented as a texture) for the decimated mesh. This normal map results from the projection of the normals of the original mesh onto the decimated mesh (normal baking). Using this map, we can calculate the normals for each fragment in the rendering pipeline (normal mapping or bump mapping). With this technique, it is possible to visualize a low resolution mesh with a high level of detail.
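A normal map simply stores directions as colors: each normal component is remapped from [-1, 1] to an 8-bit channel, and the fragment shader decodes it back at render time. A sketch of this encoding with illustrative helpers (not a Unity or Blender API):

```python
import math

def encode_normal(nx, ny, nz):
    """Pack a unit normal into 8-bit RGB, as stored in a normal map texture:
    each component is remapped from [-1, 1] to [0, 255]."""
    return tuple(round((c + 1.0) * 0.5 * 255) for c in (nx, ny, nz))

def decode_normal(r, g, b):
    """Recover the (approximate) normal a fragment shader would use,
    renormalized to unit length after quantization."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return tuple(c / length for c in n)

# the "flat" normal (0, 0, 1) encodes to the characteristic normal-map blue
enc = encode_normal(0.0, 0.0, 1.0)
dec = decode_normal(*enc)
```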

We set up the virtual environment using Unity 3D [17] due to its large amount of tools for visualization and interaction. We aim to generate empathy in the user through the connection with the virtual fetus, so it is important to make the experience as realistic as possible. The user may have no idea of what the fetus looks like inside the womb, and the nearest experience to this one is the volume visualization implemented in ultrasound machines, so the latter is a good reference for us to replicate. We use a lighting scheme composed of three fill lights whose positions and intensities simulate the mentioned volume rendering effect. We also added some rendering features: antialiasing to smooth the edges, ambient occlusion to give a notion of ambient light, shadow projection to help the user identify where the haptic cursor is, and color grading to reach the same color as in the mentioned volume renderings (Fig. 2).

Fig. 2.

Example of normal fetal face visualization using the proposed rendering features (27 weeks of gestation).

In addition to a static visualization, modern US machines are able to generate 4D images that give a sense of movement. To simulate this movement, we introduced animations that are similar to the movements shown when using US machines. With this feature, the visualization is closer to the mentioned experience.

Another important component that we need to set up is the haptic device interaction. We use the Touch 3D Stylus as the haptic device and a Unity plugin [18] that gives us access to the OpenHaptics library [19]. This library allows us to manipulate some information and functionalities of the device. The main feature that we have to manage is the detection of collisions. When the fetus is virtually touched, the device must produce a force that gives a touching sensation to the user in the real world. In addition to the collision, it is possible to define different materials and textures that give the user different sensations generated by the vibration of the device. The surface should be smooth, with a certain level of friction to avoid a slippery sensation, and sufficiently soft to simulate the skin. Also, when the fetal skin is pressed, the haptic device should generate a little damping.
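The essence of this touch response is a penalty force computed every haptic frame. The sketch below illustrates the idea only; OpenHaptics derives the actual force internally from the material properties we set, and the names and constants here are ours:

```python
def contact_force(penetration, normal, stiffness=0.6, damping=0.2, velocity=0.0):
    """Penalty-style contact force: a spring along the surface normal
    proportional to penetration depth, plus damping against the approach
    velocity to avoid a springy, slippery feel."""
    if penetration <= 0.0:           # cursor not touching the fetus
        return (0.0, 0.0, 0.0)
    mag = stiffness * penetration - damping * velocity
    mag = max(mag, 0.0)              # never pull the stylus inward
    return tuple(mag * c for c in normal)
```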

Different areas of the fetal body can have different stiffness properties. For example, the forehead or other areas with prominent bones should be more rigid than other ones like the cheeks or the lips. So, we implemented a stiffness mapping to allow us to define the stiffness value of a point in the surface. We use this map to dynamically update the global properties of the object and simulate the definition of local properties (for each vertex of the mesh). This mapping is introduced as a texture, in a similar way to the normal mapping.
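The stiffness map works like any texture lookup: given the UV coordinates of the touched point, a bilinear interpolation of a grayscale image yields the local stiffness. A sketch with an illustrative helper (the actual lookup happens in our Unity code):

```python
def stiffness_at(tex, u, v):
    """Bilinear lookup of a per-point stiffness value from a grayscale
    'stiffness map' (nested lists, values in [0, 1]), addressed by UV
    coordinates in [0, 1]."""
    h, w = len(tex), len(tex[0])
    x = min(max(u, 0.0), 1.0) * (w - 1)
    y = min(max(v, 0.0), 1.0) * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# toy 2x2 map: rigid forehead (1.0) in the top row, soft cheeks (0.2) below
tex = [[1.0, 1.0], [0.2, 0.2]]
```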

Another important feature is that the touching sensation should be synchronized with the visualization. So, if we press a soft region of the fetus and we feel that we are deforming it, the mesh used in the visualization should be deformed too. To obtain this effect, we developed a simple mesh deformation algorithm that reads the same local stiffness values used by the haptic device.
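A minimal version of this synchronized deformation pushes the vertices near the contact point inward with a smooth falloff, scaled down where the local stiffness is high. This is a sketch of the idea with hypothetical names; our Unity implementation works per-vertex on the rendered mesh:

```python
import math

def deform(vertices, contact, normal, depth, stiffness, radius=1.0):
    """Visual deformation synced with the haptic feel: vertices within
    `radius` of the contact point move along the press direction by an
    amount that fades smoothly with distance and shrinks with stiffness
    (rigid areas barely move)."""
    out = []
    for v, k in zip(vertices, stiffness):
        d = math.dist(v, contact)
        if d >= radius:
            out.append(v)            # outside the contact area: untouched
            continue
        w = 0.5 * (1.0 + math.cos(math.pi * d / radius))  # smooth falloff
        amount = depth * (1.0 - k) * w
        out.append(tuple(c - amount * n for c, n in zip(v, normal)))
    return out

# soft vertex at the contact, rigid vertex nearby, far vertex untouched
verts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
moved = deform(verts, contact=(0.0, 0.0, 0.0), normal=(0.0, 0.0, 1.0),
               depth=0.1, stiffness=[0.0, 1.0, 0.0])
```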

In addition to the mentioned features, we implemented an effect to simulate the fetus heart beating, such that when the user is touching an area near to the heart, the device starts the beating effect.
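The beating effect can be modeled as a rectified sine at the fetal heart rate, faded out with distance to the heart. The sketch below is illustrative; the heart rate of about 140 bpm and the constants are assumptions, not measured values:

```python
import math

def heartbeat_force(t, dist_to_heart, bpm=140, amplitude=0.3, radius=0.8):
    """Magnitude of the pulsation added to the feedback force when the
    cursor is near the heart: a rectified sine at the (assumed) fetal heart
    rate, fading linearly with distance and vanishing outside `radius`."""
    if dist_to_heart >= radius:
        return 0.0
    fade = 1.0 - dist_to_heart / radius
    beat = max(0.0, math.sin(2.0 * math.pi * (bpm / 60.0) * t))
    return amplitude * fade * beat
```

Sampling this function at the haptic update rate and adding it to the contact force produces the periodic "thump" the user feels over the chest area.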

Figures 3 and 4 show examples of interactions with a fetus using the haptic device.

Fig. 3.

Example of an interaction using a complete fetus model (28 weeks). (a) Screen visualization. (b) User interaction.

Fig. 4.

Example of an interaction using a fetal face model (27 weeks). (a) Screen visualization. (b) User interaction.

5 Test Cases

We performed tests with real data acquired from exams performed on patients. The test cases contain exams with healthy fetuses and fetuses presenting a pathology. Figure 5 shows the visualization of each case, with the user cursor virtually touching the model. Each of these cases is explained in this section from a medical point of view, together with the respective data processing using our proposal.

Fig. 5.

Visualization of all test cases. (a) Apert syndrome – MRI. (b) Apert syndrome – US. (c) Apert syndrome – MRI + US. (d) Cleft lip – US. (e) Normal fetus – MRI. (f) Normal fetal face – US. (g) Normal fetal body – MRI.

5.1 MRI (Apert Syndrome)

Prenatal US and MRI at 28 weeks of gestation allowed the identification of the full phenotype of Apert syndrome. The 3D visualization allowed a better understanding of the fetal abnormalities by the parents and the medical team.

For this case, an MRI volume of good quality was generated, in which the fetal shape is well defined. The main challenge in this case is that we have to preserve the details of the region containing the pathology. For this reason, we take special care when segmenting and processing the hands and feet. We performed the segmentation using two different layers, one for the region that contains the pathology and the other for the rest of the body, so the segmentation process is executed independently over each differentiated region. The mesh processing steps are treated as in most cases, considering the adaptive decimation for the regions of interest (See Fig. 5a).

5.2 US (Apert Syndrome)

Three-dimensional (3D) US allows better assessment of face and extremities surface abnormalities and can be a useful adjunct to 2D US for parental counseling in Apert syndrome cases.

In this case we received as input a US volume with little speckle noise, which allows us to use thresholding algorithms without much manual processing. As with most ultrasound volumes, applying a simple thresholding produces floating fragments and holes. To address this problem, we use small component removal and hole filling. The rest of the process is then executed normally (See Fig. 5b).
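The small component removal mentioned above can be sketched as a connected-component labeling pass over the binary mask (shown here in 2D for brevity; the helper is ours, and hole filling is the dual operation applied to the background):

```python
from collections import deque

def remove_small_components(mask, min_size):
    """Remove floating fragments from a binary image (nested lists): label
    each 4-connected component of 1s with a BFS and zero out those smaller
    than min_size."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] == 0 or seen[sy][sx]:
                continue
            comp, q = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while q:                 # flood-fill one component
                y, x = q.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if len(comp) < min_size:
                for y, x in comp:    # erase the floating fragment
                    out[y][x] = 0
    return out

# a 4-pixel blob (kept) and a lone floating pixel (removed)
mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
clean = remove_small_components(mask, min_size=2)
```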

5.3 US Combined with MRI (Apert Syndrome)

3D virtual models built from US and MRI scan data allow a better understanding of fetal malformations.

Here we used the final results of the previous two cases, executing an automatic mesh alignment algorithm to match the corresponding shapes. Then we manually refine this alignment such that the meshes are as close as possible. MRI gives us the shape of the fetal body and ultrasound gives us a better approximation of the fetal face shape. With the point cloud generated by both meshes we reconstruct a new mesh, which is processed like the others, removing non-aligned regions (See Fig. 5c).

5.4 US (Cleft Lip)

3D reconstruction of the fetal face from US allows an immersive, realistic environment, improving the understanding of fetal congenital anomalies such as cleft lip by the parents and the medical team.

Here we received as input a US volume. As in the previous case, the volume does not have much speckle noise and the face of the fetus is relatively clear. Using a thresholding scheme gives us a good initial result, with detail enhancement in the region containing the pathology (See Fig. 5d).

5.5 MRI (Normal Fetal Body)

MRI provides additional information about fetal anatomy and conditions in situations where US cannot provide high-quality images.

In this case we received as input an MRI volume of high quality. The main difficulty is that the fetus lies very close to the surrounding sac, so external structures can disrupt the segmentation. Several manual crops were done to define our target volume. We decided to segment it in independent parts and then merge all of them to generate the final model. The mesh processing steps are executed normally (See Fig. 5g).

5.6 US (Fetal Face)

US is currently the primary method for fetal assessment during pregnancy because it is patient friendly, useful and considered to be safe.

The input in this case was a US volume with more speckle noise than the other US cases. For this reason, we first filtered the volume to obtain a better definition of the fetal shape without too much noise. We then treated it as in the other ultrasound cases, with the difference that this fetus does not have a specific region of interest (See Fig. 5f).

5.7 MRI (Blind Pregnant Patient)

Maternal–fetal attachment (MFA) is defined as the extent to which women engage in behaviors that represent an affiliation and interaction with their unborn child. The ability to view a fetus as an independent being at an earlier point in pregnancy likely contributes to the MFA developing at a much earlier point in fetal development. Force-feedback technology would be very important for the assessment of MFA in blind pregnant women.

In this case the input was an MRI volume without many artifacts. Because the fetus was static during the exam, the borders of the volume are well defined, but the low contrast of the voxel values makes the segmentation more difficult. The other processes are executed normally (See Fig. 5e).

6 Conclusion and Future Work

As we can see in Figs. 5a, b and c, the model reconstructed using both US and MRI gives us a more detailed shape description, which results in a better morphological analysis and a more faithful interaction.

The invited patients and physicians who tested the device reported that the overall level of engagement and realism was satisfactory, describing it as an interesting experience to physically feel features such as the skin pressure and texture combined with the heartbeat of the fetus. The team aims to improve the proposal by testing different haptic devices, such as a haptic glove, in order to provide a more realistic and friendly experience of touching the fetus for parents-to-be and physicians.