Abstract
3D reconstruction of the fiber connectivity of the rat brain at microscopic scale enables detailed insight into the complex structural organization of the brain. We introduce a new method for registration and 3D reconstruction of high- and ultra-high resolution (64 \(\upmu \)m and 1.3 \(\upmu \)m pixel size) histological images of a Wistar rat brain acquired by 3D polarized light imaging (3D-PLI). Our method exploits multi-scale and multi-modal 3D-PLI data up to cellular resolution. We propose a new feature transform-based similarity measure and a weighted regularization scheme for accurate and robust non-rigid registration. To transform the 1.3 \(\upmu \)m ultra-high resolution data to the reference blockface images, we propose a feature-based registration method followed by non-rigid registration. Our approach has been successfully applied to 278 histological sections of a rat brain, and its performance has been quantitatively evaluated using landmarks placed manually by an expert.
1 Introduction
Studying the fiber architecture of the brain and its functionality, e.g., in the rat brain, is important for understanding the complex organization of the human brain. Conventional imaging methods include electron microscopy (EM), optical microscopy (OM), and diffusion magnetic resonance imaging (D-MRI). While D-MRI is limited in resolution, EM and OM often require a selective staining procedure of histological brain sections to reveal fiber connectivity. Recent advances in 3D polarized light imaging (3D-PLI, a specialized OM technique that utilizes the birefringence of nerve fibers) allow acquiring high- and ultra-high resolution images of fibrous brain tissues [5]. In addition, information about 3D fiber orientation can be obtained without staining. 3D-PLI data consists of different image modalities (Fig. 1, right): the transmittance map, representing the extinction of polarized light when passing through the brain tissue; the retardation map, showing the tissue's (fiber's) birefringence; and direction and inclination maps, representing the local 3D fiber orientation. Blockface images are acquired during the sectioning procedure (Fig. 1, left) and constitute undistorted reference images for the acquired histological sections.
During the sectioning and mounting process, brain tissue undergoes strong distortions. Thus, spatial coherence between sections is lost, and image registration becomes inevitable. In previous work, 3D reconstruction of histological sections of the rat brain (e.g., [9,10,11]) was performed using rigid or affine registration (e.g., [4, 11]), which is generally not sufficient to cope with deformations in histological sections, as mentioned in [9]. In [10], affine registration was used with subsequent diffeomorphic non-rigid registration employing mutual information. Compared to traditional histological data, 3D-PLI relies on unstained cryo-sections and is acquired at very different resolutions, which poses different challenges. In previous work on the registration and 3D reconstruction of 3D-PLI data, high-resolution images (64 \(\upmu \)m pixel size) were used in [1, 13] and ultra-high resolution images (1.3 \(\upmu \)m pixel size) in [2]. However, in [2] only rigid registration of the ultra-high resolution data to unregistered high-resolution images was performed, and the human brain was considered rather than the rat brain. [1, 3] used high-resolution human brain sections (64 \(\upmu \)m pixel size) for registration to reference blockface data of the same resolution. Note that human brain sections typically cover larger areas, contain more prominent structures, and include less image noise than rat brain sections. Thus, registration of 3D-PLI data of the rat brain is more difficult. In [13], high-resolution 3D-PLI data (64 \(\upmu \)m) of the rat brain was first registered to blockface data of the same resolution and then transformed to a reference Waxholm space. In contrast to [13], we register high-resolution images with a section thickness of \(60~\upmu \)m to blockface images of 15.5 \(\upmu \)m pixel resolution, which is more challenging due to the large scale difference.
Also, we subsequently register ultra-high resolution images (\(1.3~\upmu \)m) first to the registered high-resolution images (\(15.5~\upmu \)m after scaling) and then to upscaled reference blockface images at 1.3 \(\upmu \)m resolution using non-rigid registration (each image section has a size of about \(15000\times 12000\) pixels). In addition, whereas [13] used B-splines and a fluid model, respectively, we use a more realistic deformation model based on Gaussian elastic body splines (GEBS) for non-rigid registration. None of the previous work provided a complete framework for ultra-high resolution 3D reconstruction of the rat brain from 3D-PLI.
In this contribution, we introduce a new method for multi-scale (both high- and ultra-high resolution data) and multi-modal registration of histological rat brain sections from 3D-PLI. The main contributions are: (1) registration of 3D-PLI data at three different spatial resolutions (1.3 \(\upmu \)m, 15.5 \(\upmu \)m, and 64 \(\upmu \)m pixel size), (2) a correlation transform-based similarity metric for efficient and robust rigid registration, (3) a feature transform-based similarity metric and weighted regularization for non-rigid registration using a physically-based deformation model, (4) robust feature-based registration, and (5) a complete pipeline for 3D reconstruction.
2 Method
Our approach for 3D reconstruction of both high- and ultra-high resolution 3D-PLI data of the rat brain consists of several steps. High resolution images are first registered to their corresponding reference blockface images using rigid and non-rigid registration. Ultra-high resolution images are then registered both rigidly and non-rigidly to the corresponding sections of the reference blockface images.
2.1 Registration of High-Resolution 3D-PLI Data
To coherently align the high-resolution 3D-PLI data with the reference blockface images, several registration steps are required. The high-resolution data is first coarsely registered using center-of-mass alignment and rigid registration, followed by non-rigid registration using GEBS [8] in conjunction with a novel feature transform-based similarity measure and a weighted quadratic regularization.
Data Preparation and Coarse Registration. High-resolution 3D-PLI sections of the rat brain are segmented from the original image data (see Fig. 2, left) as in [1]. For initial alignment, we apply a scaling transformation to the high-resolution images (\(64~\upmu \)m) and then align their center-of-mass (COM) with that of the reference blockface images (\(15.5~\upmu \)m, see Fig. 2, right).
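To make the initial alignment concrete, the following sketch (Python/NumPy for illustration; the paper's implementation is in C++) computes the scale factor between the two pixel grids and the translation that brings the centers of mass into agreement. The function names and the use of intensity-weighted centroids are our assumptions, not the authors' code:

```python
import numpy as np

def center_of_mass(img):
    """Intensity-weighted center of mass (row, col) of a 2D image."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return np.array([(rows * img).sum() / total,
                     (cols * img).sum() / total])

def com_alignment(blockface, pli, px_bf=15.5, px_pli=64.0):
    """Scale factor (PLI grid -> blockface grid) and translation (in
    blockface pixels) that align the PLI section's COM with the
    blockface's COM."""
    scale = px_pli / px_bf  # ~4.13: one PLI pixel spans ~4 blockface pixels
    shift = center_of_mass(blockface) - scale * center_of_mass(pli)
    return scale, shift
```

Applying `scale` and `shift` to the PLI coordinates places the section on the blockface grid before the parametric registration described next.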
We use a parametric registration model for coarse registration of 3D-PLI data. Let \(g_{1}(\mathbf x )\) and \(g_{2}(\mathbf x )\) with \(\mathbf x = (x, y):\varOmega \rightarrow \mathbb {R}, \varOmega \in \mathbb {R}^2\), be the reference blockface and the PLI image, respectively, and \(\mathcal {T}(\mathbf x \mid {\theta })\) be the transformation with the parameter vector \(\theta \) to be estimated. Then, the goal is to minimize the objective function \(\psi \) to obtain the optimal \(\hat{\theta }\):
$$\begin{aligned} \hat{\theta } = \mathop {\mathrm {arg\,min}}\limits _{\theta }\; \psi (\theta ). \end{aligned}$$
(1)
We use a spline-based multi-resolution scheme for rigid registration based on [14]. In contrast to [14], where the sum of squared intensity differences (SSD) was employed, we propose using a correlation transform (CoT) of the image to deal with multi-modal data (see Fig. 1). Let \(P_\mathbf{{x}}\) be a patch of size \(7\times 7\) pixels centered at \(\mathbf {x}\); then the CoT is given by
$$\begin{aligned} \tilde{g}(\mathbf x ) = \frac{g(\mathbf x ) - \mu }{\sigma + \epsilon } \end{aligned}$$
(2)
where \(\mu \) and \(\sigma \) are the mean intensity and standard deviation, respectively, within \(P_\mathbf{{x}}\), and \(\epsilon = 0.001\). For \(\psi \) in (1) we use the SSD between the computed CoT values for the blockface image \(\tilde{g}_{1}\) and the high-resolution image \(\tilde{g}_{2}\): \(\psi (\theta ) = \sum _\mathbf{x \in \varOmega } {\Big ({\tilde{g}_{{1}}{(\mathbf x )}- \tilde{g}_{2} \big (\mathcal {T}(\mathbf x \mid {\theta })\big )}\Big )^{2}}\). We minimize Eq. (1) using Levenberg-Marquardt optimization.
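The correlation transform and the resulting SSD data term can be sketched as follows (illustrative Python/NumPy; the warping \(\mathcal {T}\) and the Levenberg-Marquardt loop are omitted, and the brute-force patch loop stands in for an optimized implementation):

```python
import numpy as np

def correlation_transform(img, patch=7, eps=1e-3):
    """Correlation transform: normalize each pixel by the mean and
    standard deviation of its local patch (eps avoids division by zero,
    matching the paper's epsilon = 0.001)."""
    h, w = img.shape
    r = patch // 2
    padded = np.pad(img.astype(float), r, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            p = padded[i:i + patch, j:j + patch]  # patch centered at (i, j)
            out[i, j] = (img[i, j] - p.mean()) / (p.std() + eps)
    return out

def cot_ssd(g1, g2, patch=7):
    """SSD between the correlation transforms of two (already warped)
    images -- the data term minimized during rigid registration."""
    d = correlation_transform(g1, patch) - correlation_transform(g2, patch)
    return float((d ** 2).sum())
```

Because each pixel is normalized by its local mean and standard deviation, the transform is approximately invariant to affine intensity changes between modalities, which is what makes the SSD comparison meaningful for multi-modal data.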
Non-rigid Registration. Non-linear distortions are often present in 3D-PLI data due to the cutting and mounting procedure. In addition, local deformations are introduced because of time delays between mounting and data acquisition. Since these deformations are the result of physical phenomena, a suitable physical deformation model should be used for non-rigid registration. In our approach, we use Gaussian elastic body splines (GEBS), which represent an analytic solution of the Navier equation from linear elasticity theory [7]: \(\mu \varDelta \mathbf u +(\lambda +\mu ) \, \nabla \left( \text {div}\,\mathbf u \right) +\mathbf f = \mathbf 0 \), where \(\lambda \) and \(\mu >0\) are the Lamé constants and \(\mathbf u \) is the deformation field under Gaussian forces \(\mathbf f \); the solution has been derived in [8]. In [15], an intensity-based registration approach using GEBS was described, which, however, is not suitable for multi-modal 3D-PLI data. Using a CoT-based similarity measure for non-rigid registration also has disadvantages (see the red arrows in Fig. 3, which indicate that structure and intensity invariance are not well preserved). In this contribution, we introduce a feature transform-based (FeT) similarity measure and a Gaussian weighted quadratic regularization. FeT better preserves structure and intensity invariance and is thus better suited for non-rigid registration. FeT consists of: (1) a structure variability measure \(S_{var}\) defined by the trace of a covariance matrix \(\mathcal {C}\) for seven features: position (x, y), absolute values of first and second order image derivatives (\(\mid \!\! g_{x}\!\!\mid \), \(\mid \!\!g_y\! \!\mid \), \(\mid \!\! g_{xx}\!\!\mid \), \(\mid \!\! g_{yy}\!\! \mid \)), and intensity difference \(\mid \!\! g(\mathbf{{x}})-g(\mathbf{{x}}_k)\!\!\mid \) for each \(\mathbf{{x}}_k\) within the patch \(P_\mathbf{{x}}\), and (2) a texture measure \(S_T\) based on cross-correlation between the pixels in \(P_\mathbf{{x}}\). A patch size of \(5\times 5\) pixels was chosen based on our experimental observations. The combined feature transform is then a weighted sum of the two components: \(FeT = S_{var} + 0.5\,S_T\). Figure 3 (right) shows example results of FeT for blockface (\(FeT_{g_1}\)) and PLI images (\(FeT_{g_2}\)). It can be seen that boundaries and inner texture are quite similar for the multi-modal images. To preserve discontinuities of the deformation field, we use Gaussian weights \(f_{\sigma }\) for the quadratic regularization. We use the energy functional
$$\begin{aligned} J(\mathbf u ) = J_{data}\big (FeT_{g_1}, FeT_{g_2}, \mathbf u \big ) + \lambda _I \, J_{quadratic}(\mathbf u ) + \lambda _E \, J_{elastic}(\mathbf u ) \end{aligned}$$
(3)
where \(FeT_{g_1}\) and \(FeT_{g_2}\) are the feature transforms of the target and source images, respectively. The weighting factors \(\lambda _I, \lambda _E>0\) control the trade-off between the data term \(J_{data}\) and the two regularization terms (quadratic and elastic; in our case \(\lambda _I=0.25\), \(\lambda _E=0.25\)). Here, quadratic regularization allows for a smoother deformation field, while elastic regularization enforces a realistic deformation and avoids unnatural deformations. \(\mathbf u^I\) is the deformation field obtained by minimizing the SSD between the feature transforms with a weighted quadratic regularization (i.e., minimization of \(J_{Intensity}\)) using Levenberg-Marquardt optimization. The final deformation field \(\mathbf u \) is obtained using an analytic solution based on GEBS.
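A minimal sketch of the feature transform \(FeT = S_{var} + 0.5\,S_T\) described above (Python/NumPy for illustration). Since the exact form of the texture measure \(S_T\) is not fully specified here, patch variance is used as a simple stand-in for the cross-correlation-based measure, and central differences (`np.gradient`) are our assumed derivative estimator:

```python
import numpy as np

def feature_transform(img, patch=5, w_t=0.5):
    """Per-pixel FeT = S_var + w_t * S_T.
    S_var: trace of the covariance matrix of seven features over the
    patch (position x, y; |g_x|, |g_y|, |g_xx|, |g_yy|; and the
    intensity difference to the center pixel).
    S_T: a simple texture term (patch variance, a stand-in for the
    paper's cross-correlation measure)."""
    gy, gx = np.gradient(img.astype(float))   # first-order derivatives
    gyy, _ = np.gradient(gy)                  # second-order derivatives
    _, gxx = np.gradient(gx)
    h, w = img.shape
    r = patch // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            ys, xs = np.mgrid[i - r:i + r + 1, j - r:j + r + 1]
            feats = np.stack([
                xs.ravel(), ys.ravel(),
                np.abs(gx[ys, xs]).ravel(), np.abs(gy[ys, xs]).ravel(),
                np.abs(gxx[ys, xs]).ravel(), np.abs(gyy[ys, xs]).ravel(),
                np.abs(img[ys, xs] - img[i, j]).ravel(),
            ])
            s_var = np.trace(np.cov(feats))   # structure variability
            s_t = img[ys, xs].var()           # stand-in texture measure
            out[i, j] = s_var + w_t * s_t
    return out
```

Computing this transform for both images turns the multi-modal problem into a mono-modal one, so the SSD between \(FeT_{g_1}\) and \(FeT_{g_2}\) can serve as the data term of the functional above.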
Figure 4 (left) shows the result after rigid registration. Visual inspection reveals a good overall alignment; however, misalignments are apparent along the corpus callosum (blue arrows), the hippocampus (cyan arrows), and the borders of the cerebral cortex (black arrows). Using the new similarity measure for non-rigid registration, these misalignments in various regions are resolved (see Fig. 4, middle and right).
2.2 Registration of Ultra-High Resolution 3D-PLI Data
Due to the large difference in spatial resolution between the blockface images and the ultra-high resolution images (a factor of about 12) and arbitrary rotations, we perform registration using a scale-space method for feature detection and matching. A Gaussian scale-space and a Hessian measure are used to detect features in the registered high-resolution and the ultra-high resolution (retardation map) images, with subsequent feature matching based on FLANN [2]. Then, a similarity transformation (rigid plus isotropic scaling) is computed from the matched features using a least-squares approach. Unlike in [2], we use a fast bilateral filtering technique [12] to cope with the noise in the rat brain data and to reduce false detections during feature extraction. In Fig. 5, examples of feature matching results are shown. Subsequently, a non-rigid registration (see Sect. 2.1) is used to cope with local deformations at \(15.5~\upmu \)m resolution (see Fig. 5, third column). Remaining visible misalignments in Fig. 5 (third column) are corrected at a resolution of \(1.3~\upmu \)m using the proposed non-rigid registration method (see Eq. (3)) and coarse-to-fine energy minimization (we use 9 pyramid levels).
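The least-squares similarity transformation estimated from matched features corresponds to the classical Procrustes/Umeyama solution, sketched below (illustrative Python/NumPy; feature detection, bilateral filtering, and FLANN matching are omitted):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (isotropic scale s, rotation R,
    translation t) mapping matched points src onto dst: dst ~ s*R@src + t.
    src, dst: (N, 2) arrays of corresponding feature coordinates."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Rotation from the SVD of the cross-covariance (Procrustes solution)
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])                 # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

For the 3D-PLI case, `src` would hold feature positions in the ultra-high resolution image and `dst` the matched positions in the registered high-resolution image, with the recovered scale expected to be close to the resolution ratio of about 12.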
3 Experimental Results
We have evaluated the proposed method for the registration and 3D reconstruction of high- and ultra-high resolution data of the rat brain (64 \(\upmu \)m and 1.3 \(\upmu \)m). Ground truth correspondences for three sections were determined manually by an expert (on average 25 and 46 landmarks for high- and ultra-high resolution sections, respectively). Table 1 shows the average target registration error (TRE). It can be seen that our non-rigid registration method using the feature transform FeT yielded an overall improvement of about 4.1 pixels and 4.8 pixels compared to a previous non-rigid registration approach using mutual information [6] for high- and ultra-high resolution 3D-PLI data, respectively. Notably, our non-rigid registration method can deal with large deformations, which is evident from the large overall improvements of 15.7 pixels and 10.9 pixels compared to rigid registration using CoT (\(\tilde{g}\)) for high-resolution and ultra-high resolution data, respectively. Also, Fig. 6 shows that our non-rigid method is able to cope with the highly non-linear deformations present in our full rat brain data (ranging from 0 to 132 pixels in magnitude).
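The target registration error used here is the mean Euclidean distance between corresponding expert landmarks; a minimal sketch (Python/NumPy for illustration):

```python
import numpy as np

def target_registration_error(landmarks_ref, landmarks_reg):
    """Average TRE: mean Euclidean distance (in pixels) between expert
    landmarks in the reference image and the corresponding landmarks
    after registration. Both arrays have shape (N, 2)."""
    d = np.linalg.norm(landmarks_ref - landmarks_reg, axis=1)
    return float(d.mean())
```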
Figure 7 (left, middle) shows 3D visualizations of registration results as a reconstructed 3D volume of 278 high-resolution image sections (transmittance maps, 15.5 \(\upmu \)m \(\times \) 15.5 \(\upmu \)m \(\times \) 16.7 mm). After rigid registration, misalignments are visible at the locations indicated by arrows (black: tissue boundary, blue: corpus callosum, red: caudate putamen) and a square in Fig. 7 (left). After non-rigid registration, however, a coherent alignment can be observed (see Fig. 7, middle). A rendered 3D reconstructed volume at ultra-high resolution is shown in Fig. 7 (right), where the smooth green regions indicate coherent alignment of the corpus callosum (retardation maps, 1.3 \(\upmu \)m \(\times \) 1.3 \(\upmu \)m \(\times \) 16.7 mm).
All implementations are in C++, and we used optimized C++ libraries for computing the trace of the covariance matrix to speed up minimization of the FeT-based SSD metric in our non-rigid approach. Additionally, the entire framework, from high-resolution to ultra-high resolution reconstruction, is built as a parallel processing pipeline optimized to speed up the complete 3D reconstruction.
4 Conclusion
We have introduced a new multi-scale and multi-modal registration method for 3D reconstruction of both high-resolution and ultra-high resolution 3D-PLI histological images of a rat brain. The method comprises a novel feature transform-based similarity metric integrated in a physically-based non-rigid registration approach as well as a correlation transform-based similarity measure for robust rigid registration. Quantitative evaluations showed that our method improves the result compared to a previous multi-modal non-rigid registration approach and leads to a coherent 3D reconstruction.
References
Ali, S., et al.: Elastic registration of high-resolution 3D PLI data of the human brain. In: Proceedings of 14th IEEE International Symposium on Biomedical Imaging (ISBI), Melbourne, Australia, 18–21 April, pp. 1151–1155 (2017)
Ali, S., Rohr, K., Axer, M., Amunts, K., Eils, R., Wörz, S.: Registration of ultra-high resolution 3D PLI data of human brain sections to their corresponding high-resolution counterpart. In: Proceedings of 14th IEEE International Symposium on Biomedical Imaging (ISBI), Melbourne, Australia, 18–21 April, pp. 415–419 (2017)
Ali, S., Wörz, S., Amunts, K., Eils, R., Axer, M., Rohr, K.: Rigid and non-rigid registration of polarized light imaging data for 3D reconstruction of the temporal lobe of the human brain at micrometer resolution. NeuroImage 181, 235–251 (2018)
Arsigny, V., Pennec, X., Ayache, N.: Polyrigid and polyaffine transformations: a new class of diffeomorphisms for locally rigid or affine registration. In: Ellis, R.E., Peters, T.M. (eds.) MICCAI 2003. LNCS, vol. 2879, pp. 829–837. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39903-2_101
Axer, M., et al.: A novel approach to the human connectome: ultra-high resolution mapping of fiber tracts in the brain. NeuroImage 54(2), 1091–1101 (2011)
Biesdorf, A., Wörz, S., Kaiser, H.-J., Stippich, C., Rohr, K.: Hybrid spline-based multimodal registration using local measures for joint entropy and mutual information. In: Yang, G.-Z., Hawkes, D., Rueckert, D., Noble, A., Taylor, C. (eds.) MICCAI 2009. LNCS, vol. 5761, pp. 607–615. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04268-3_75
Chou, P., Pagano, N.: Elasticity - Tensor, Dyadic, and Engineering Approaches. Dover Publications Inc., Mineola (1992)
Kohlrausch, J., Rohr, K., Stiehl, H.: A new class of elastic body splines for nonrigid registration of medical images. J. Math. Imaging Vis. 23(3), 253–280 (2005)
Lebenberg, J., et al.: Validation of MRI-based 3D digital atlas registration with histological and autoradiographic volumes: an anatomofunctional transgenic mouse brain imaging study. NeuroImage 51, 1037–1046 (2010)
Majka, P., Wójcik, D.: Possum-a framework for three-dimensional reconstruction of brain images from serial sections. Neuroinformatics 14(3), 265–278 (2016)
Ourselin, S., Roche, A., Subsol, G., Pennec, X., Ayache, N.: Reconstructing a 3D structure from serial histological sections. Image Vis. Comput. 19(1), 25–31 (2001)
Paris, S., Durand, F.: A fast approximation of the bilateral filter using a signal processing approach. Int. J. Comput. Vis. 81(1), 24–52 (2009)
Schubert, N., et al.: 3D reconstructed cyto-, muscarinic M2 receptor, and fiber architecture of the rat brain registered to the Waxholm space atlas. Front. Neuroanat. 10, 51 (2016)
Thévenaz, P., Ruttimann, U., Unser, M.: A pyramid approach to subpixel registration based on intensity. IEEE Trans. Image Process. 7(1), 27–41 (1998)
Wörz, S., Rohr, K.: Spline-based hybrid image registration using landmark and intensity information based on matrix-valued non-radial basis functions. Int. J. Comput. Vis. 106(1), 76–92 (2014)
Acknowledgments
This project was funded by the Helmholtz Association through the Helmholtz Portfolio theme “Supercomputing and Modeling for the Human Brain” and by the European Union through the Horizon 2020 Research and Innovation Programme under Grant Agreement No. 7202070 (Human Brain Project SGA1).
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Ali, S., Schober, M., Schlömer, P., Amunts, K., Axer, M., Rohr, K. (2018). Towards Ultra-High Resolution 3D Reconstruction of a Whole Rat Brain from 3D-PLI Data. In: Wu, G., Rekik, I., Schirmer, M., Chung, A., Munsell, B. (eds) Connectomics in NeuroImaging. CNI 2018. Lecture Notes in Computer Science(), vol 11083. Springer, Cham. https://doi.org/10.1007/978-3-030-00755-3_1
Print ISBN: 978-3-030-00754-6
Online ISBN: 978-3-030-00755-3