The application of a multimodality medical image fusion-based method to cerebral infarction
Abstract
Multimodality image fusion can process images of particular organs or tissues collected from diverse medical imaging equipment. The fusion extracts complementary information and integrates it into images with more comprehensive content, providing doctors with images that combine anatomical and physiological information and thereby easing diagnosis. This thesis mainly studies the fusion of MRI and CT images, taking the images of patients suffering from cerebral infarction as an example. T1 and DWI sequences are respectively subjected to wavelet fusion, pseudo-color fusion, and α channel fusion. The resulting image data are objectively assessed and compared in several respects: information entropy, mutual information, mean grads, and spatial frequency. Observation and analysis show that, compared with the original images, the fused images not only contain richer detail but also highlight the cerebral infarction lesions more clearly.
Keywords
Multimodality image fusion · Cerebral infarction · Wavelet fusion · Pseudo color fusion · α channel fusion

1 Introduction
The feature comparison of CT and MRI
Feature  MRI equipment  CT scanner 

Soft tissue contrast  High  Low 
Calcification  Insensitive  Sensitive 
Bone marrow  Clear  Indistinct 
Cartilage  Clear  Indistinct 
Tendon  Without contrast agent  Need contrast agent 
Bone cortex lesions  Insensitive  Sensitive 
Bleeding  Displayable  Clear 
Bone artifact  Exist  Absent 
Cerebral infarction is caused by a sudden decrease or cessation of blood flow in part of a brain artery. A brain CT scan shows hypodense areas in the corresponding parts, with relatively unclear boundaries and a mass effect. Brain MRI, however, can discover cerebral infarction earlier: the lesion area shows low signal on the T1-weighted image and high signal on the T2-weighted image. An MRI scan can also discover smaller cerebral infarction lesions. If doctors judge the CT image and the MRI image only subjectively, the diagnosis may not be precise. After fusing the CT image and the MRI image, however, a fused image of soft tissue and bone tissue that clearly and fully reflects the condition can be obtained. This image provides comprehensive, effective, and reliable information for clinical studies, boosting diagnostic efficiency as well as reliability and accuracy [3].
Owing to modern lifestyles, data show an increasing number of cerebrovascular diseases in recent years. Consequently, boosting image detection accuracy is particularly important. Cerebral infarction is a quite common type of cerebrovascular disease; hence, deep research into its early diagnosis can help safeguard human health and advance medical development.
This thesis takes cerebral infarction patients' image information as an example, applying three different fusion methods to the patients' T1 MRI, DWI MRI, and CT image data. Finally, the fusion results are statistically evaluated and analyzed.
2 Materials and methods
2.1 Patients and image
Three cerebral infarction patients were selected for the research. Patient 1 (male, 61 years old) and patient 2 (male, 52 years old) had CT and T1 MRI scans. The magnetic field intensity is 1.5 T; the parameters of the EP-SE DWI MRI sequence are TR = 5.0 s and TE = 115 ms, and the parameters of the T1 FLAIR MRI sequence are TR = 2.0 s and TE = 20 ms. Patient 3 (female, 65 years old) had only CT and T1 MRI scans, with the same parameters as above.
2.2 Preprocessing
2.2.1 Window width and window position adjustment
Window width is the range of CT values selected when displaying an image; this range is divided into 16 gray levels from white to black according to density [4]. A CT value is displayed within the interval \( \left[L-W/2,\ L+W/2\right] \), where W is the window width and L is the window position (window level): values below the interval appear black and values above it appear white.
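As a sketch of this windowing operation (the linear mapping and clipping behavior are a standard convention; parameter names are illustrative, not from the paper), a CT value inside [L − W/2, L + W/2] is mapped linearly to the 0–255 display range:

```python
import numpy as np

def apply_window(ct, width, level):
    """Map raw CT values (Hounsfield units) to 0-255 display gray levels.

    Values below level - width/2 become black, values above
    level + width/2 become white; the range in between is mapped
    linearly.
    """
    lo = level - width / 2.0
    scaled = (ct.astype(np.float64) - lo) / width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example: a typical brain window (width 80 HU, level 40 HU)
ct = np.array([[-1000, 0], [40, 80]])
print(apply_window(ct, width=80, level=40))
```

Narrow windows stretch a small CT-value range over all gray levels (high soft-tissue contrast); wide windows compress a large range (useful for bone).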
2.2.2 Gray scale mapping and equalization
The gray value range of a DICOM-format source image is −2000 to +2000, so we first need to map this range linearly to 0–255. Histogram equalization then maps each original gray level r_k to \( {s}_k=T\left({r}_k\right)=\sum_{j=0}^{k}\frac{n_j}{n} \), where r and s represent the gray levels of the original histogram and of the corrected image, n is the total number of pixels in the image, n_k is the number of pixels at gray level r_k, and L is the total number of possible gray levels in the image.
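The mapping and equalization steps above can be sketched as follows (the −2000..+2000 range is taken from the text; the rest is a generic implementation, not the paper's exact code):

```python
import numpy as np

def equalize(img):
    """Histogram equalization: each output level is (L-1) times the
    cumulative fraction of pixels at or below the input level,
    here with L = 256 gray levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size          # cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

# Map a DICOM-style signed range (-2000..+2000) to 0..255 first,
# then equalize.
raw = np.random.randint(-2000, 2001, size=(64, 64))
gray = np.round((raw + 2000) / 4000 * 255).astype(np.uint8)
eq = equalize(gray)
```

A constant image maps to a single output level; images with skewed histograms get their gray levels spread over the full range.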
2.2.3 Image denoising
Common low-pass filters include the ideal low-pass filter, the Butterworth filter, the exponential filter, and the trapezoidal filter. In this paper, the bilateral filter algorithm is improved: a "pulse" weight is added to the original spatial-distance weight and gray-similarity weight, so that the filter can handle the impulse noise that the original bilateral filter cannot, while keeping the original bilateral filter's advantage of preserving sharp image edges. The program can thus conveniently and effectively restore images polluted by Gaussian noise and impulse noise at the same time. The approximate algorithm is as follows, where Ω (5 × 5, for example) is the neighborhood of point (x, y) and m is the number of pixels in the Ω neighborhood other than the center point (m = 24).
 1. When S ≤ 5, the center point can be judged to lie in a flat gray area;
 2. When 6 ≤ S ≤ 18, the center point can be judged to lie in an edge area;
 3. When S ≥ 19, the center point can be judged to be impulse noise.
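The idea can be sketched as follows. This is a simplified interpretation, not the paper's exact algorithm: S counts the neighbors that differ strongly from the center, a large S marks the center as impulse noise (replaced here by the neighborhood median), and otherwise the standard spatial and gray-similarity weights are applied; the difference threshold and sigmas are illustrative assumptions.

```python
import numpy as np

def improved_bilateral(img, sigma_s=1.5, sigma_r=25.0, thresh=40, radius=2):
    """Bilateral filter with an extra impulse ("pulse") check on a
    5x5 neighborhood (radius=2, so m = 24 neighbors)."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode='reflect')
    out = np.empty_like(img, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_space = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial weight
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            center = pad[i + radius, j + radius]
            diff = np.abs(patch - center)
            S = np.count_nonzero(diff > thresh)  # neighbors unlike center
            if S >= 19:                  # impulse noise: use the median
                out[i, j] = np.median(patch)
                continue
            # flat and edge areas: standard bilateral weighting
            w_range = np.exp(-diff**2 / (2 * sigma_r**2))
            wgt = w_space * w_range
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

On a constant region the filter leaves values unchanged; an isolated bright pixel triggers the impulse branch and is replaced by the local median.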
2.2.4 Image enhancement
The linear gray transformation s = k · r + b is applied, where r and s are the input and output gray values:
 (A) k > 1 increases the contrast; 0 < k < 1 reduces the contrast.
 (B) k = 1: changing b changes the brightness.
 (C) k = 1, b = 0 keeps the original image; k = −1, b = 255 inverts the original image.
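The three cases above can be demonstrated with a small point-operation sketch (clipping to 0..255 is an added assumption to keep outputs in the displayable range):

```python
import numpy as np

def linear_enhance(img, k, b):
    """Point operation s = k * r + b on gray values, clipped to 0..255."""
    return np.clip(k * img.astype(np.float64) + b, 0, 255).astype(np.uint8)

gray = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(linear_enhance(gray, k=1.5, b=0))   # k > 1: stretch contrast
print(linear_enhance(gray, k=1, b=30))    # k = 1: shift brightness
print(linear_enhance(gray, k=-1, b=255))  # k = -1, b = 255: negative image
```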
2.3 Registration
2.3.1 Fundamental
The registration relation can be written as \( {I}_2\left(i,j\right)=g\left({I}_1\left(f\left(i,j\right)\right)\right) \). In the formula, i and j represent the rows and columns of the image and (i, j) the coordinates of a pixel; f stands for the two-dimensional coordinate transformation, and g stands for a one-dimensional grayscale transformation.
2.3.2 Affine transformation
The affine transformation is the most popular registration transformation [10] and is the geometric transformation adopted in this thesis. The affine transformation is linear and includes translation, rotation, and scaling. A line is still mapped to a line under this transformation; however, its length and angle are not preserved. The affine transformation has four parameters. By Eq. (2), a point (x, y) of one image can be mapped to the corresponding point (x′, y′) of another image. In the formula, s stands for the scaling factor.
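A sketch of such a four-parameter mapping (scale s, rotation angle θ, and translations tx, ty, as the text names them; this is a generic similarity transform, not necessarily the paper's exact Eq. (2)):

```python
import numpy as np

def affine_map(pts, s, theta, tx, ty):
    """Map an (N, 2) array of (x, y) points by scaling s, rotation
    theta (radians), and translation (tx, ty)."""
    c, si = np.cos(theta), np.sin(theta)
    A = s * np.array([[c, -si], [si, c]])      # scaled rotation matrix
    return pts @ A.T + np.array([tx, ty])

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
# Rotate 90 degrees and scale by 2: (1,0) -> (0,2), (0,1) -> (-2,0)
print(affine_map(pts, s=2.0, theta=np.pi / 2, tx=0.0, ty=0.0))
```

Lines remain lines under this mapping, but, as noted above, lengths scale by s and directions rotate by θ.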
2.3.3 Powell algorithm
The Powell algorithm, adopted as the optimization strategy for image registration in this paper, is a multi-parameter local optimization algorithm that does not need to calculate derivatives. Each iteration of the optimization consists of n + 1 one-dimensional searches: first, searching along n different conjugate directions, starting from the iteration's initial value, yields an extreme point; next, a search along the direction from the starting point to that extreme point gives the extreme point of this stage; then the last search direction replaces one of the previous n directions, and the algorithm starts the next iteration, until the function value no longer decreases.
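As an illustration of this derivative-free search, SciPy's Powell implementation can minimize a toy two-parameter cost surface (in actual registration the cost would be, e.g., the negative mutual information of the two images under the current transform parameters; the quadratic below is purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def cost(p):
    """Toy registration cost over translation parameters (tx, ty);
    minimum at (3, -1.5)."""
    tx, ty = p
    return (tx - 3.0) ** 2 + (ty + 1.5) ** 2

# Powell's method: successive one-dimensional searches, no gradients.
res = minimize(cost, x0=[0.0, 0.0], method='Powell')
print(res.x)  # close to [3.0, -1.5]
```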
2.3.4 Maximum mutual information method
The maximum mutual information method, proposed by Collignon and Viola, uses mutual information as the registration measure for medical images; its registration accuracy is generally higher than that of segmentation-based registration methods. Mutual information is a fundamental concept of information theory that describes the statistical correlation between two systems, or how much information one system contains about the other, normally represented by entropy. When the spatial positions of the two images reach the same location, the mutual information of the grayscale values of corresponding pixel pairs reaches its maximum [11].

Step 1: Read the images.

Step 2: Initial (rough) registration: the Powell algorithm is selected as the optimizer, and maximum mutual information as the similarity measure.

Step 3: Improve the registration accuracy by adjusting the optimizer step size and increasing the number of iterations.

Step 4: Use the initial registration result as the starting condition to further improve the registration accuracy.
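The similarity measure used in steps 2 and 3 can be sketched directly from its definition: estimate the joint gray-level distribution of the two images from a 2-D histogram and sum p·log(p / (p_a·p_b)) over its cells (the bin count of 32 is an illustrative choice, not from the paper):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of the gray values of two equal-sized images,
    estimated from their joint histogram (natural-log units)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # skip zero cells: 0*log(0) = 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

img = (np.arange(64 * 64) % 256).reshape(64, 64)
# An image shares maximal mutual information with itself, so any
# misalignment (here: flipping the rows) can only lower the measure.
print(mutual_information(img, img) >= mutual_information(img, img[::-1]))
```

An optimizer such as Powell's method then searches the transform parameters that maximize this quantity.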
2.4 Fusion
2.4.1 Wavelet fusion
 (a) The wavelet-weighted method
 (b) The wavelet-weighted maximum method
Among them, ω_{k}(A) and ω_{k}(B) are the weighting coefficients applied to the corresponding wavelet coefficients of source images A and B.
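A one-level sketch of these two rules using the PyWavelets library: the approximation sub-bands are combined by a weighted average, and each detail coefficient is taken from whichever source has the larger magnitude (the wavelet-weighted maximum idea). The equal weights of 0.5 and the Haar wavelet are assumptions, not taken from the paper.

```python
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet='haar'):
    """Fuse two equal-sized grayscale images in the wavelet domain."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(b, wavelet)
    cA = 0.5 * cA_a + 0.5 * cA_b  # weighted low-frequency (approximation)
    # absolute-maximum choice for the high-frequency (detail) sub-bands
    details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                    for da, db in [(cH_a, cH_b), (cV_a, cV_b), (cD_a, cD_b)])
    return pywt.idwt2((cA, details), wavelet)

ct = np.random.rand(64, 64)
mri = np.random.rand(64, 64)
fused = wavelet_fuse(ct, mri)
```

Fusing an image with itself reconstructs the original, which is a quick sanity check of the rule.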
2.4.2 α channel fusion
In the formula \( I=\alpha F+\left(1-\alpha \right)B \), I stands for the fused image, F for the foreground image, and B for the background image. The value of α ranges from 0 to 1. Matching the image grayscale range of 0 to 255, α is divided into 256 levels, each reflecting a transparency: white corresponds to the value 1, at which the foreground is opaque, while black corresponds to 0, at which the foreground is completely transparent. The value of α is commonly chosen as a power of 1/2; by comparative analysis, this paper controls this variable and sets the value to 0.5.
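The blend is a single pixel-wise expression (a minimal sketch; images are assumed pre-registered and equal-sized):

```python
import numpy as np

def alpha_blend(fore, back, alpha=0.5):
    """I = alpha * F + (1 - alpha) * B; alpha = 1 shows only the
    foreground, alpha = 0 only the background. The paper fixes 0.5."""
    return alpha * fore.astype(np.float64) + (1 - alpha) * back.astype(np.float64)

mri = np.full((4, 4), 200.0)
ct = np.full((4, 4), 100.0)
print(alpha_blend(mri, ct, 0.5))  # every pixel is the midpoint, 150.0
```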
2.4.3 Pseudo color fusion
In the history of computer vision, images developed from black and white to color. In the field of medical imaging, however, the images obtained from well-known CT, MRI, PET, and other equipment are grayscale images, which use only different gray values to represent different details. Color, by contrast, is an excellent descriptor. The color of a fusion image can not only preserve the useful information of each source image, but the color difference also distinguishes the details of each information source. Some researchers have suggested performing gray-gray fusion first and then color processing, but experiments prove that although this operation is simple, it can lead to image distortion and other hidden risks. How to apply color to image fusion is therefore a key problem [18, 19].
 (a)
Calculating the common part of images
 (b)
Calculating the unique parts of images
 (c)
Fusing images and color display
In the formula above, ∘ stands for the corresponding fusion operator: the function fuses the information from two images into one. To display the pseudo-color image, the two images must be entered into different color channels of RGB. The remaining channel can be filled as required (for example, with the superimposed image of CT and MRI, or the edge image of either one) or set to zero.
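The channel assignment can be sketched as follows. Placing CT in the red channel and MRI in the green channel is an illustrative choice (the paper does not fix the assignment); the third channel is either zero or, as the text allows, a superposition of the two.

```python
import numpy as np

def pseudo_color_fuse(ct, mri, third='zero'):
    """Stack two registered grayscale images into an RGB pseudo-color
    image: CT -> red, MRI -> green, remaining (blue) channel zero or
    the mean of the two sources."""
    blue = np.zeros_like(ct) if third == 'zero' else (ct + mri) // 2
    return np.dstack([ct, mri, blue]).astype(np.uint8)

ct = np.random.randint(0, 256, (32, 32))
mri = np.random.randint(0, 256, (32, 32))
rgb = pseudo_color_fuse(ct, mri)
```

Regions where only one modality has signal appear in that channel's hue, while overlapping structures mix into composite colors, which is what makes the source of each detail visible.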
3 Research and results
3.1 Fusion results
Wavelet fusion, pseudo-color fusion, and α channel fusion were performed between the CT images and the two sequences of the cerebral infarction patients, T1 and DWI (the original figures are Figs. 2, 3, and 4, respectively); the results are shown below.
4 Discussion
 (a)
Analysis from the information entropy parameters
The information entropy reflects how well the fused image expresses detail. The information entropy of wavelet-weighted fusion, α channel fusion, and pseudo-color fusion concentrates between 4.5 and 6. Moreover, the result of wavelet-weighted fusion is better than that of α channel fusion when the parameter is 50%, and both are better than basic pseudo-color fusion, while the DWI MRI and CT fusion value is larger than that of the T1 MRI and CT fusion.
 (b)
Analysis from the mutual information parameters
Mutual information measures the statistical correlation between the fused image and a source image as two random variables. The mutual information between the fused images and the original CT images for wavelet-weighted maximum fusion, α channel fusion, and pseudo-color fusion concentrates between 0.5 and 0.65; the α channel fusion result at a 50% parameter is better than basic pseudo-color fusion, and both are better than wavelet fusion. The mutual information between the fused images and the original T1 MRI images also concentrates between 0.5 and 0.65; the results of basic pseudo-color fusion and α channel fusion are nearly the same, and both are better than wavelet fusion, while the DWI MRI and CT fusion value is smaller than that of the T1 MRI and CT fusion.
 (c)
Analysis from mean grads
Mean grads, also called clarity, reflects the changes of grayscale in the fused image. The mean grads of wavelet-weighted maximum fusion and pseudo-color fusion are higher than that of α channel fusion, and of the former two, wavelet fusion is better than pseudo-color fusion, while the DWI MRI and CT fusion value is smaller than that of the T1 MRI and CT fusion.
 (d)
Analysis from the spatial frequency parameters
The spatial frequency of the fused images measures the richness of the images' detail information. The spatial frequency of wavelet fusion is higher than that of pseudo-color fusion, and both are better than α channel fusion, while the DWI MRI and CT fusion value is smaller than that of the T1 MRI and CT fusion.
In general, the mean grads and spatial frequency of wavelet-weighted fusion are better than those of pseudo-color fusion, which in turn are better than those of α channel fusion. For information entropy, wavelet-weighted fusion is better than α channel fusion, which is better than pseudo-color fusion. For the mutual information between the fused images and the original CT images, pseudo-color fusion is better than α channel fusion, which is better than wavelet-weighted fusion. For the mutual information between the fused images and the original MRI images, α channel fusion is better than pseudo-color fusion, which is better than wavelet-weighted fusion.
The information entropy of the DWI MRI and CT fusion is larger than that of the T1 MRI fusion; however, its mutual information, mean grads, and spatial frequency are all smaller than those of the T1 MRI fusion results (Table 3).
The objective evaluation parameters of fusion results

Method              Sequence  IE      MI (CT)  MI (MRI)  MG      SF

Wavelet fusion      T1        5.4457  0.6373   1.5771    3.7874  14.1608
                    DWI       5.5860  0.5449   0.4007    3.0359  11.1143
α-blending fusion   T1        5.3285  0.6496   2.1761    2.7557  9.8607
                    DWI       5.3444  0.5517   0.4006    2.2567  7.8133
False color fusion  T1        4.4925  0.6601   1.9267    3.2593  11.1544
                    DWI       5.0747  0.5581   0.3962    2.7197  9.5375
Main characteristics of the evaluation parameters

Parameters  Definition  Formula  Evaluation criterion 

Information entropy  The information entropy describes the capability of detail expression.  \( IE=\sum_{m=0}^{255}H\left({P}_m\right)=-\sum_{m=0}^{255}{P}_m\cdot \log \left({P}_m\right) \) (where m and P_{m} stand for the grayscale value and the probability that this grayscale value appears in the image.)  The larger the entropy is, the richer the details are, and the better the quality of the fused image is. 
Mutual information  Mutual information is a measurement of statistical correlation between two random variables.  \( MI\left(A,B\right)=\sum_{m=0}^{255}{P}_{AB}(m)\log \frac{P_{AB}(m)}{P_A(m)\cdot {P}_B(m)} \) (where P_{A}(m), P_{B}(m), and P_{AB}(m), respectively, denote the probability of grayscale m in image A, in image B, and in the joint distribution of images A and B.)  The higher the MI is, the more information the fused image extracts from the original images, and the better the fusion result is. 
Mean grads  Mean grads, also called clarity, reflects the changes of image grayscale.  \( MG=\frac{1}{M\times N}\sum_{i=1}^M\sum_{j=1}^N\sqrt{\varDelta xF{\left(i,j\right)}^2+\varDelta yF{\left(i,j\right)}^2} \) (where ΔxF(i, j) and ΔyF(i, j) denote the differences of F along the X and Y directions.)  The higher the mean grads is, the richer the grayscale expression is, and the clearer the image is. 
Spatial frequency  The spatial frequency of an image measures the richness of its detail information.  \( SF=\sqrt{RF^2+{CF}^2} \) (where RF and CF stand for the row frequency and column frequency, respectively.)  The higher the spatial frequency is, the richer the image levels are and the higher the contrast is; consequently, the better the fusion result is. 
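The four parameters in the table can be computed directly from their definitions. A sketch of three of them (mutual information follows the same histogram pattern; log base 2 for the entropy is an illustrative choice, as the paper does not specify the base):

```python
import numpy as np

def info_entropy(img):
    """IE = -sum p_m log2 p_m over the 256 gray levels."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]                               # 0*log(0) terms vanish
    return float(-np.sum(p * np.log2(p)))

def mean_grads(img):
    """Average gradient magnitude (clarity) from forward differences."""
    f = img.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]
    dy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt(dx**2 + dy**2)))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row and column differences."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return float(np.sqrt(rf**2 + cf**2))

flat = np.full((16, 16), 128, dtype=np.uint8)
print(info_entropy(flat), spatial_frequency(flat))  # 0.0 0.0
```

A constant image scores zero on all three measures, matching the criteria in the table: no detail, no grayscale change, no levels.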
5 Conclusions
Multimodality image fusion can process images of certain organs or tissues collected from diverse medical imaging equipment. This work mainly studied the fusion of MRI and CT images, taking the images of patients suffering from cerebral infarction as an example. T1 and DWI sequences were respectively subjected to wavelet fusion, pseudo-color fusion, and α channel fusion, and the resulting image data were objectively assessed and compared with respect to information entropy, mutual information, mean grads, and spatial frequency.
Notes
Funding
The authors acknowledge the Education Fund of the Education Department of Liaoning (Grant: L20150171), the Ministry of Education Fundamental Research Project of the National Seed Fund Project of China (Grant: N151904001), and the National Natural Science Foundation of China (Grant: 61302013).
About the authors
Yin Dai received the Ph.D. degree from the computer department of Northeastern University. She is now a lecturer at the Sino-Dutch Biomedical and Information Engineering School, Northeastern University. Her research is mainly on computer-aided diagnosis and medical image processing.
Zi-Xia Zhou is currently working toward the Ph.D. degree in the Department of Electronic Engineering at Fudan University. She received the B.S. degree in 2016 from Northeastern University. Her research interests are in medical image processing.
Lu Xu is currently working toward the M.S. degree at the Biomedical Science and Medical Engineering School, Beihang University. She received the B.S. degree in 2017 from Northeastern University. Her research interests are in medical image processing and deep learning.
Authors’ contributions
YD did the main work. ZZ and LX did the experiments.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
 1. R. Stokking, I.G. Zubal, M.A. Viergever, Display of fused images: methods, interpretation, and diagnostic improvements. Semin. Nucl. Med. 33(3), 219–227 (2003)
 2. G.M. Rojas, U. Raff, Image fusion in neuroradiology: three clinical examples including MRI of Parkinson disease. Comput. Med. Imag. Grap. 31(1), 17–27 (2007)
 3. G. Shruti, K. Ushah Kiran, R. Mohan, Multilevel medical image fusion using segmented image by level set evolution with region competition (Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 2005), pp. 1–4
 4. W. Quangui, Application of window technology in CT diagnosis. Pract. Med. J. 18(03), 286 (2011)
 5. L. Liguo, Effects of CT image factors. Journal of Medical Science 30(0307), 02 (2014)
 6. L. Wang, Research and Development of Multi-Phase Tissue 3D Visualization System Based on Medical Image (Hebei University of Technology, Tianjin, 2009)
 7. Z. Weijian, The basic principle and medical application of X-CT. Acad. Forum 099(30), 188–193 (2010)
 8. K. Xu, Medical Image Enhancement Processing and Analysis (Jilin University, Changchun, 2006)
 9. D.W. Townsend, Multimodality imaging of structure and function. Phys. Med. Biol. 53(4), R1–R39 (2008)
 10. W.R. Crum, L.D. Griffin, D.L.G. Hill, D.J. Hawkes, Zen and the art of medical image registration: correspondence, homology, and quality. NeuroImage 20(3), 1425–1437 (2003)
 11. J. Tsao, Interpolation artifacts in multimodality image registration based on maximization of mutual information. IEEE Trans. Med. Imaging 22(7), 854–864 (2003)
 12. J. Zhang, Z. Zhou, J. Teng, T. Li, Fusion algorithm of functional images and anatomical images based on wavelet transform, in 2nd International Conference on Biomedical Engineering and Informatics (IEEE Press, Tianjin, 2009), pp. 215–219
 13. W. Ge, L. Gao, Multimodality medical image fusion algorithm based on non-separable wavelet. Appl. Res. Comput. 26(5), 1965–1967 (2009)
 14. S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989)
 15. L. Yang, B. Guo, W. Ni, Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing 72(1), 203–211 (2008)
 16. R.C. Gonzalez, R.E. Woods, Digital Image Processing (Publishing House of Electronics Industry, Beijing, 2011)
 17. A. Nishie, A.H. Stolpen, M. Obuchi, Evaluation of locally recurrent pelvic malignancy: performance of T2- and diffusion-weighted MRI with image fusion. J. Magn. Reson. Imaging 28, 705–713 (2008)
 18. P. Bhargavi, H. Bindu, A novel medical image fusion with color transformation. Int. Conference Comput. Commun. Inform. 01, 08–10 (2015)
 19. J. Xiaoyu, Multi-image fusion based on false color. J. Beijing Inst. Technol. 17(5), 645–649 (1997)
 20. T. Porter, T. Duff, Compositing digital images. Comput. Graph. 18, 253–259 (1984)
 21. A.R. Smith, Alpha and the history of digital compositing. Microsoft Technical Memo 24(2010), 235238 (1995)
 22. A. Toet, J.M. Valeton, L.J. van Ruyven, Merging thermal and visual images by a contrast pyramid. Opt. Eng. 28, 287789–287792 (1989)
 23. M. Ignotte, A multiresolution Markovian fusion model for the color visualization of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 48(12), 4236–4247 (2010)
 24. G. Piella, New quality measures for image fusion, in The 7th International Conference on Information Fusion. Opt. Eng. (1), 542–546 (2004)
 25. C. Liu, X. Wang, Medical Imaging Diagnosis (People's Medical Publishing House)
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.