Self-Similarity Image Registration Based on Reorientation of the Hessian
The modality independent neighbourhood descriptor (MIND) is a local registration metric based on the principle of self-similarity. However, the metric requires recalculation of the self-similarity during registration, as self-similarity inherently changes under image deformation. We propose a self-similarity registration method based on the Hessian (HE) that deals with this recalculation issue efficiently. Representing the local self-similarity via the Hessian makes it possible to keep the descriptor up to date during deformation. As a result, the registration procedure is efficient and less prone to falling into local minima. We show that reorienting the Hessian yields a significant improvement (p < 0.05) over omitting the reorientation. Our technique also outperforms the existing MIND method on the DIR-Lab dataset as well as on abdominal MRI datasets, although these differences are not statistically significant. Ultimately, we will use the technique to quantify Crohn’s disease severity based on the relative contrast enhancement in registered images.
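The core idea of keeping the descriptor up to date can be illustrated as follows. Under a local affine approximation of the deformation with Jacobian J, a Hessian-based descriptor transforms as JᵀHJ, so it can be reoriented in place instead of being recomputed from the deformed image. Below is a minimal 2-D NumPy sketch of this reorientation step; the function names and the finite-difference Hessian are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_hessian(img):
    # Hypothetical helper: 2x2 Hessian per pixel via second-order
    # finite differences (np.gradient returns derivatives along
    # axis 0 then axis 1).
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    H = np.empty(img.shape + (2, 2))
    H[..., 0, 0] = gxx
    H[..., 0, 1] = 0.5 * (gxy + gyx)  # symmetrise mixed derivatives
    H[..., 1, 0] = H[..., 0, 1]
    H[..., 1, 1] = gyy
    return H

def reorient_hessian(H, J):
    # Reorient the per-pixel Hessian under the local deformation
    # Jacobian J: H' = J^T H J. This keeps the descriptor aligned
    # with the deformed image without recomputing it from scratch.
    return np.einsum('...ji,...jk,...kl->...il', J, H, J)
```

With an identity Jacobian the descriptor is unchanged, and for any J the reoriented Hessian remains symmetric, as expected of a second-derivative descriptor.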
Keywords: Image registration, Hessian reorientation, Crohn’s disease