Automated CNN-Based Reconstruction of Short-Axis Cardiac MR Sequence from Real-Time Image Data
We present a methodology for reconstructing full-cycle, respiratory- and cardiac-gated short-axis cine MR sequences from real-time MR data. For patients who are too ill or otherwise unable to hold their breath consistently, real-time MR sequences are the preferred means of acquiring cardiac images, but they suffer from inferior image quality compared to standard short-axis sequences and lack ECG-based cardiac gating. To construct a sequence from real-time images that replicates, as closely as possible, the characteristics of a short-axis series, the phase of the cardiac cycle must be estimated for each image, and the left ventricle must be identified for use as a landmark in slice re-alignment. Our method employs CNN-based deep learning to segment the left ventricle in the real-time sequence; the segmentation is then used to estimate the blood-pool volume and thus the position of each image in the cardiac cycle. We then use manifold learning to account for the respiratory cycle, selecting the best-quality images at expiration. From these, a subset is automatically chosen to reconstruct a single cardiac cycle, and the images and segmentations are aligned. The aligned pool segmentations can then be used to calculate volume over time and hence volume-based biomarkers.
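The volume-based phase estimate described above can be sketched in a few lines of NumPy. This is an illustrative outline, not the authors' implementation: `pool_volumes`, `cardiac_phase`, and `lv_centroid` are hypothetical helper names, and the phase convention (0 at end-diastole, 0.5 at end-systole) is an assumption made here for concreteness.

```python
import numpy as np

def pool_volumes(masks, pixel_area_mm2, slice_thickness_mm):
    """Per-frame blood-pool volume (mm^3) from binary LV masks of shape (frames, H, W)."""
    return masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_mm2 * slice_thickness_mm

def cardiac_phase(volumes):
    """Assign each frame a phase in [0, 1): 0 at end-diastole (maximum volume),
    0.5 at end-systole (minimum volume). The sign of the volume derivative
    separates the contraction half of the cycle from the relaxation half."""
    volumes = np.asarray(volumes, dtype=float)
    norm = (volumes - volumes.min()) / (volumes.max() - volumes.min())
    contracting = np.gradient(volumes) < 0  # volume decreasing -> systolic half
    return np.where(contracting, (1.0 - norm) / 2.0, np.mod((1.0 + norm) / 2.0, 1.0))

def lv_centroid(mask):
    """Centre of mass (row, col) of one binary LV mask, usable as the
    landmark for translational slice re-alignment."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])
```

In this sketch, aligning two slices reduces to translating one by the difference of their `lv_centroid` values; a full pipeline would additionally gate frames by respiratory position before phase assignment.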
Keywords: Automatic segmentation · Real-time cardiac imaging · Image-based motion correction
This research was partly supported by the National Institute for Health Research (NIHR) Biomedical Research Centre (BRC) at Guy’s and St Thomas’ NHS Foundation Trust. Views expressed are those of the authors and not necessarily of the NHS, the NIHR, or the Dept. of Health.