Abstract
The emergence of 3D ultrasound (US) inspires a multitude of automated prenatal examinations. However, studies on the structured description of the whole fetus in 3D US remain rare. In this paper, we propose to estimate the 3D pose of the fetus in US volumes to facilitate its quantitative analyses at global and local scales. Given the great challenges in 3D US, including the high volume dimension, poor image quality, symmetric ambiguity of anatomical structures and large variations of fetal pose, our contribution is three-fold. First, this is the first work on 3D pose estimation of the fetus in the literature. We aim to extract the skeleton of the whole fetus and assign different segments/joints with correct torso/limb labels. Second, we propose a self-supervised learning (SSL) framework to fine-tune the deep network and form visually plausible pose predictions. Specifically, we leverage landmark-based registration to effectively encode case-adaptive anatomical priors and generate an evolving label proxy for supervision. Third, to enable our 3D network to perceive better contextual cues from higher-resolution input under limited computing resources, we further adopt the gradient check-pointing (GCP) strategy to save GPU memory and improve the prediction. Extensively validated on a large 3D US dataset, our method tackles varying fetal poses and achieves promising results. 3D pose estimation of the fetus has potential to serve as a map that provides navigation for many advanced studies.
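The gradient check-pointing strategy mentioned in the abstract trades compute for GPU memory: intermediate activations inside a sub-network are discarded during the forward pass and recomputed on the fly during backpropagation. A minimal sketch of this idea using PyTorch's `torch.utils.checkpoint` is shown below; the toy 3D convolutional stage is illustrative only and is not the paper's actual pose-estimation architecture.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# A toy 3D conv stage standing in for one block of a volumetric network
# (layer sizes are illustrative, not the paper's architecture).
stage = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
)

x = torch.randn(1, 1, 16, 16, 16, requires_grad=True)

# Regular forward: all intermediate activations are kept for backward.
y_plain = stage(x)

# Checkpointed forward: activations inside `stage` are discarded and
# recomputed during backward, trading extra compute for GPU memory.
y_ckpt = checkpoint(stage, x, use_reentrant=False)

# Both paths produce the same output (and the same gradients),
# so checkpointing changes the memory profile, not the model.
print(torch.allclose(y_plain, y_ckpt))
```

The memory saving lets a fixed GPU budget accommodate larger input volumes, which is how the paper motivates feeding higher-resolution 3D US data to the network.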
Acknowledgments
The work in this paper was supported by grants from the Research Grants Council of Hong Kong SAR (Project No. CUHK14225616), the National Natural Science Foundation of China (Project No. U1813204) and the Shenzhen Peacock Plan (No. KQTD2016053112051497, KQJSCX20180328095606003).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Yang, X. et al. (2019). FetusMap: Fetal Pose Estimation in 3D Ultrasound. In: Shen, D., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science(), vol 11768. Springer, Cham. https://doi.org/10.1007/978-3-030-32254-0_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-32253-3
Online ISBN: 978-3-030-32254-0