Abstract
Unsupervised domain adaptation is valuable for medical image segmentation. Previous work has mainly considered the setting with a single source domain and a single target domain; in practice, however, multiple source and/or target domains are often available. Instead of performing adaptation pairwise, in this work we study how to align multiple domains simultaneously to improve the segmentation performance of domain adaptation. We use a variational autoencoder (VAE) framework to map all domains into a common feature space and estimate their corresponding latent distributions. By mixing domains and minimizing the distance between these distributions, the proposed framework extracts domain-invariant features. We validated the method on multi-sequence cardiac MR images for unsupervised segmentation. Experiments demonstrated that mixing target domains together can improve segmentation accuracy when the label distribution of the mixed target domains is closer to that of the source domain than that of each individual target domain. Compared with state-of-the-art methods, the proposed framework obtained promising results.
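The core idea above is to encode every domain into a shared latent space and then minimize a distance between the resulting feature distributions, with target domains mixed together before alignment. As a minimal illustrative sketch (not the paper's actual model), the snippet below uses maximum mean discrepancy (MMD), one common sample-based distribution distance, on hypothetical latent codes; the arrays `z_source`, `z_target_a`, and `z_target_b` stand in for VAE encoder outputs and are purely synthetic.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimator of squared maximum mean discrepancy:
    # a sample-based distance between the distributions of x and y,
    # zero when the two samples are identical.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
# Hypothetical latent codes from a shared encoder (synthetic data).
z_source = rng.normal(0.0, 1.0, (200, 8))
z_target_a = rng.normal(0.5, 1.0, (100, 8))
z_target_b = rng.normal(-0.5, 1.0, (100, 8))

# Mix the two target domains into one sample before alignment,
# then measure its distance to the source distribution.
z_mixed = np.concatenate([z_target_a, z_target_b], axis=0)
d_mixed = mmd2(z_source, z_mixed)
d_a = mmd2(z_source, z_target_a)
```

In a training loop, a term like `d_mixed` would be added to the segmentation/reconstruction loss and back-propagated through the encoder, pushing all domains toward a common feature distribution.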
This work was funded by the National Natural Science Foundation of China (grant nos. 61971142, 62111530195 and 62011540404).
Copyright information
© 2022 Springer Nature Switzerland AG
Wu, F., Li, L., Zhuang, X. (2022). Multi-modality Cardiac Segmentation via Mixing Domains for Unsupervised Adaptation. In: Puyol Antón, E., et al. (eds.) Statistical Atlases and Computational Models of the Heart. Multi-Disease, Multi-View, and Multi-Center Right Ventricular Segmentation in Cardiac MRI Challenge. STACOM 2021. Lecture Notes in Computer Science, vol. 13131. Springer, Cham. https://doi.org/10.1007/978-3-030-93722-5_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93721-8
Online ISBN: 978-3-030-93722-5