A Skip-Connected 3D DenseNet with Adversarial Training for Volumetric Segmentation
In this paper, we propose a novel end-to-end adversarial training architecture for volumetric brain segmentation that enforces long-range spatial label contiguity and label consistency. The proposed model consists of two networks: a generator and a discriminator. The generator takes a volumetric image as input and produces a volumetric probability map for each tissue class. The discriminator then learns to differentiate the ground-truth maps from the generator's probability maps. We design the discriminator in a fully convolutional manner so that it distinguishes the predicted probability maps from the ground-truth segmentation distribution while accounting for spatial information at the voxel level, which makes the discriminator difficult to train. To address this, the proposed discriminator outputs a 3D confidence map indicating which regions of the probability maps are close to the ground truth. Guided by this 3D confidence map, the generator refines its predictions toward the ground-truth maps with high-order structural consistency.
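The generator objective described above can be sketched as a voxel-wise segmentation loss combined with an adversarial term weighted by the discriminator's 3D confidence map. The following NumPy sketch illustrates the loss computation on toy data; the shapes, the stand-in confidence map, and the weighting hyperparameter `lambda_adv` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy volume: C tissue classes over a D x H x W grid (assumed sizes).
C, D, H, W = 4, 8, 8, 8
rng = np.random.default_rng(0)

logits = rng.normal(size=(C, D, H, W))        # generator output (pre-softmax)
labels = rng.integers(0, C, size=(D, H, W))   # ground-truth segmentation
prob = softmax(logits, axis=0)                # volumetric probability map

# Voxel-wise cross-entropy: the standard segmentation loss.
ce = -np.log(np.take_along_axis(prob, labels[None], axis=0) + 1e-8).mean()

# The fully convolutional discriminator outputs a 3D confidence map in (0, 1)
# indicating, per voxel, how close the probability map is to the ground-truth
# distribution. Here a random map stands in for the discriminator's output.
confidence = rng.uniform(size=(D, H, W))

# Adversarial term: the generator is pushed to raise the discriminator's
# confidence at every voxel; lambda_adv is a hypothetical trade-off weight.
lambda_adv = 0.01
adv = -np.log(confidence + 1e-8).mean()

total_loss = ce + lambda_adv * adv
```

In this formulation the confidence map acts as a spatial feedback signal: regions judged far from the ground-truth distribution contribute larger adversarial gradients, focusing refinement where predictions are weakest.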
Keywords: Adversarial training · Brain segmentation · Deep convolutional networks
This research was supported partly by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2018R1C1B6007472). This research was supported partly by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01798) supervised by the IITP (Institute for Information & communications Technology Promotion).