Semi-supervised Learning by Disentangling and Self-ensembling over Stochastic Latent Space

  • Prashnna Kumar Gyawali
  • Zhiyuan Li
  • Sandesh Ghimire
  • Linwei Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)


The success of deep learning in medical imaging is mostly achieved at the cost of large labeled data sets. Semi-supervised learning (SSL) provides a promising solution by leveraging the structure of unlabeled data to improve learning from a small set of labeled data. Self-ensembling is a simple approach used in SSL to encourage consensus among ensemble predictions of unknown labels, improving the model's generalization by making it less sensitive to perturbations of the latent space. Currently, such an ensemble is obtained through randomization, such as dropout regularization and random data augmentation. In this work, we hypothesize, from the generalization perspective, that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space. To this end, we present a stacked SSL model that utilizes unsupervised disentangled representation learning as the stochastic embedding for self-ensembling. We evaluate the presented model for multi-label classification using chest X-ray images, demonstrating its improved performance over related SSL models as well as the interpretability of its disentangled representations.
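The core idea in the abstract, encouraging consensus among ensemble predictions by exploiting the stochasticity of a learned latent space, can be illustrated with a minimal sketch. This is not the authors' architecture: the linear classifier head `classify`, the weight matrix `W`, and the `consistency_loss` function are hypothetical stand-ins, and the latent samples are drawn with the standard VAE reparameterization (mean plus scaled Gaussian noise), which the paper's stochastic embedding is assumed to resemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical linear classifier head over a 16-d latent space, 5 labels
W = rng.normal(size=(16, 5))

def classify(z):
    return softmax(z @ W)

def consistency_loss(mu, log_var):
    """Self-ensembling over a stochastic latent space (sketch):
    draw two reparameterized samples of z from the encoder's posterior
    and penalize disagreement between the resulting predictions."""
    std = np.exp(0.5 * log_var)
    z1 = mu + std * rng.normal(size=mu.shape)
    z2 = mu + std * rng.normal(size=mu.shape)
    p1, p2 = classify(z1), classify(z2)
    # Mean squared disagreement; needs no labels, so it can be
    # computed on unlabeled images
    return np.mean((p1 - p2) ** 2)

# Posterior statistics for a batch of 8 unlabeled images (made up here)
mu = rng.normal(size=(8, 16))
log_var = np.full((8, 16), -2.0)
loss = consistency_loss(mu, log_var)
print(loss >= 0.0)  # prints True
```

In a full SSL objective this unlabeled consistency term would be added, with a weighting coefficient, to the supervised cross-entropy computed on the small labeled subset.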


Keywords: Semi-supervised learning · Self-ensembling · Disentangled representation learning



This work is supported by NSF CAREER ACI-1350374 and NIH NHLBI R15HL140500.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

Prashnna Kumar Gyawali, Zhiyuan Li, Sandesh Ghimire, and Linwei Wang
Rochester Institute of Technology, Rochester, USA
