Realistic Breast Mass Generation Through BIRADS Category

  • Hakmin Lee
  • Seong Tae Kim
  • Jae-Hyeok Lee
  • Yong Man Ro
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)


Generating realistic breast masses is an important task because large-scale databases of annotated breast masses are scarce. In this study, a novel framework for realistic breast mass generation that exploits the characteristics of the breast mass (i.e., the BIRADS category) is devised. For that purpose, a visual-semantic BIRADS description characterizing the breast mass is embedded into a deep network. The visual-semantic description is encoded together with image features and used to generate realistic masses according to that description. To verify the effectiveness of the proposed method, two public mammogram datasets were used. Qualitative and quantitative experimental results show that realistic breast masses can be generated according to the BIRADS category.
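The conditioning idea described above can be illustrated with a minimal, hypothetical sketch: a BIRADS description is mapped to a fixed-length embedding, concatenated with a noise vector, and fed to a generator. The `embed_description` word-hashing and the single random linear layer below are illustrative stand-ins, not the paper's actual description encoder or trained GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_description(desc, dim=16):
    """Hypothetical stand-in for the paper's visual-semantic encoder:
    hash each word of the BIRADS description into a fixed 16-d,
    L2-normalized bag-of-words vector."""
    vec = np.zeros(dim)
    for word in desc.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def generate_mass(desc_embedding, noise_dim=32, patch_size=8):
    """Toy conditional generator: concatenate noise with the description
    embedding and map the result to an image patch through one random
    linear layer (a placeholder for a trained GAN generator)."""
    z = rng.standard_normal(noise_dim)
    cond_input = np.concatenate([z, desc_embedding])
    W = rng.standard_normal((patch_size * patch_size, cond_input.size))
    patch = np.tanh(W @ cond_input)  # squash pixel values into [-1, 1]
    return patch.reshape(patch_size, patch_size)

patch = generate_mass(embed_description("oval circumscribed high density mass"))
print(patch.shape)  # (8, 8)
```

Because the same noise distribution is reused while the description embedding changes, different BIRADS descriptions steer the generator toward different outputs, which is the essence of description-conditioned generation.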


Lesion generation · BIRADS description · Generative adversarial network



This work was supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Hakmin Lee (1)
  • Seong Tae Kim (2)
  • Jae-Hyeok Lee (1)
  • Yong Man Ro (1)

  1. Image and Video Systems Lab, School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
  2. Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
