Feature2Mass: Visual Feature Processing in Latent Space for Realistic Labeled Mass Generation

  • Jae-Hyeok Lee
  • Seong Tae Kim
  • Hakmin Lee
  • Yong Man Ro (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11134)

Abstract

This paper presents a method for generating realistic labeled masses. Recently, there have been many attempts to apply deep learning to various bio-image computing fields, including computer-aided detection and diagnosis. Training a deep network model to perform well in these fields requires a large amount of labeled data. In many bio-imaging fields, however, large labeled datasets are scarcely available. Although a few studies have addressed this problem with generative models, several issues remain: (1) the generated bio-images do not look realistic; (2) the variation of the generated bio-images is limited; and (3) an additional label annotation task is needed. In this study, we propose a realistic labeled bio-image generation method based on visual feature processing in latent space. Experimental results show that mass images generated by the proposed method are realistic and cover a wide expression range of targeted mass characteristics.
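The core idea of visual feature processing in latent space can be illustrated with a minimal sketch: a labeled mass image is encoded to a latent code, the code is manipulated (e.g., blended with the code of a mass exhibiting a different targeted characteristic), and the modified code is decoded back into an image that inherits the label. The snippet below is a hypothetical illustration, not the paper's actual architecture; the toy latent codes and the `interpolate_latent` helper are assumptions for demonstration, standing in for real encoder outputs and a trained generator.

```python
import numpy as np

def interpolate_latent(z_a, z_b, alpha):
    """Linearly blend two latent codes.

    Decoding the blended code with a trained generator would yield a mass
    whose visual features lie between the two source masses, while the
    known labels of the sources can be carried over to the synthetic image.
    """
    return (1.0 - alpha) * z_a + alpha * z_b

# Toy latent codes standing in for encoder outputs of two labeled masses
# with different targeted characteristics (e.g., margin or shape).
z_round = np.array([1.0, 0.0, 0.5])
z_spiculated = np.array([0.0, 1.0, -0.5])

# Midpoint in latent space: a mass with intermediate characteristics.
z_mid = interpolate_latent(z_round, z_spiculated, 0.5)
print(z_mid)  # [0.5 0.5 0. ]
```

Sweeping `alpha` over a range of values is one simple way such latent-space processing can widen the expression range of a targeted characteristic without manual re-annotation.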

Keywords

Feature processing in latent space · Image synthesis · Bio-image generation · Medical mass generation

Notes

Acknowledgement

This work was supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jae-Hyeok Lee (1)
  • Seong Tae Kim (1)
  • Hakmin Lee (1)
  • Yong Man Ro (1, email author)

  1. School of Electrical Engineering, KAIST, Daejeon, Republic of Korea