
Segmentation of Fetal Adipose Tissue Using Efficient CNNs for Portable Ultrasound

  • Sagar Vaze
  • Ana I. L. Namburete
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11076)

Abstract

Adipose tissue mass has been shown to correlate strongly with fetal nourishment, which has consequences for health in infancy and later life. In rural areas of developing nations, ultrasound has the potential to be the key imaging modality due to its portability and low cost. However, many ultrasound image analysis algorithms are not comparably portable, with many taking several minutes to compute on modern CPUs.

The contributions of this work are threefold. Firstly, by adapting the popular U-Net, we show that CNNs can achieve excellent results in fetal adipose segmentation from ultrasound images. We then propose a reduced model, U-Ception, facilitating deployment of the algorithm on mobile devices. The U-Ception network provides a 98.4% reduction in model size for a 0.6% reduction in segmentation accuracy (mean Dice coefficient). We also demonstrate the clinical applicability of the work, showing that CNNs can be used to predict a trend between gestational age and adipose area.
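The abstract does not detail the U-Ception architecture, but its name suggests Xception-style depthwise separable convolutions as the source of the 98.4% size reduction. The sketch below is only an illustration of why that factorisation shrinks parameter counts; the kernel and channel sizes are hypothetical, not taken from the paper.

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise
    convolution mixing channels (biases omitted)."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 64 input and 64 output channels.
std = standard_conv_params(3, 64, 64)   # 36864 weights
sep = separable_conv_params(3, 64, 64)  # 4672 weights
print(f"standard: {std}, separable: {sep}, reduction: {1 - sep / std:.1%}")
```

For this hypothetical layer the separable factorisation cuts weights by roughly 87%; stacking many such layers is how an Xception-style encoder can approach the overall reduction the paper reports.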


Acknowledgements

The authors are grateful for support from the Royal Academy of Engineering under the Engineering for Development Research Fellowship scheme, and the INTERGROWTH-21st Consortium for provision of 3D fetal US image data.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
