Abstract
Model quantization is leveraged to reduce the memory consumption and computation time of deep neural networks by representing weights and activations at a lower bit resolution than their high-precision floating-point counterparts. The suitable level of quantization is directly related to model performance. Lowering the quantization precision (e.g., to 2 bits) reduces the amount of memory required to store model parameters and the amount of logic required to implement computational blocks, which in turn reduces the power consumption of the entire system. These benefits typically come at the cost of reduced accuracy. The main challenge is to quantize a network as much as possible while maintaining its accuracy. In this work, we present a quantization method for the U-Net architecture, a popular model in medical image segmentation. We then apply our quantization algorithm to three datasets: (1) the Spinal Cord Gray Matter Segmentation challenge (GM), (2) the ISBI challenge for segmentation of neuronal structures in electron microscopy (EM), and (3) the public National Institutes of Health (NIH) dataset for pancreas segmentation in abdominal CT scans. The reported results demonstrate that with only 4 bits for weights and 6 bits for activations, we obtain an 8-fold reduction in memory requirements while losing only \(2.21\%\), \(0.57\%\), and \(2.09\%\) Dice overlap score on the EM, GM, and NIH datasets, respectively. Our fixed-point quantization provides a flexible trade-off between accuracy and memory requirements that previous quantization methods for U-Net do not offer (our code is released at https://github.com/hossein1387/U-Net-Fixed-Point-Quantization-for-Medical-Image-Segmentation).
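The fixed-point scheme described in the abstract can be illustrated with a minimal sketch: values are scaled by a power of two, rounded to the nearest integer, clipped to the signed range of the chosen bit width, and rescaled. The function name and the split between integer and fractional bits below are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def fixed_point_quantize(x, total_bits, frac_bits):
    """Quantize an array to signed fixed-point with `total_bits` bits,
    of which `frac_bits` are fractional (Q-format).

    Each value is rounded to the nearest representable step (1 / 2**frac_bits)
    and clipped to the signed integer range of `total_bits` bits.
    """
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))        # most negative code
    qmax = 2 ** (total_bits - 1) - 1       # most positive code
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

# 4-bit weights with 3 fractional bits: step size is 1/8,
# representable range is [-1.0, 0.875].
w = np.array([0.30, -0.95, 1.40, 0.05])
print(fixed_point_quantize(w, total_bits=4, frac_bits=3))
```

With 4 bits for weights, each parameter occupies an eighth of a 32-bit float, which matches the 8-fold memory reduction reported in the abstract; activations would be quantized the same way with 6 bits.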
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
AskariHemmat, M. et al. (2019). U-Net Fixed-Point Quantization for Medical Image Segmentation. In: Zhou, L., et al. (eds.) Large-Scale Annotation of Biomedical Data and Expert Label Synthesis and Hardware Aware Learning for Medical Imaging and Computer Assisted Intervention. LABELS HAL-MICCAI CuRIOUS 2019. Lecture Notes in Computer Science, vol. 11851. Springer, Cham. https://doi.org/10.1007/978-3-030-33642-4_13
DOI: https://doi.org/10.1007/978-3-030-33642-4_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-33641-7
Online ISBN: 978-3-030-33642-4
eBook Packages: Computer Science; Computer Science (R0)