
LVC-Net: Medical Image Segmentation with Noisy Label Based on Local Visual Cues

  • Yucheng Shu
  • Xiao Wu
  • Weisheng Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

CNN-based deep architectures have been successfully applied to medical image semantic segmentation because of their effective feature learning mechanism. However, due to the lack of semantic guidance, such supervised learning models may be susceptible to annotation noise. To address this problem, we propose a novel medical image segmentation algorithm based on automatic label error correction. First, local visual saliency regions, namely the Local Visual Cues (LVCs), are captured from low-level feature channels. Then, a deformable spatial transformation module is integrated into our LVC-Net to build visual connections between the predictions and the LVCs. By combining noisy labels with image LVCs, a novel loss function is proposed based on their intrinsic spatial relationship. Our method can effectively suppress the influence of label noise by exploiting potential visual guidance during the learning process, thereby generating better semantic segmentation results. Comparative experiments on a hip X-ray image segmentation task demonstrate that our algorithm achieves significant improvement over state-of-the-art methods in the presence of noisy labels.
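
The abstract does not spell out the exact form of the LVC-based loss, so the following PyTorch sketch only illustrates the general idea: a per-pixel supervised term against the noisy annotation, attenuated where the prediction disagrees with a local-visual-cue saliency map, plus a cue-consistency term. The names lvc_map, lvc_weighted_loss, the foreground-class index, and the trade-off weight alpha are assumptions for illustration, not the authors' implementation.

    # Minimal sketch (assumed form, not the authors' code) of a noise-aware
    # segmentation loss that trusts the noisy label less where it conflicts
    # with a local visual cue (LVC) saliency map.
    import torch
    import torch.nn.functional as F

    def lvc_weighted_loss(logits, noisy_labels, lvc_map, alpha=0.5):
        """logits:       (N, C, H, W) network predictions.
        noisy_labels:    (N, H, W) integer masks with possible annotation errors.
        lvc_map:         (N, H, W) local visual cue saliency in [0, 1] (hypothetical input).
        alpha:           trade-off between the label term and the cue-consistency term.
        """
        # Standard per-pixel cross-entropy against the (possibly noisy) annotation.
        ce = F.cross_entropy(logits, noisy_labels, reduction="none")  # (N, H, W)

        # Predicted foreground probability (assuming class 1 is the foreground).
        prob_fg = torch.softmax(logits, dim=1)[:, 1]

        # Disagreement between the prediction and the local visual cues.
        cue_disagreement = F.l1_loss(prob_fg, lvc_map, reduction="none")

        # Attenuate the label term where prediction and cues conflict, and
        # additionally encourage consistency with the cues themselves.
        trust = 1.0 - cue_disagreement.detach()
        return (trust * ce).mean() + alpha * cue_disagreement.mean()

In practice, lvc_map could be derived from a low-level edge or saliency channel of the input image, but that choice is likewise an assumption made only for this sketch.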

Notes

Acknowledgments

This research was funded in part by the National Key R&D Program of China (2016YFC1000307-3), the National Natural Science Foundation of China (61801068 and 61906024), the Natural Science Foundation of Chongqing (cstc2016jcyjA0407), and the Scientific and Technological Research Program of Chongqing Education Commission (KJ1600419). The authors would like to thank Prof. Guoxin Nan for providing the data.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Chongqing University of Posts and Telecommunications, Chongqing, China
