Synergistic attention U-Net for sublingual vein segmentation

  • Tingxiao Yang
  • Yuichiro Yoshimura
  • Akira Morita
  • Takao Namiki
  • Toshiya Nakaguchi
Original Article


The tongue is one of the most sensitive organs of the human body, and changes in the tongue reflect changes in a person's physiological state. One tongue feature that can be used to assess human blood circulation is the shape of the sublingual vein. This paper therefore aims to segment the sublingual vein from RGB images of the tongue. In conventional deep-learning-based segmentation training, input images are generally resized to a lower resolution to reduce training cost. However, relative to the entire image, the sublingual vein is much smaller than the tongue, so resized inputs are likely to prevent the network from capturing the small target and to produce an “all black” output. Using a small dataset, this study first shows that training sublingual vein segmentation is considerably harder than tongue segmentation. We also compare the effect of different input sizes on segmentation of the small sublingual vein. To address these problems, we propose a synergistic attention network. By decomposing the entire encoder–decoder framework and updating its parameters synergistically, the proposed network not only accelerates the convergence of training but also avoids poor local optima and keeps training stable, without increasing the training cost or requiring additional regional auxiliary labels.
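The downsizing problem described above can be illustrated with a minimal NumPy sketch (hypothetical image and vein sizes, not the authors' data or pipeline): a thin structure can vanish entirely under nearest-neighbour downsizing, and even at full resolution an “all black” prediction already scores near-perfect pixel accuracy, which is the trivial local optimum the paper refers to.

```python
import numpy as np

# Hypothetical example: a 1024x1024 tongue image in which the sublingual
# vein is a thin structure only 3 pixels wide.
H = W = 1024
mask = np.zeros((H, W), dtype=np.uint8)
mask[501:504, 300:700] = 1            # vein: 3 x 400 px, ~0.1% of the image

# Nearest-neighbour downsizing to 256x256 (keep every 4th pixel) can
# erase the thin target entirely: none of rows 501-503 is sampled.
small = mask[::4, ::4]
print(mask.sum(), small.sum())        # 1200 -> 0: the vein disappears

# Even at full resolution, an "all black" prediction already achieves
# ~99.9% pixel accuracy, so training can stall in this trivial solution.
all_black_acc = 1.0 - mask.mean()
print(round(all_black_acc, 4))        # 0.9989
```

This is why the resolution of the input, relative to the size of the target structure, matters far more for the sublingual vein than for the tongue itself.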


Keywords: Tongue · Sublingual veins · Segmentation · Synergistic attention · Deep learning


Copyright information

© International Society of Artificial Life and Robotics (ISAROB) 2019

Authors and Affiliations

  • Tingxiao Yang (1), corresponding author
  • Yuichiro Yoshimura (2)
  • Akira Morita (3)
  • Takao Namiki (3)
  • Toshiya Nakaguchi (2)
  1. Graduate School of Science and Technology, Chiba University, Chiba, Japan
  2. Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
  3. Graduate School of Medicine, Chiba University, Chiba, Japan
