Multi-layer cube sampling for liver boundary detection in PET–CT images

Scientific Paper

Abstract

Liver metabolic information is a crucial marker for diagnosing fever of unknown origin, and liver recognition is the basis of automatically extracting this metabolic information. However, the poor quality of PET and CT images makes information extraction and target recognition in PET–CT images challenging, and existing detection methods cannot meet the requirements of liver recognition in PET–CT images, which is a key problem in large-scale PET–CT image analysis. A novel texture feature descriptor called multi-layer cube sampling (MLCS) is developed for liver boundary detection in low-dose CT and PET images. The cube sampling feature, which uses a bi-centric voxel strategy, is proposed to extract richer texture information: neighbour voxels are divided into three regions of the histogram by the centre voxel and the reference voxel, and the distribution of voxels across these regions is taken as the texture feature. Multi-layer texture features are further used to improve the capability and adaptability of target recognition in volume data. The proposed feature was tested on PET and CT images for liver boundary detection. For the liver in the volume data, the mean detection rate (DR) and mean error rate (ER) reached 95.15% and 7.81% in low-quality PET images, and 83.10% and 21.08% in low-contrast CT images. The experimental results demonstrate that the proposed method is effective and robust for liver boundary detection.
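The descriptor outlined above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes cubic neighbourhood layers of growing radius around the centre voxel, and it uses the layer mean intensity as the reference value (the paper's exact reference-voxel choice and histogram binning are not specified in the abstract, so both are assumptions here). The centre and reference intensities act as two thresholds that split the neighbour voxels into three regions, and the per-region counts over all layers form the feature vector.

```python
import numpy as np

def mlcs_feature(volume, center, n_layers=2):
    """Hedged sketch of a multi-layer cube sampling (MLCS) descriptor.

    For each cubic layer around the centre voxel, neighbour intensities
    are split into three regions by two thresholds: the centre voxel's
    intensity and a reference intensity (here the layer mean, an
    assumed stand-in for the paper's reference voxel). The three
    region counts per layer are concatenated into the texture feature.
    """
    z, y, x = center
    center_val = volume[z, y, x]
    features = []
    for layer in range(1, n_layers + 1):
        r = layer  # cube of side (2*layer + 1) around the centre
        cube = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        neigh = cube.ravel()
        reference_val = neigh.mean()  # assumption: reference value = layer mean
        lo, hi = sorted((center_val, reference_val))
        # three regions: below both thresholds, between them, above both
        features.extend([
            int(np.sum(neigh < lo)),
            int(np.sum((neigh >= lo) & (neigh <= hi))),
            int(np.sum(neigh > hi)),
        ])
    return np.asarray(features)
```

In this sketch each layer contributes three counts, so the feature length is `3 * n_layers`; per layer the counts sum to the layer's cube volume (27 voxels for the innermost layer, 125 for the next), which gives a simple sanity check on any reimplementation.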

Keywords

Boundary detection · Multi-layer · PET–CT · Feature extraction · Classification

Acknowledgements

We are very grateful to Dr. Li Huo from the Peking Union Medical College Hospital for providing the datasets used in this paper.

Funding

This work was supported by the National Key R&D Program of China (2017YFC0112000), and the National Science Foundation Program of China (61672099, 81627803, 61501030, 61527827).

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no competing interests.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Copyright information

© Australasian College of Physical Scientists and Engineers in Medicine 2018

Authors and Affiliations

  • Xinxin Liu¹
  • Jian Yang¹
  • Shuang Song¹
  • Hong Song²
  • Danni Ai¹
  • Jianjun Zhu¹
  • Yurong Jiang¹
  • Yongtian Wang¹

  1. Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
  2. School of Software, Beijing Institute of Technology, Beijing, China