
Journal of Medical Systems, 43:329

Stomach Deformities Recognition Using Rank-Based Deep Features Selection

  • Muhammad Attique Khan
  • Muhammad Sharif (Email author)
  • Tallha Akram
  • Mussarat Yasmin
  • Ramesh Sunder Nayak
Image & Signal Processing
Part of the following topical collections:
  1. Recent Advances in Deep Learning for Biomedical Signal Processing, Health Informatics and Computer Vision

Abstract

Doctors use various clinical technologies, such as MRI, endoscopy, and CT scans, to identify a patient's deformities during examination. Among these technologies, wireless capsule endoscopy (WCE) is an advanced procedure used to detect malformations of the digestive tract. A complete WCE procedure captures more than 57,000 frames, and doctors must examine the entire video frame by frame, a tedious task even for an experienced gastroenterologist. In this article, a novel computerized method is proposed for the classification of abdominal infections of the gastrointestinal tract from WCE images. The three core steps of the proposed system are segmentation, deep feature extraction and fusion, and robust feature selection. Ulcer abnormalities in WCE videos are first extracted through a proposed color-feature-based low-level and high-level saliency (CFbLHS) estimation method. A DenseNet CNN model is then employed, and features are computed through transfer learning (TL) prior to feature optimization using Kapur's entropy. A parallel fusion methodology based on the maximum feature value (PMFV) is adopted. For feature selection, Tsallis entropy is calculated and the features are sorted in descending order; the top 50% of the ranked features are selected and passed to a multilayered feedforward neural network classifier for recognition. Simulations on the collected WCE dataset achieve a maximum accuracy of 99.5% in 21.15 s.
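To make the rank-based selection stage concrete, the sketch below (Python, not the authors' code) illustrates one way to compute per-feature Tsallis entropy, rank the features in descending order, keep the top 50%, and classify with a multilayer feedforward neural network. The feature matrix, the entropy order q, the histogram bin count, and the network layer sizes are illustrative assumptions; the deep features are assumed to have already been extracted and fused (e.g. from a pretrained DenseNet).

```python
# Minimal sketch of a Tsallis-entropy rank-based feature selection step
# followed by a multilayer feedforward neural network classifier.
# Assumptions: X holds pre-extracted deep features of shape
# (n_samples, n_features); q, bins, and layer sizes are illustrative.

import numpy as np
from sklearn.neural_network import MLPClassifier

def tsallis_entropy(values, q=2.0, bins=32):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of one feature
    column, estimated from a normalized histogram of its values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                      # ignore empty bins
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def select_top_half(X, q=2.0):
    """Rank feature columns by Tsallis entropy (descending) and
    return the indices of the top 50%."""
    scores = np.array([tsallis_entropy(X[:, j], q=q) for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1]          # descending entropy
    return order[: X.shape[1] // 2]           # top 50% of ranked features

# Toy usage with random "deep features"; replace with fused DenseNet features.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 1024)), rng.integers(0, 3, 200)
X_test = rng.normal(size=(50, 1024))

keep = select_top_half(X_train)
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500)
clf.fit(X_train[:, keep], y_train)
pred = clf.predict(X_test[:, keep])
```

The same column indices selected on the training features are reused on the test features, so the selection is fitted once and applied consistently at inference time.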

Keywords

Colorectal cancer · WCE · Saliency estimation · Deep features selection · Features fusion

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest. All authors contributed equally to this work, including results compilation and other technical support.

Ethical approval (for animals)

Not Applicable.

Ethical approval (for human)

The datasets used in this work, such as PH2, ISBI 2016, and ISBI 2017, are publicly available.

Informed consent

Not Applicable.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Muhammad Attique Khan (1)
  • Muhammad Sharif (2) (Email author)
  • Tallha Akram (3)
  • Mussarat Yasmin (2)
  • Ramesh Sunder Nayak (4)
  1. Department of CS&E, HITEC University, Taxila, Pakistan
  2. Department of CS, COMSATS University Islamabad, Islamabad, Pakistan
  3. Department of E&CE, COMSATS University Islamabad, Islamabad, Pakistan
  4. Information Science, Canara Engineering College, Mangaluru, India
