
Computer-Assisted Nuclear Atypia Scoring of Breast Cancer: a Preliminary Study

  • Ziba Gandomkar
  • Patrick C. Brennan
  • Claudia Mello-Thoms

Abstract

Inter-pathologist agreement for nuclear atypia scoring of breast cancer is poor. To address this problem, previous studies suggested criteria for describing the variations in the appearance of tumor cells relative to normal cells; however, these criteria are still assessed subjectively by pathologists. Other studies used quantitative computer-extracted features for scoring, but the application of these tools remains limited because further improvement in their accuracy is required. This study proposes COMPASS (COMputer-assisted analysis combined with Pathologist’s ASSessment) for reproducible nuclear atypia scoring. COMPASS relies on both cytological criteria assessed subjectively by pathologists and computer-extracted textural features. Using machine learning, COMPASS combines these two sets of features and outputs a nuclear atypia score. COMPASS’s performance was evaluated using 300 images, scanned by two scanners from different vendors, for which expert-consensus-derived reference nuclear pleomorphism scores were available. A personalized model was built for each of three pathologists, who scored six atypia-related criteria for each image; COMPASS was trained and tested separately for each pathologist using leave-one-out cross-validation (LOOCV). Percentage agreement between COMPASS and the reference nuclear scores was 93.8%, 92.9%, and 93.1% for the three pathologists. COMPASS’s performance in nuclear grading was almost identical for both scanners, with Cohen’s kappa ranging from 0.80 to 0.86 across pathologists and scanners. The images were also assessed independently by two experienced senior pathologists, and the Cohen’s kappa of COMPASS was comparable to that of the two senior pathologists (0.79 and 0.68).
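
The following is a minimal, illustrative Python sketch of the kind of pipeline the abstract describes: a per-pathologist model that concatenates six subjectively scored cytological criteria with computer-extracted textural features, is trained and tested under LOOCV, and is evaluated by percentage agreement and Cohen’s kappa against the reference scores. The classifier choice (a random forest), the feature dimensions, and the synthetic data are assumptions for illustration only and do not reflect the authors’ actual implementation.

    # Minimal sketch (not the authors' implementation) of a COMPASS-style,
    # per-pathologist pipeline: six subjectively scored cytological criteria
    # are combined with computer-extracted textural features, a classifier is
    # trained and tested under leave-one-out cross-validation (LOOCV), and
    # agreement with the reference scores is reported as percentage agreement
    # and Cohen's kappa.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)

    # Synthetic stand-in data for one pathologist: 300 images, each with six
    # criterion scores, a vector of textural features, and an expert-consensus
    # reference nuclear pleomorphism score of 1, 2, or 3.
    n_images = 300
    criteria = rng.integers(1, 4, size=(n_images, 6))   # pathologist's criterion scores
    textural = rng.normal(size=(n_images, 40))          # computer-extracted textural features
    reference = rng.integers(1, 4, size=n_images)       # reference pleomorphism scores

    # Concatenate the subjective and computer-extracted feature sets.
    X = np.hstack([criteria, textural])
    y = reference

    # Train and test the personalized model with LOOCV.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

    # Agreement between predicted and reference scores.
    print(f"Percentage agreement: {100 * accuracy_score(y, pred):.1f}%")
    print(f"Cohen's kappa: {cohen_kappa_score(y, pred):.2f}")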

Keywords

Breast · Breast cancer · Microscopy · Nuclear atypia grading · Nuclear pleomorphism grading · Pattern recognition

Notes

Acknowledgments

We would like to thank the Mitosis-Atypia challenge organizers, who collected the data utilized in this study and kindly provided us access to their dataset after the challenge had concluded. We also acknowledge the High Performance Computing (HPC) service at the University of Sydney for providing the computing resources that contributed to the research results reported in this paper.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.


Copyright information

© Society for Imaging Informatics in Medicine 2019

Authors and Affiliations

  1. Medical Image Optimisation and Perception Group (MIOPeG), Discipline of Medical Imaging and Radiation Sciences, The University of Sydney, Sydney, Australia
  2. Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, USA
