
A Bayesian Network to Assist Mammography Interpretation

  • Daniel L. Rubin
  • Elizabeth S. Burnside
  • Ross Shachter
Part of the International Series in Operations Research & Management Science (ISOR, volume 70)

Summary

Mammography is a vital screening test for breast cancer because early diagnosis is the most effective means of reducing mortality from this disease. Interpreting mammographic images and rendering the correct diagnosis, however, is challenging. The diagnostic accuracy of mammography varies with the expertise of the interpreting radiologist, resulting in significant variability in screening performance. Radiologists interpreting mammograms must manage uncertainty arising from a multitude of findings, and we believe that much of the variability in diagnostic performance arises from heuristic errors radiologists make in managing these uncertainties. We developed a Bayesian network that models the probabilistic relationships among breast diseases, mammographic findings, and patient risk factors. We performed preliminary evaluations on test cases from a mammography atlas and on a prospective series of patients whose diagnoses were confirmed by biopsy. The model appears useful for clarifying the decision about whether to biopsy abnormalities seen on mammography, and it can also help the radiologist correlate histopathologic findings with the mammographic abnormalities observed. Our preliminary experience suggests that this model may reduce variability and improve overall interpretive performance in mammography.
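The summary does not give the network's structure, but a toy example makes the idea concrete. The sketch below, written in Python with the pgmpy library, builds a three-node network (risk factor → disease → finding) and queries the posterior probability of malignancy after a mass is observed. Every node name and probability here is an illustrative assumption, not a value from the authors' published model.

    # Minimal sketch of the kind of network the chapter describes: one risk
    # factor, one disease node, one mammographic finding. All structure and
    # probabilities are illustrative assumptions. Requires pgmpy.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Risk factor -> disease -> finding, mirroring the causal direction the
    # summary describes (diseases produce findings; risk factors shift priors).
    model = BayesianNetwork([("FamilyHistory", "Disease"), ("Disease", "Mass")])

    cpd_fh = TabularCPD("FamilyHistory", 2, [[0.9], [0.1]],
                        state_names={"FamilyHistory": ["no", "yes"]})
    cpd_disease = TabularCPD(
        "Disease", 2,
        [[0.99, 0.95],   # P(benign | FamilyHistory = no, yes)
         [0.01, 0.05]],  # P(cancer | FamilyHistory = no, yes)
        evidence=["FamilyHistory"], evidence_card=[2],
        state_names={"Disease": ["benign", "cancer"],
                     "FamilyHistory": ["no", "yes"]})
    cpd_mass = TabularCPD(
        "Mass", 2,
        [[0.95, 0.40],   # P(mass absent  | Disease = benign, cancer)
         [0.05, 0.60]],  # P(mass present | Disease = benign, cancer)
        evidence=["Disease"], evidence_card=[2],
        state_names={"Mass": ["absent", "present"],
                     "Disease": ["benign", "cancer"]})

    model.add_cpds(cpd_fh, cpd_disease, cpd_mass)
    assert model.check_model()

    # Posterior probability of cancer after observing a mass in a patient
    # with a family history of breast cancer.
    posterior = VariableElimination(model).query(
        ["Disease"], evidence={"Mass": "present", "FamilyHistory": "yes"})
    print(posterior)

With these made-up numbers the posterior probability of cancer rises from the 5% prior to roughly 39% once the mass is entered as evidence; updating a disease posterior as findings accumulate is the same mechanism the authors propose for supporting the biopsy decision.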

Key words

Mammography, Diagnosis, Breast cancer, Bayesian networks


Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  • Daniel L. Rubin (1)
  • Elizabeth S. Burnside (2)
  • Ross Shachter (3)

  1. Stanford Medical Informatics, Stanford University, Stanford
  2. Department of Radiology, University of Wisconsin, Madison
  3. Department of Management Science and Engineering, Stanford University, Stanford
