
Applying Densely Connected Convolutional Neural Networks for Staging Osteoarthritis Severity from Plain Radiographs

  • Berk Norman
  • Valentina Pedoia
  • Adam Noworolski
  • Thomas M. Link
  • Sharmila Majumdar

Abstract

Osteoarthritis (OA) of the knee is most commonly classified on radiographs using the 0–4 Kellgren-Lawrence (KL) grading system, where 0 is normal, 1 shows doubtful signs of OA, 2 is mild OA, 3 is moderate OA, and 4 is severe OA. KL grading is widely used for the clinical assessment and diagnosis of OA, usually on a high volume of radiographs, making its automation highly relevant. We propose a fully automated algorithm for the detection of OA severity, expressed as KL grades, using a state-of-the-art neural network. Four thousand four hundred ninety bilateral PA fixed-flexion knee radiographs were collected from the Osteoarthritis Initiative dataset (age = 61.2 ± 9.2 years, BMI = 32.8 ± 15.9 kg/m², 42/58 male/female split) at six different time points. The left and right knee joints were localized using a U-net model, and the localized images were then used to train an ensemble of DenseNet neural network architectures to predict OA severity. The ensemble of DenseNets achieved testing sensitivities for no OA, mild, moderate, and severe OA of 83.7, 70.2, 68.9, and 86.0%, respectively; the corresponding specificities were 86.1, 83.8, 97.1, and 99.1%. Using saliency maps, we confirmed that the networks producing these results were in fact basing their predictions on the relevant osteoarthritic features. These results support the use of our automatic classifier to assist radiologists in making more accurate and precise diagnoses as the volume of radiographic images acquired in the clinic continues to grow.
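The pipeline summarized above lends itself to a compact illustration. The sketch below is not the authors' code; it shows, with standard PyTorch/torchvision components and under assumed placeholders (ensemble size, untrained weights, a random stand-in for a U-net-localized knee crop), the two ideas named in the abstract: averaging the softmax outputs of several DenseNet-121 classifiers, and computing a gradient-based saliency map for the predicted KL grade.

```python
# Minimal sketch (not the authors' released code) of DenseNet ensembling and
# saliency-map inspection for KL grading. Ensemble size, weights, and the
# input crop are illustrative placeholders.
import torch
import torchvision.models as models

NUM_KL_GRADES = 5  # KL grades 0-4


def build_densenet(num_classes: int = NUM_KL_GRADES) -> torch.nn.Module:
    # Standard torchvision DenseNet-121 with its classifier resized for KL grading.
    net = models.densenet121()
    net.classifier = torch.nn.Linear(net.classifier.in_features, num_classes)
    return net


def ensemble_predict(nets, knee_crop):
    # knee_crop: (1, 3, H, W) tensor holding one localized knee joint.
    # Average the per-network softmax probabilities across the ensemble.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(knee_crop), dim=1) for net in nets])
    return probs.mean(dim=0)  # shape (1, NUM_KL_GRADES)


def saliency_map(net, knee_crop):
    # Simonyan-style saliency: gradient of the winning class score with respect
    # to the input pixels, reduced over the channel dimension.
    knee_crop = knee_crop.clone().detach().requires_grad_(True)
    scores = net(knee_crop)
    scores[0].max().backward()
    return knee_crop.grad.abs().max(dim=1).values  # shape (1, H, W)


if __name__ == "__main__":
    nets = [build_densenet().eval() for _ in range(3)]  # ensemble size is illustrative
    crop = torch.rand(1, 3, 224, 224)  # stand-in for a U-net-localized knee crop
    print(ensemble_predict(nets, crop))
    print(saliency_map(nets[0], crop).shape)
```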

Keywords

Osteoarthritis · Radiographs · Neural networks · Machine learning

Notes

Funding Information

This project was supported by Grant Numbers P50 AR060752 (SM), R01AR046905 (SM), K99AR070902 (VP), and R61AR073552 (SM/VP) from the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health, United States of America (NIH-NIAMS) and Arthritis Foundation, Trial 6157. The Titan X Pascal used for this research was donated by the NVIDIA Corporation.

Compliance with Ethical Standards

Disclaimer

This content is solely the responsibility of the authors and does not necessarily reflect the views of the NIH-NIAMS or Arthritis Foundation.


Copyright information

© Society for Imaging Informatics in Medicine 2018

Authors and Affiliations

  • Berk Norman 1
  • Valentina Pedoia 1
  • Adam Noworolski 1
  • Thomas M. Link 1
  • Sharmila Majumdar 1
  1. Department of Radiology and Biomedical Imaging and Center for Digital Health Innovation, San Francisco, USA
