Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks
The purpose of this pilot study was to determine whether a deep convolutional neural network can be trained, with limited image data, to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized independently by three abdominal radiologists as obstructive or non-obstructive, and the majority classification served as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the ImageNet Large Scale Visual Recognition Challenge 2014 dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78–0.89). At the maximum Youden index (sensitivity + specificity − 1), the sensitivity of the system for small bowel obstruction was 83.8%, with a specificity of 68.1%. These results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
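The reported AUC of 0.84 is the area under the ROC curve over the test set's per-image classifier scores. As an illustrative sketch only (not the authors' code), AUC can be computed with the rank-based Mann–Whitney formulation, shown here in plain Python on hypothetical toy scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 1/2).
    `scores` are classifier outputs; `labels` are 1 (obstruction) or 0.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for six images (three obstructive, three not):
print(auc([0.1, 0.2, 0.4, 0.35, 0.8, 0.9], [0, 0, 0, 1, 1, 1]))
```

This pairwise formulation is equivalent to integrating the empirical ROC curve, which is why AUC is often reported alongside a Mann–Whitney-based confidence interval.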
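The operating point behind the reported 83.8% sensitivity and 68.1% specificity is the threshold maximizing the Youden index, J = sensitivity + specificity − 1. A minimal sketch of that threshold search, assuming per-image classifier scores (hypothetical data, not the study's code):

```python
def youden_threshold(scores, labels):
    """Return (threshold, J) maximizing J = sensitivity + specificity - 1,
    scanning each distinct score as a candidate cutoff and predicting
    positive when score >= threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical scores for six images (three obstructive, three not):
print(youden_threshold([0.1, 0.2, 0.4, 0.35, 0.8, 0.9], [0, 0, 0, 1, 1, 1]))
```

Maximizing J weights sensitivity and specificity equally; in a screening context one might instead fix a minimum sensitivity and accept the resulting specificity.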
Keywords: Small bowel obstruction · Machine learning · Artificial neural networks · Digital image processing · Deep learning
Compliance with ethical standards
No funding was received for this study.
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. For this type of study, formal consent is not required.
This retrospective study was approved by our institutional research ethics board. Informed consent was not required.