
Small Defect Detection Using Convolutional Neural Network Features and Random Forests

  • Xinghui Dong
  • Chris J. Taylor
  • Tim F. Cootes
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11132)

Abstract

We address the problem of identifying small abnormalities in an imaged region, which is important in applications such as industrial inspection. The goal is to label the pixels corresponding to a defect with a minimum of false positives. A common approach is to run a sliding-window classifier over the image. Recent Fully Convolutional Networks (FCNs), such as U-Net, can be trained to identify pixels corresponding to abnormalities given a suitable training set. However, in many application domains it is hard to collect large numbers of defect examples, because by their nature defects are rare. Although U-Net can work in this scenario, we show that better results can be obtained by replacing the final softmax layer of the network with a Random Forest (RF) that uses features sampled from the earlier network layers. We also demonstrate that, rather than simply thresholding the resulting probability image to identify defects, it is better to compute Maximally Stable Extremal Regions (MSERs). We apply the approach to the challenging problem of identifying defects in radiographs of aerospace welds.
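As a rough illustration of the two ideas above, the sketch below (ours, not the authors' code) samples per-pixel features from intermediate feature maps of a U-Net-style network, trains a scikit-learn Random Forest in place of the final softmax layer, and extracts MSERs from the resulting probability image with OpenCV. The array shapes, the toy data, and the helper pixel_features are assumptions made purely for illustration.

    import numpy as np
    import cv2
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(feature_maps, rows, cols):
        """Concatenate per-pixel feature vectors sampled from CNN feature maps
        that have already been upsampled to the input-image resolution."""
        return np.hstack([fm[rows, cols, :] for fm in feature_maps])

    # Toy stand-ins for feature maps taken from intermediate layers of a trained
    # U-Net-like network (H x W x C arrays) and for the defect ground truth.
    H, W = 64, 64
    feature_maps = [np.random.rand(H, W, 16), np.random.rand(H, W, 8)]
    labels = np.zeros((H, W), dtype=np.uint8)
    labels[20:24, 30:36] = 1  # a small synthetic "defect" blob

    rows, cols = np.indices((H, W)).reshape(2, -1)
    X = pixel_features(feature_maps, rows, cols)
    y = labels[rows, cols]

    # A Random Forest replaces the network's final softmax layer.
    rf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, y)
    prob_image = rf.predict_proba(X)[:, 1].reshape(H, W)

    # Candidate defects are taken as Maximally Stable Extremal Regions of the
    # probability image, rather than the result of a single fixed threshold.
    prob_u8 = np.clip(prob_image * 255, 0, 255).astype(np.uint8)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(prob_u8)
    print("candidate defect regions:", len(regions))

In practice the feature maps, training pixels, and labels would of course come from real radiographs and annotated defects; the sketch only shows how the forest and the MSER step slot together.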

Keywords

Defect detection · Non-destructive evaluation · CNN · Local features · Random Forests


Acknowledgement

This work is supported by the Engineering and Physical Sciences Research Council (EPSRC) (No. EP/L022125/1). The Titan Xp used for this research was donated by the NVIDIA Corporation.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Xinghui Dong (corresponding author)
  • Chris J. Taylor
  • Tim F. Cootes

  1. Centre for Imaging Sciences, The University of Manchester, Manchester, UK
