
Deep Convolutional Neural Networks Based Framework for Estimation of Stomata Density and Structure from Microscopic Images

  • Swati Bhugra
  • Deepak Mishra
  • Anupama Anupama
  • Santanu Chaudhury
  • Brejesh Lall
  • Archana Chugh
  • Viswanathan Chinnusamy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11134)

Abstract

Analysis of stomata density and configuration from scanning electron microscope (SEM) images of a leaf surface is an effective way to characterize a plant’s behaviour under various environmental stresses (drought, salinity, etc.). Existing methods for phenotyping these stomatal traits rely on manual or semi-automatic labeling and segmentation of SEM images, which is a low-throughput process when a large number of SEM images must be investigated for statistical analysis. To overcome this limitation, we propose a novel automated pipeline that leverages deep convolutional neural networks for stomata detection and quantification. The proposed framework outperforms existing stomata detection methods, achieving a precision of 0.91 and a recall of 0.89. Furthermore, the morphological traits (i.e. length and width) obtained at the stomata quantification step show correlations of 0.95 and 0.91, respectively, with manually computed traits, yielding an efficient and high-throughput solution for stomata phenotyping.
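The metrics reported in the abstract can be reproduced from standard definitions: precision and recall from true/false positive counts at the detection step, and the Pearson correlation between automatically and manually measured traits at the quantification step. The sketch below is purely illustrative; the counts and trait values are hypothetical and are not the paper's data.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical detection outcome over a set of annotated stomata:
p, r = precision_recall(tp=89, fp=9, fn=11)

# Hypothetical automatic vs. manual length measurements (micrometres):
corr = pearson([21.0, 24.5, 19.8, 26.1], [20.6, 24.9, 19.5, 26.4])
```

A perfect agreement between automatic and manual measurements would give a correlation of 1.0; values such as the 0.95 and 0.91 reported above indicate strong but imperfect agreement.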

Keywords

High-throughput phenotyping · Deep convolutional neural networks · Stomata counting · Stomata quantification

Acknowledgments

This work is supported by National Agricultural Science Fund (NASF) under Indian Council of Agricultural Research (ICAR), Delhi, India [Phenomics of moisture deficit stress tolerance and nitrogen use efficiency in rice and wheat- Phase II]. The authors are thankful to the Department of Textile Technology, Indian Institute of Technology Delhi (IIT Delhi) for the SEM facility.

Supplementary material

Supplementary material 1: 478828_1_En_31_MOESM1_ESM.pdf (PDF, 1,731 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Indian Institute of Technology Delhi, New Delhi, India
  2. Indian Agricultural Research Institute, New Delhi, India
