Precision Agriculture

Volume 18, Issue 2, pp 224–244

Machine vision for counting fruit on mango tree canopies

  • W. S. Qureshi
  • A. Payne
  • K. B. Walsh
  • R. Linker
  • O. Cohen
  • M. N. Dailey


Machine vision technologies hold the promise of enabling rapid and accurate fruit crop yield prediction in the field. The key to fulfilling this promise is accurate segmentation and detection of fruit in images of tree canopies. This paper proposes two new methods for automated counting of fruit in images of mango tree canopies, one using texture-based dense segmentation and one using shape-based fruit detection, and compares them with two existing techniques: (i) a method based on K-nearest neighbour pixel classification and contour segmentation, and (ii) a method based on super-pixel over-segmentation and classification using support vector machines. The robustness of each algorithm was tested on multiple sets of images of mango trees acquired over a period of three years. These image sets were acquired under varying conditions of light and exposure, distance to the tree, average number of fruit on the tree, orchard and season. For images collected under the same conditions as the calibration images, estimated fruit numbers were within 16% of actual fruit numbers, and the F1 measure of detection performance was above 0.68 for these methods. Results were poorer when the models were used to estimate fruit numbers on trees of different canopy shape or under different imaging conditions. For fruit-background segmentation, two approaches outperformed simple contrast- and colour-based segmentation: K-nearest neighbour pixel classification based on colour and smoothness, and super-pixel classification in which dense scale-invariant feature transform features are clustered into visual words and each super-pixel's bag of visual words is classified with a support vector machine. Pixel classification was best followed by fruit detection using an elliptical shape model, or by blob detection using colour filtering and morphological image processing techniques. Method results were also compared using precision–recall plots.
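The F1 measure used above to compare detection performance combines precision and recall into a single score. A minimal sketch of its computation; the TP/FP/FN counts in the example are illustrative, not data from the paper:

```python
# Illustrative sketch of the F1 detection measure. The counts of true
# positives (TP), false positives (FP) and false negatives (FN) are
# assumed inputs from matching detections against ground-truth fruit.

def detection_metrics(tp, fp, fn):
    """Return (precision, recall, F1) for a set of detections."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical example: 68 correctly detected fruit, 12 spurious
# detections, 20 fruit missed by the detector.
p, r, f1 = detection_metrics(68, 12, 20)
print(round(p, 3), round(r, 3), round(f1, 3))
```

Sweeping a detector's confidence threshold and recomputing precision and recall at each point yields the precision–recall plots mentioned above.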
Imaging at night under artificial illumination with careful attention to maintaining constant illumination conditions is highly recommended.
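The blob-detection route described above (colour filtering followed by morphological processing, then counting connected components) can be sketched as follows. The synthetic image, the threshold value and the 3×3 structuring element are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def binary_erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def binary_dilate(mask):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def count_blobs(mask):
    """Count 4-connected components with a simple flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                count += 1
                stack = [(y, x)]
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Synthetic "canopy" channel: two bright fruit-like discs plus a
# single-pixel speckle that colour thresholding alone cannot reject.
img = np.zeros((40, 40))
yy, xx = np.mgrid[:40, :40]
img[(yy - 10) ** 2 + (xx - 10) ** 2 < 16] = 1.0   # fruit 1
img[(yy - 28) ** 2 + (xx - 30) ** 2 < 16] = 1.0   # fruit 2
img[5, 35] = 1.0                                   # speckle noise

mask = img > 0.5                              # colour/intensity filtering
opened = binary_dilate(binary_erode(mask))    # morphological opening
print(count_blobs(mask), count_blobs(opened))
```

The morphological opening removes the speckle while leaving the two fruit-sized blobs countable, which is the role such processing plays in the blob-detection pipeline.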


Keywords: Fruit detection · Heuristic shape analysis · K-nearest neighbour · Night-time imaging · Segmentation · Super-pixels · Yield prediction



KW acknowledges support from Horticulture Innovation Australia (MT14048/ST15005).



Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • W. S. Qureshi (1, 2)
  • A. Payne (3)
  • K. B. Walsh (3)
  • R. Linker (4)
  • O. Cohen (4)
  • M. N. Dailey (2)

  1. Department of Mechatronics Engineering, National University of Science and Technology, Islamabad, Pakistan
  2. Asian Institute of Technology, Pathumthani, Thailand
  3. Central Queensland University, Rockhampton, Australia
  4. Technion—Israel Institute of Technology, Technion City, Israel
