Image-based recognition framework for robotic weed control systems


Abstract

In this paper, we introduce a novel and efficient image-based weed recognition system for the weed control problem of Broad-leaved Dock (Rumex obtusifolius L.). Our proposed weed recognition system is developed within a framework that allows the examination of the effects of various image resolutions on detection and recognition accuracy. Moreover, it includes state-of-the-art object/image categorization processes such as feature detection and extraction, codebook learning, feature encoding, image representation, and classification. The efficiency of these processes has been improved and optimized by introducing methodologies, techniques, and system parameters specially tailored to the goal of weed recognition. Through an exhaustive optimization process, presented as our experimental evaluation, we arrive at a weed recognition system that uses an image input resolution of 200×150 pixels, SURF features computed by dense feature extraction, and an optimized Gaussian Mixture Model based codebook combined with Fisher encoding over a two-level image representation. The resulting image representation vectors are classified using a linear classifier. This system is experimentally shown to yield state-of-the-art recognition accuracy of 89.09% on the examined dataset. It is also shown to comply with the specifications of the examined application, since it produces a low false-positive rate of 4.38%. As a result, the proposed framework can be efficiently used in weed control robots for precision farming applications.
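
To make the described pipeline concrete, the sketch below walks through its stages in Python: dense SURF extraction at the 200×150 input resolution, a diagonal-covariance Gaussian Mixture Model codebook, improved Fisher vector encoding, and a linear classifier. This is a minimal illustration under stated assumptions, not the authors' implementation: the library choices (opencv-contrib-python for SURF, scikit-learn for the GMM and SVM), the grid step, and the codebook size K are placeholders, and the two-level image representation (concatenating the Fisher vectors of the full image and its spatial sub-regions) is omitted for brevity.

```python
# Minimal sketch of the pipeline described in the abstract:
# dense SURF -> GMM codebook -> improved Fisher vector -> linear classifier.
# SURF lives in opencv-contrib-python (patented, absent from some builds);
# the grid step and K = 64 below are assumed values, not the paper's.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def dense_surf(gray, step=8):
    """SURF descriptors computed on a regular grid (dense extraction)."""
    gray = cv2.resize(gray, (200, 150))            # input resolution from the paper
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(step // 2, gray.shape[0], step)
           for x in range(step // 2, gray.shape[1], step)]
    surf = cv2.xfeatures2d.SURF_create()
    _, desc = surf.compute(gray, kps)
    return desc                                     # (N, 64) descriptor array

def fisher_vector(desc, gmm):
    """Improved Fisher vector (power- and L2-normalised) for one image."""
    X = np.atleast_2d(desc).astype(np.float64)
    n = X.shape[0]
    q = gmm.predict_proba(X)                        # (N, K) soft assignments
    mu, w = gmm.means_, gmm.weights_                # (K, D), (K,)
    sd = np.sqrt(gmm.covariances_)                  # (K, D), diagonal model
    d = (X[:, None, :] - mu[None, :, :]) / sd[None, :, :]
    # Gradients w.r.t. the GMM means and standard deviations
    g_mu = (q[:, :, None] * d).sum(0) / (n * np.sqrt(w)[:, None])
    g_sd = (q[:, :, None] * (d ** 2 - 1)).sum(0) / (n * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_sd.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))          # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)        # L2 normalisation

# Codebook learning and classification (train_imgs/labels assumed given):
# descs = np.vstack([dense_surf(img) for img in train_imgs])
# gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(descs)
# X_train = [fisher_vector(dense_surf(img), gmm) for img in train_imgs]
# clf = LinearSVC().fit(X_train, labels)
```

Each image then yields a 2·K·D-dimensional vector (8192 for K = 64 components over 64-dimensional SURF descriptors); a two-level representation would concatenate such vectors for the full image and its spatial sub-regions before training the linear classifier.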



Acknowledgements

This work has been supported by the DockWeeder project (project ID: 30079), administered through the European Union’s Seventh Framework Program for research, technological development and demonstration under grant agreement no 618123 [ICT-AGRI 2]. The project has received funding from the Ministry of Economic Affairs (The Netherlands), from the Federal Office for Agriculture (Switzerland), and from Innovation Fund Denmark, the Ministry of Science, Innovation and Higher Education (Denmark).

Author information

Corresponding author

Correspondence to Tsampikos Kounalakis.


Cite this article

Kounalakis, T., Triantafyllidis, G.A. & Nalpantidis, L. Image-based recognition framework for robotic weed control systems. Multimed Tools Appl 77, 9567–9594 (2018). https://doi.org/10.1007/s11042-017-5337-y


