
Weed Detection for Selective Spraying: a Review


Abstract

Purpose of Review

Weed detection systems are important solutions to one of agriculture's long-standing problems: unmechanized weed control. Weed detection also provides a means of reducing or eliminating herbicide use, mitigating the environmental and health impacts of agriculture, and improving sustainability.

Recent Findings

Deep learning-based techniques are replacing traditional machine learning techniques for real-time weed detection, driven by the development of new models and increasing computational power. More hybrid machine learning models are emerging, combining the benefits of different techniques. More large-scale crop and weed image datasets are now available online, providing data and opportunities for researchers and engineers to join and contribute to this field.

Summary

This article provides a mini-review of emerging and popular weed detection techniques for selective spraying and summarizes the trends in this area over the past several years.

Introduction

Weeds are common on croplands all over the world. Weeds compete with desirable plants for light, water, and nutrients; introduce diseases or viruses; and attract harmful insects and pests, thereby causing yield loss. In the USA, the cost of agricultural weeds is estimated to be over $26 billion, with direct costs for weed control exceeding $10 billion every year [1]. There are more than 200 problematic weed species in croplands, and this makes direct weed detection challenging. Many studies have shown the impact of weeds on different crops. For example, Lanini and Strange [2] studied season-long weed competition in California lettuce fields and found that lettuce yields were reduced by over 50%. Hodgson’s [3] study showed that two Canada thistle shoots per square meter reduced wheat yields by 15%. Monaco et al. [4] reported 71%, 67%, 48%, and 48% yield reductions, respectively, in direct-seeded tomatoes when jimsonweed, tall morning glory, common cocklebur, and large crabgrass were present at a density of 11 weeds/m². The presence of weeds at the harvest stage may also reduce the quality and value of some crops. Nave and Wax’s [5] study showed that harvesting soybeans before the weeds were desiccated resulted in large threshing and separating losses as speed increased from 1 to 2 and 3 mph; stubble, lodging, and stalk losses almost doubled in pigweed- and foxtail-infested plots compared with weed-free ones. Smith et al. [6] reported that Palmer amaranth at a density of 3260 weeds/ha reduced cotton lint and seed yields, and harvesting time increased 2- to 3.5-fold when weeds were present because of slower ground speeds and work stoppages.

Agricultural weed management attempts to control or eliminate weeds (micro-spray, cutting, thermal, electrocution, etc.). Weed control consists of the application of herbicides and weed removal. Currently, the most widely adopted method for weed control in agriculture is the use of herbicides, but excessive agricultural herbicide use raises many environmental, health, and economic concerns. Other typical weed control methods include mechanical cultivation, hand hoeing, pre-emergence herbicide application and/or pre-emergence tillage, and post-emergence herbicide application. With the increasing cost of labor and growing awareness of agricultural sustainability and human health, automated weeding systems have become popular in crop production. Weed management is complicated by its cost, the labor involved, growing herbicide resistance, the requirements of groundwater and environmental protection, and the changing nature of weed growth. Weed infestations are typically distributed non-uniformly in agricultural fields, and this non-uniformity has both temporal and spatial aspects that make automated weeding challenging. In the past decade, some automated weeding systems with various degrees of success have been introduced.

To successfully achieve automated weeding, the fundamental first step is weed detection. Tractor-mounted real-time weed detection and control, and the automated selective spraying of weeds in agricultural fields, have great potential for reducing economic and environmental costs while maintaining a high level of weed control. Some early studies focused on the efficacy and reliability of using different light spectra and simple image processing techniques (color thresholding, differencing, filtering, etc.) to distinguish crops from weeds [7,8,9,10,11,12]. Other work applied fuzzy logic and related techniques to convert image data into commands that actuate herbicide sprayers [13, 14]. Thanks to advances in sensors, computational power, and algorithms, many breakthroughs have been made in weed detection in the past few years. Some commercially available autonomous weeding machines couple robotics and computer vision to recognize and remove weeds [15, 16]. The key part of these systems is weed detection, which typically uses many labeled plant images (as examples) to teach a model to distinguish desirable plants from weeds (and label them), recognize patterns in weed distributions, and identify weed edges and boundaries. These technological advances have reduced the impact of weeds and the cost of weed management. Autonomous sprayers, automated thinning, hoeing, and tilling machines, and new weeding machines have all improved weed management.

The objective of this paper is to discuss weed detection technologies that directly target weed-crop discrimination and to summarize significant trends and developments in the related technologies. Some recommendations are provided for improving weed detection by combining sensor and application technologies. Finally, an example of weed detection for lettuce using deep learning is provided to demonstrate the workflow of weed detection for selective spraying.

Current Technologies for Weed Detection

In recent years, new weed detection technologies have been developed to improve the speed and accuracy of weed detection, mitigating the conflict between the goals of improving soil health and achieving sufficient weed control for profitable farming. A literature review is conducted in this section, and the emerging weed detection techniques and methods of the past 5 years are organized into two categories: those based on digital image sensors and those based on non-digital image sensors.

Digital Image Sensors

Weed detection based on digital image processing and computer vision is the most investigated and widely used technique. Spectral features, biological morphology, visual textures, spatial contexts, and patterns present in digital images can be used to discriminate weeds from crops. For example, transplanted lettuce plants are, in most cases, bigger than weeds because of their longer growing time, and this size feature can be used to detect weeds. These features are typically identified through observation and experience, and then designed into and implemented with conventional image processing techniques. Another typical approach to the weed detection problem is machine learning. This approach uses machine learning models to extract features automatically from example images, and some sophisticated models not only recognize weeds but can also localize multiple weeds in one image.
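As a minimal illustration of such a hand-crafted size feature, the Python sketch below labels large connected components of a binary vegetation mask as crop and small ones as weeds. The area threshold and function names are ours, not from any cited study, and would need tuning per crop and growth stage.

```python
import cv2
import numpy as np

def split_by_size(veg_mask: np.ndarray, min_crop_area: int = 5000):
    """Split a binary (uint8, 0/255) vegetation mask into crop and weed masks
    using connected-component area as the only feature. Illustrative sketch:
    transplanted lettuce is assumed to be larger than emerging weeds."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(veg_mask, connectivity=8)
    crop_mask = np.zeros_like(veg_mask)
    weed_mask = np.zeros_like(veg_mask)
    for i in range(1, n):  # component 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        target = crop_mask if area >= min_crop_area else weed_mask
        target[labels == i] = 255
    return crop_mask, weed_mask
```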

Image processing is one of the common tools used in weed detection; its typical procedure includes pre-processing, segmentation, feature extraction, and classification [17, 18•, 19], as shown in Fig. 1 (a minimal code sketch of this pipeline follows the figure). Bakhshipour et al. [20] examined wavelet texture features to verify their potential for weed detection in a sugar beet crop. A discrimination algorithm determined the wavelet texture features for each image sub-division to be fed to an artificial neural network. Co-occurrence texture features were determined for each multi-resolution image created by a single-level wavelet transform, and a neural network was finally used to label each sub-division as weed or crop. Results showed that the wavelet texture features could discriminate weeds from crops effectively. Lavania and Matey [21] used image segmentation with double thresholding based on 3D-Otsu’s method to carry out crop row detection. Weed and crop discrimination was then achieved by compressing the three-dimensional vectors of an image to one dimension using principal component analysis (PCA). Rumpf et al. [22] proposed a sequential support vector machine (SVM) classification for small-grain weed species based on shape parameters from image data. First, similar sub-groups such as crop plants, monocotyledonous weeds, and dicotyledonous weeds were classified; then, weed species were differentiated within each group. Specialized features for each sub-group were selected by SVM-weighting and filters.

Fig. 1

Typical image processing procedures for weed detection
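The sketch below walks through the four stages of Fig. 1 under stated assumptions: an excess-green index for segmentation, gray-level co-occurrence matrix (GLCM) statistics from scikit-image as a stand-in for the wavelet texture features of [20], and an SVM for classification. It is a schematic of the generic pipeline, not the code of any cited study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def excess_green_mask(rgb: np.ndarray, thresh: float = 20.0) -> np.ndarray:
    """Segmentation: threshold the excess-green index 2G - R - B."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return (2 * g - r - b) > thresh

def texture_features(gray_patch: np.ndarray) -> np.ndarray:
    """Feature extraction: GLCM contrast/homogeneity/energy for one
    uint8 grayscale sub-image (a stand-in for wavelet textures)."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy")])

# Classification: an SVM trained on labeled crop/weed sub-images.
# `patches` and `labels` are hypothetical pre-segmented training data:
# clf = SVC(kernel="rbf").fit(
#     np.vstack([texture_features(p) for p in patches]), labels)
```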

Another emerging method of weed detection is to use machine learning algorithms to extract plant features directly and classify weeds or crops based on the automatically extracted features [23,24,25]. dos Santos Ferreira et al. [26] used a convolutional neural network (CNN, AlexNet) to perform weed detection on soybean crop images collected by a drone, classified the weeds as grass or broadleaf, and applied the specific herbicide to the detected weeds. This work reported about 97% accuracy using the CNN to detect broadleaf and grass weeds without soil and soybean in the background. Wendel and Underwood [27] created a self-supervised method to discriminate weeds in crop fields without manual labeling. This work gathered training data to create a self-supervised classification framework that adapted to crop variation without manually generating new datasets. The experimental results showed that the self-supervised weed/crop discrimination performance approached that of a model trained with manually labeled data. Garcia-Ruiz et al. [28] proposed using local features based on affine-invariant regions and scale-invariant key points for sugar beet leaf images. An SVM classifier fusing surface color and edge shapes improved the overall classification accuracy to 99.07%. Dyrmann et al. [29] proposed a method for automating weed detection in color images in the presence of heavy leaf occlusion. The CNN was trained and validated on more than 17,000 annotations of weeds in images from winter wheat fields. The algorithm could detect 46% of the weeds when large parts of the weeds overlapped with wheat plants. When small weeds and grasses were exposed to a severe degree of overlap, the performance of the proposed algorithm declined, and for large plants, the algorithm had trouble creating bounding boxes that included entire plants.
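For readers unfamiliar with the CNN approach, the following is a deliberately tiny PyTorch sketch of patch-level classification in the spirit of [26]; it is not the AlexNet configuration used in that study, and the class set and input size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallWeedNet(nn.Module):
    """Tiny CNN for 3-class patch classification (e.g., soil / broadleaf /
    grass), a much-reduced stand-in for the AlexNet used in [26]."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Smoke test on a random batch of eight 64x64 RGB patches:
# model = SmallWeedNet(); logits = model(torch.randn(8, 3, 64, 64))
```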

As autonomous vehicle technologies (aerial and ground) have become popular and accessible tools in agricultural research and production in recent years, an increasing number of applications in crop scouting, precision agriculture, weed management, livestock monitoring, frost mitigation, and fertilizer application have been reported [30, 31, 32•, 33, 34], and a growing number of studies have addressed weed detection using digital images from robotic platforms. Peña et al.’s [35] study showed that the ability to discriminate weeds in aerial images was significantly affected by camera type, flight altitude, and temporal resolution. Barrero et al. [36] used a fixed-wing plane to take digital images (16.1 megapixels) at 50 m above rice fields. An orthomosaic map of the field was created; a gray-level co-occurrence matrix (GLCM) with Haralick’s descriptors was used for texture classification, and a normalized difference index (NDI) was used for color (a sketch of the NDI computation follows this paragraph). The experiments showed 99% precision for weed detection on the test data, and image resolution limited the accuracy when weed and rice plants were similar in size. López-Granados et al. [37] used an unmanned aerial vehicle (UAV) equipped with RGB and multi-spectral cameras flying at 30- and 60-m altitudes to capture images of a sunflower field. The overlapping images were then orthomosaicked to generate a high spatial resolution image. An object-based image analysis was developed to detect and map soil, crop rows, and weeds, and herbicide treatment maps were created accordingly.
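The NDI mentioned above can be computed from an ordinary RGB image; a minimal sketch follows, assuming the common (G - R)/(G + R) definition rather than the exact formulation of [36].

```python
import numpy as np

def ndi(rgb: np.ndarray) -> np.ndarray:
    """Normalized difference index (G - R) / (G + R) per pixel; higher
    values indicate greener, i.e., more likely vegetated, pixels."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return (g - r) / (g + r + 1e-9)  # epsilon guards against division by zero
```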

Non-digital Image Sensors

In addition to weed detection using shapes and morphological properties of weeds in digital images, information from reflectance spectra can be used to differentiate characteristic patterns of weeds from crops [38,39,40]. Reflectance spectra collect information from across the electromagnetic spectrum and can be used to identify materials, detect processes, or find objects. Karimi et al. [41] used SVMs and neural networks for weed detection in corn with hyperspectral reflectance data as model input; the study detected and classified four weed management practices and three nitrogen rates, with classification accuracies of 69.2% for the SVMs and 58.3% for the NNs. Pantazi et al. [42] proposed a method that discriminates between crop and weed species based on their spectral reflectance differences, using one-class neural network classifiers. The recognition performance for different weed species varied between 31% and 98% for the self-organizing map and between 53% and 94% for a mixture of Gaussians.
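A hedged sketch of this kind of spectral classification is given below: per-pixel reflectance spectra are classified with an RBF SVM, in the spirit of [41] but not reproducing its setup. The file names, array shapes, and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: each row is one reflectance spectrum (n_bands values);
# labels distinguish crop from weed classes. File names are placeholders.
spectra = np.load("spectra.npy")   # shape (n_samples, n_bands)
labels = np.load("labels.npy")     # shape (n_samples,)

X_tr, X_te, y_tr, y_te = train_test_split(
    spectra, labels, test_size=0.2, random_state=0, stratify=labels)

# Standardize each band, then fit an RBF SVM (C is an illustrative choice).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```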

Ultraviolet (UV) induced fluorescence of plants appears to be a promising technique for real-time weed detection, and several studies have been done in this area. Longchamps et al. [43] used UV-induced fluorescence for weed-crop discrimination in corn production. A linear discriminant analysis was applied to the scores of a multi-variate analysis of 1440 fluorescence spectral signatures from three plant groups (corn, monocotyledonous weeds, and dicotyledonous weeds), and a classification success rate of 91.8% was achieved under laboratory conditions; application to field conditions still requires further research. Wang et al. [44] tested a chlorophyll fluorescence imaging sensor (WeedPAM) to demonstrate its capability to detect herbicide-resistant Alopecurus myosuroides populations shortly after treatment. The study showed that chlorophyll fluorescence imaging could be used to detect herbicide stress in weeds, and 95% of the WeedPAM classifications made 5 days after treatment were correct. Panneton et al. [45] proposed a partial least squares (PLS) model for weed classification with two bands chosen in the blue-green fluorescence zone (400–425 nm and 425–490 nm); the study showed that a linear discriminant analysis using the signal from these two bands provided a solid differentiation between monocotyledonous and dicotyledonous plants. Suzuki et al. [46] used the normalized difference vegetation index (NDVI) to differentiate between plants and background soil, and created a filter using 15 relevant wavebands as explanatory variables to discriminate soybean crop plants from weed plants. The study showed that the most relevant wavelengths were located around the green peak and in the near-infrared (NIR) range.
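As a small companion to [46], the sketch below segments plants from soil with NDVI; the 0.3 threshold is illustrative only, and the cited study selected wavebands rather than applying a fixed cutoff.

```python
import numpy as np

def ndvi_mask(nir: np.ndarray, red: np.ndarray, thresh: float = 0.3) -> np.ndarray:
    """Plant/soil segmentation via NDVI = (NIR - R) / (NIR + R), given
    co-registered near-infrared and red bands. Threshold is illustrative."""
    nir = nir.astype(float)
    red = red.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi > thresh  # True where vegetation, False where soil
```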

Example—Weed Detection for Romaine Lettuce

Lettuce is a multi-billion-dollar industry in the USA [47], and weeding has a major impact on its profitability. This section demonstrates the complete workflow of weed detection for romaine lettuce. In this example, we present an approach to automatic weed detection for romaine lettuce using deep learning based on the YOLO-v2 (You Only Look Once, version 2) framework. YOLO-v2 [48••] is an object detection framework that divides an input image into a 13 × 13 grid. In each grid cell, the YOLO-v2 network predicts 5 bounding boxes with different aspect ratios. For each bounding box, a CNN model predicts its center location within the cell, the width and height of the box, and a confidence score that the box contains an object, along with the probabilities of the object belonging to each class. Next, the network discards bounding boxes whose confidence score falls below a threshold of 0.245. Finally, among the remaining boxes that claim to contain an object, it removes duplicate detections of the same object using non-max suppression and intersection over union.
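To make this post-processing concrete, here is a self-contained sketch of the confidence filtering and greedy non-max suppression just described. The 0.245 confidence threshold comes from the text above, while the 0.5 IoU threshold is a common illustrative choice, not a value from the original setup.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, conf_thresh=0.245, iou_thresh=0.5):
    """Confidence filtering followed by greedy non-max suppression:
    keep the highest-scoring box, drop any later box overlapping it."""
    order = [i for i in sorted(range(len(scores)), key=lambda i: -scores[i])
             if scores[i] >= conf_thresh]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep  # indices of the surviving detections
```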

To simplify the weed detection process, we first use image processing techniques to find vegetation areas in an image; then a YOLO-v2 model is used to detect lettuce plants; finally, the lettuce plants are excluded from the vegetation areas, leaving only the weed areas. The schematic diagram of the entire system is shown in Fig. 2.

Fig. 2

Schematic diagram of the lettuce weeding system

Data Collection and Image Pre-processing

A collection of 3000 digital RGB images of romaine lettuce (at different growing stages, 7 to 45 days) was collected at the Cal Poly State University Organic Farm (GPS location: 35.304779, − 120.672694). All the images are scaled down to 640 × 480 pixels for easier training and faster implementation. An Otsu-based [49] color thresholding method is used to extract the vegetation areas in the L*a*b* color space. The segmented image is shown in Fig. 3, where the purple areas are the extracted vegetation (a code sketch of this step follows the figure).

Fig. 3

Color thresholding for vegetation
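A minimal OpenCV version of this thresholding step might look as follows. It assumes BGR input and thresholds the a* channel, where green vegetation sits below the Otsu cutoff; this is a sketch of the idea, not the authors' MATLAB implementation.

```python
import cv2
import numpy as np

def vegetation_mask_lab(bgr: np.ndarray) -> np.ndarray:
    """Otsu thresholding on the a* channel of L*a*b*: vegetation is strongly
    green (low a* in OpenCV's 0-255 encoding), so the inverted binary
    threshold keeps low-a* pixels as the vegetation mask."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    a_channel = lab[..., 1]
    _, mask = cv2.threshold(a_channel, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask  # uint8 mask, 255 where vegetation
```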

Image Annotation

The next step is to differentiate lettuce plants from other vegetation. All the lettuce plants in the images are semi-automatically labeled (ground truth) using MATLAB™. First, a small set of 500 images is manually labeled, and a YOLO-v2 model with a ResNet-50 convolutional neural network as the feature extractor is trained on these images. Second, the trained YOLO-v2 model is used to automatically label the rest of the images in the dataset. Finally, all the automatically labeled images are manually inspected and adjusted.
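The semi-automatic labeling loop can be sketched as below. The `detector.predict` interface, the JSON sidecar format, and the confidence cutoff are all assumptions standing in for the MATLAB tooling used in the original workflow.

```python
import json
from pathlib import Path

def save_label(img_path: str, boxes: list) -> None:
    """Write boxes (plain [x1, y1, x2, y2] lists) as a JSON sidecar file
    next to the image. The sidecar format is illustrative only."""
    Path(img_path).with_suffix(".json").write_text(json.dumps({"boxes": boxes}))

def pseudo_label(detector, images, conf_keep=0.8):
    """Semi-automatic labeling: keep confident detections from the model
    trained on the 500 hand-labeled images, and flag images containing
    low-confidence detections for manual review and adjustment."""
    review = []
    for path, img in images:
        boxes, scores = detector.predict(img)  # assumed detector interface
        confident = [b for b, s in zip(boxes, scores) if s >= conf_keep]
        save_label(path, confident)
        if len(confident) < len(boxes):        # some detections were uncertain
            review.append(path)
    return review  # image paths needing human inspection
```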

Training of the Lettuce Detection Models

Once the lettuce labeling is done, the labeled images are ready to be used as a training dataset for romaine lettuce detection. In this example, we test several common CNN structures (ResNet-50 and ResNet-101 [50], MobileNet [51], Inception-ResNet V2 [52], SqueezeNet [53], VGG16 and VGG19 [54]) as the feature extraction layers of the YOLO-v2 model to identify lettuce. The labeled lettuce image set is used as input for training these models. The models are trained on the Amazon™ Elastic Compute Cloud (EC2) with Tesla V100-SXM2-16GB GPUs, and different hyperparameters are systematically adjusted to find the best configuration for each model. The trained models are tested on 20% of the images from the image set (reserved as test images that the models have never seen during training), and the mean average precision (mAP) of the YOLO-v2 model with the different feature extraction CNNs is shown in Table 1.

Table 1 Mean average precision (mAP) of romaine lettuce detection using different models

The results show that VGG16 gives the highest mAP when used as the feature extraction layers for YOLO-v2, making it the optimal choice. The proposed model can detect and localize multiple lettuce plants at near real-time speed. An example of romaine lettuce detection in an image is shown in Fig. 4, with the detected lettuce plants boxed and labeled with their confidence levels.

Fig. 4

Romaine lettuce detection

Weed Detection

After the YOLO-v2 model is trained, it is ready to detect romaine lettuce plants in input images; these lettuce areas are then excluded from the vegetation areas, leaving only the weed areas in the converted binary image. Some basic image processing techniques are further applied to remove small noise areas. Finally, the centers of all the weeds are calculated and marked for later weeding applications (a sketch of this exclusion step follows Fig. 5). Note that any weeds inside lettuce bounding boxes (when weeds are very close to the lettuce plants) are treated as lettuce; this is one of the limitations of using YOLO-v2. The marked weeds (red dots) in an example image are shown in Fig. 5. After the weeds are detected, the proposed model is ready to be deployed for selective spraying.

Fig. 5

Detected weeds in a test image
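Under stated assumptions (a uint8 vegetation mask and pixel-coordinate lettuce boxes from the detector), the exclusion-and-centroid step could be implemented as follows; the minimum-area value for noise removal is illustrative.

```python
import cv2
import numpy as np

def weed_centers(veg_mask: np.ndarray, lettuce_boxes, min_area: int = 50):
    """Remove detected lettuce boxes from the binary vegetation mask, drop
    small noise blobs, and return centroids of the remaining weed areas.
    `lettuce_boxes` are (x1, y1, x2, y2) pixel boxes from the YOLO-v2 model;
    note that weeds inside a box are lost, as discussed in the text."""
    mask = veg_mask.copy()
    for x1, y1, x2, y2 in lettuce_boxes:
        mask[y1:y2, x1:x2] = 0  # everything inside a lettuce box is removed
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        mask, connectivity=8)
    return [tuple(centroids[i]) for i in range(1, n)       # skip background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]     # drop noise blobs
```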

Deployment

To deploy and test the proposed weed detection model, a Microsoft™ LifeCam HD-3000 webcam takes images, and the images are sent to an NVIDIA™ Jetson TX2 computer (256-core NVIDIA Pascal GPU architecture with 256 NVIDIA CUDA cores) for weed detection. Once the weeds are detected, their locations are calculated. Spraying control signals are sent via the USB port to an ATmega328 microcontroller, which controls a set of four sprayers that spray as they pass the targeted weeds on a sliding experimental apparatus. The hardware setup is shown in Fig. 6.

Fig. 6

Imaging and data processing unit
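A minimal sketch of the computer side of this serial link is shown below using pySerial. The one-byte bitmask protocol, device path, and baud rate are assumptions for illustration, not the authors' firmware protocol.

```python
import serial  # pySerial

# Open the USB serial link to the ATmega328 (device path and baud rate
# are assumed values; adjust for the actual hardware).
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def spray(nozzles_on):
    """Send one command byte in which bit i switches sprayer i (0-3) on;
    the receiving firmware is assumed to decode the same bitmask."""
    payload = 0
    for n in nozzles_on:
        payload |= 1 << n
    port.write(bytes([payload]))

spray([1, 2])  # example: fire the two middle sprayers
```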

The developed system can identify romaine lettuce at different growth stages with a 92.8% mAP using the VGG16 model with YOLO-v2. The system can process an image in about 0.03 s on the Jetson TX2 computer, which makes real-time spraying feasible. Weeds within the crop bounding boxes could not be detected because of the limitations of the YOLO algorithm; this limitation can be mitigated with further image processing or other feature extraction models such as semantic segmentation. Weeds with colors other than green could not be detected because of the simple color thresholding method chosen; this problem can be addressed with a multi-spectral camera and NDVI to differentiate vegetation from the background.

Conclusions

Accurate weed detection in croplands is a prerequisite for weed management such as site-specific selective spraying and mechanical, electrical, or thermal weeding. Two categories of weed detection technologies were discussed in the preceding sections. The review has shown that deep learning-based techniques are replacing traditional machine learning techniques: deep CNN architectures used as feature extractors deliver better performance and enable quicker application development. More hybrid models combining deep learning and conventional image processing are expected in the future.

In most weed detection cases, the leaves of weeds and crops overlap at late growth stages if weed management is not done early and properly, and this makes discrimination hard for current weed detection techniques. In addition, many other factors, such as variable environmental lighting conditions, micro-climate, occluded or damaged plant leaves, solar angles, and the changing morphological or spectral properties of plant leaves at different growth stages, all contribute to the challenge of weed detection. Nevertheless, with the development of new deep learning models such as recurrent neural networks (RNN), regional convolutional neural networks (RCNN), and other hybrid deep learning models, real-time weed/crop detection in the field has reached promising performance with some room left for improvement, and more and more commercial weeding machinery is emerging in the market.

Another trend is that more and more large-scale crop and weed image datasets have been made available online in recent years, allowing more researchers and engineers to join and contribute to this field. Propelled by deep learning models and the growth of computational power, we expect major breakthroughs in weed detection performance for weeding and other new applications in the coming years.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance •• Of major importance

1. Noxious Weeds Management. In: Article 1.7, Food and Agricultural Code. California Legislature. 2018. https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=FAC&division=4.&title=&part=4.&chapter=1.&article=1.7. Accessed 2 Nov 2019.

2. Lanini W, Strange M. Low-input management of weeds in vegetable fields. Calif Agric. 1991;45(1):11–3.

3. Hodgson JM. The nature, ecology, and control of Canada thistle. Vol. 1386. Agricultural Research Service, US Department of Agriculture; 1968.

4. Monaco T, Grayson A, Sanders D. Influence of four weed species on the growth, yield, and quality of direct-seeded tomatoes (Lycopersicon esculentum). Weed Sci. 1981;29(4):394–7.

5. Nave W, Wax L. Effect of weeds on soybean yield and harvesting efficiency. Weed Sci. 1971;19(5):533–5.

6. Smith DT, Baker RV, Steele GL. Palmer amaranth (Amaranthus palmeri) impacts on yield, harvesting, and ginning in dryland cotton (Gossypium hirsutum). Weed Technol. 2000;14(1):122–6.

7. Weis M, Gerhards R. Detection of weeds using image processing and clustering. Bornimer Agrartechnische Berichte. 2008;69(138):e144.

8. Desai R, Desai K, Desai S, Solanki Z, Patel D, Patel V. Removal of weeds using image processing: a technical review. Int J Adv Comput Technol. 2015;4:27–31.

9. Weis M. An image analysis and classification system for automatic weed species identification in different crops for precision weed management. 2010.

10. Choudhary J, Nayak S. A survey on weed detection using image processing in agriculture. Int J Comput Sci Eng. 2016;4(6).

11. Mustafa MM, Hussain A, Ghazali KH, Riyadi S. Implementation of image processing technique in real time vision system for automatic weeding strategy. In: 2007 IEEE International Symposium on Signal Processing and Information Technology. IEEE; 2007.

12. Robovator. VisionWeeding. http://www.visionweeding.com/robovator/. Accessed 2 Nov 2019.

13. Herrera P, Dorado J, Ribeiro Á. A novel approach for weed type classification based on shape descriptors and a fuzzy decision-making method. Sensors. 2014;14(8):15304–24.

14. Aravind R, Daman M, Kariyappa B. Design and development of automatic weed detection and smart herbicide sprayer robot. In: 2015 IEEE Recent Advances in Intelligent Computational Systems (RAICS). IEEE; 2015.

15. FarmBot. Genesis Weeder. https://genesis.farm.bot/v1.1/docs/weeder. Accessed 1 Nov 2019.

16. VisionWeeding. Robovator. 2019. http://www.visionweeding.com/robovator/. Accessed 2 Nov 2019.

17. Weis M, Sökefeld M. Detection and identification of weeds. In: Precision crop protection-the challenge and use of heterogeneity. Springer; 2010. p. 119–34.

18. • Sa I, Chen Z, Popović M, Khanna R, Liebisch F, Nieto J, et al. weedNet: dense semantic weed classification using multispectral images and MAV for smart farming. IEEE Robot Autom Lett. 2017;3(1):588–95. The results of this study show that NDVI as a distinguishable feature can be used for automatic ground truth generation, and semantic weed classification provides a different deep learning approach to detect weeds at the pixel level.

19. Michaels A, Haug S, Albert A. Vision-based high-speed manipulation for robotic ultra-precise weed control. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2015.

20. Bakhshipour A, Jafari A, Nassiri SM, Zare D. Weed segmentation using texture features extracted from wavelet sub-images. Biosyst Eng. 2017;157:1–12.

21. Lavania S, Matey PS. Novel method for weed classification in maize field using Otsu and PCA implementation. In: 2015 IEEE International Conference on Computational Intelligence & Communication Technology. IEEE; 2015.

22. Rumpf T, Römer C, Weis M, Sökefeld M, Gerhards R, Plümer L. Sequential support vector machine classification for small-grain weed species discrimination with special regard to Cirsium arvense and Galium aparine. Comput Electron Agric. 2012;80:89–96.

23. Dyrmann M, Skovsen S, Laursen MS, Jørgensen RN. Using a fully convolutional neural network for detecting locations of weeds in images from cereal fields. In: International Conference on Precision Agriculture. International Society of Precision Agriculture; 2018.

24. Sarker MI, Kim H. Farm land weed detection with region-based deep convolutional neural networks. arXiv preprint arXiv:1906.01885. 2019.

25. Yu J, Sharpe SM, Schumann AW, Boyd NS. Detection of broadleaf weeds growing in turfgrass with convolutional neural networks. Pest Manag Sci. 2019.

26. dos Santos Ferreira A, Freitas DM, da Silva GG, Pistori H, Folhes MT. Weed detection in soybean crops using ConvNets. Comput Electron Agric. 2017;143:314–24.

27. Wendel A, Underwood J. Self-supervised weed detection in vegetable crops using ground based hyperspectral imaging. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2016.

28. Garcia-Ruiz FJ, Wulfsohn D, Rasmussen J. Sugar beet (Beta vulgaris L.) and thistle (Cirsium arvensis L.) discrimination based on field spectral data. Biosyst Eng. 2015;139:1–15.

29. Dyrmann M, Jørgensen RN, Midtiby HS. RoboWeedSupport-detection of weed locations in leaf occluded cereal crops using a fully convolutional neural network. Adv Anim Biosci. 2017;8(2):842–7.

30. Myers D, Ross CM, Liu B. A review of unmanned aircraft system (UAS) applications for agriculture. In: 2015 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers; 2015.

31. Liu B. Wireless sensor network applications in precision agriculture. J Agric Syst Technol Manag. 2018;29:25–37.

32. • Lottes P, Khanna R, Pfeifer J, Siegwart R, Stachniss C. UAV-based crop and weed classification for smart farming. In: 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2017. The findings from this paper show that UAV-based images can be successfully used to map and identify weeds.

33. Torres-Sánchez J, López-Granados F, Peña JM. An automatic object-based method for optimal thresholding in UAV images: application for vegetation detection in herbaceous crops. Comput Electron Agric. 2015;114:43–52.

34. David LCG, Ballado AH. Vegetation indices and textures in object-based weed detection from UAV imagery. In: 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE). IEEE; 2016.

35. Peña J, Torres-Sánchez J, Serrano-Pérez A, de Castro A, López-Granados F. Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution. Sensors. 2015;15(3):5609–26.

36. Barrero O, Rojas D, Gonzalez C, Perdomo S. Weed detection in rice fields using aerial images and neural networks. In: 2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA). IEEE; 2016.

37. López-Granados F, Torres-Sánchez J, Serrano-Pérez A, de Castro AI, Mesas-Carrascosa F-J, Peña J-M. Early season weed mapping in sunflower using UAV technology: variability of herbicide treatment maps against weed thresholds. Precis Agric. 2016;17(2):183–99.

38. Thenkabail PS, Lyon JG. Hyperspectral remote sensing of vegetation. CRC Press; 2016.

39. Peerbhay KY, Mutanga O, Ismail R. Random forests unsupervised classification: the detection and mapping of Solanum mauritianum infestations in plantation forestry using hyperspectral data. IEEE J Sel Top Appl Earth Observ Remote Sens. 2015;8(6):3107–22.

40. Gao J, Nuyttens D, Lootens P, He Y, Pieters JG. Recognising weeds in a maize crop using a random forest machine-learning algorithm and near-infrared snapshot mosaic hyperspectral imagery. Biosyst Eng. 2018;170:39–50.

41. Karimi Y, Prasher S, Patel R, Kim S. Application of support vector machine technology for weed and nitrogen stress detection in corn. Comput Electron Agric. 2006;51(1–2):99–109.

42. Pantazi X-E, Moshou D, Bravo C. Active learning system for weed species recognition based on hyperspectral sensing. Biosyst Eng. 2016;146:193–202.

43. Longchamps L, Panneton B, Samson G, Leroux GD, Thériault R. Discrimination of corn, grasses and dicot weeds by their UV-induced fluorescence spectral signature. Precis Agric. 2010;11(2):181–97.

44. Wang P, Peteinatos G, Li H, Gerhards R. Rapid in-season detection of herbicide resistant Alopecurus myosuroides using a mobile fluorescence imaging sensor. Crop Prot. 2016;89:170–7.

45. Panneton B, Guillaume S, Roger J-M, Samson G. Improved discrimination between monocotyledonous and dicotyledonous plants for weed control based on the blue-green region of ultraviolet-induced fluorescence spectra. Appl Spectrosc. 2010;64(1):30–6.

46. Suzuki Y, Okamoto H, Kataoka T. Image segmentation between crop and weed using hyperspectral imaging for weed detection in soybean field. Environ Control Biol. 2008;46(3):163–73.

47. USDA. National statistics for lettuce. 2018. https://www.nass.usda.gov/Statistics_by_Subject/result.php?CA67122E-5AF3-3058-B89C-6D375960D1F8&sector=CROPS&group=VEGETABLES&comm=LETTUCE. Accessed 2 Nov 2019.

48. •• Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. The proposed model enables real-time, high-accuracy, multiple-object localization in many applications.

49. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–6.

50. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016.

51. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018.

52. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence; 2017.

53. Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360. 2016.

54. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.


Author information

Correspondence to Bo Liu.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Agriculture Robotics


About this article


Cite this article

Liu, B., Bruch, R. Weed Detection for Selective Spraying: a Review. Curr Robot Rep 1, 19–26 (2020). https://doi.org/10.1007/s43154-020-00001-w


Keywords

  • Weed detection
  • Deep learning
  • Image processing
  • Sensors