Selective Harvesting Robotics: Current Research, Trends, and Future Directions

Abstract

Purpose of Review

The worldwide demand for agricultural products is rapidly growing. However, despite the growing population, labor shortages are becoming a limiting factor for agricultural production. Further automation of agriculture is an important solution to tackle these challenges.

Recent Findings

Selective harvesting of high-value crops, such as apples, tomatoes, and broccoli, is currently mainly performed by humans, rendering it one of the most labor-intensive and expensive agricultural tasks. This explains the large interest in the development of selective harvesting robots. Selective harvesting, however, is a challenging task for a robot, due to the high levels of variation and incomplete information it must deal with, as well as the safety requirements it must meet.

Summary

This review paper provides an overview of the state of the art in selective harvesting robotics in three different production systems: greenhouse, orchard, and open field. The limitations of current systems are discussed, and future research directions are proposed.

Introduction

The world’s demand for agricultural products is growing at an unprecedented scale. An estimated 50% increase in agricultural productivity is needed in the next 30 years to provide the world population with sufficient food, feed, fuel, and fibers [1]. Despite the growing population, expected to reach almost ten billion people by 2050, there is a growing labor shortage in agriculture, due to an aging farmer population and urbanization. Moreover, agricultural tasks are often physically demanding, highly repetitive, and dull. To meet the growing demand and to compensate for the labor shortage, there is a strong need in the agricultural industry for increased automation and robotization.

Crops like wheat, corn, and potato ripen uniformly on the field, which allows efficient mass harvesting of the crop at a single moment in time by big machines. In contrast, high-value crops like apples, tomatoes, and broccoli ripen heterogeneously and require selective harvesting of only the ripe fruits. Multi-annual crops, such as apples and grapes, furthermore require that the plant is not damaged during the harvesting process. Selective harvesting has turned out to be difficult to automate and is therefore currently mainly performed by human labor. This makes selective harvesting one of the most labor-intensive and expensive tasks on the farm, which stimulates the development of robotic systems for selective harvesting.

Apart from the labor and cost aspects, there are more advantages of robotic harvesting. Whereas humans vary in their quality of operation, robots operate very consistently, without individual or temporal variations. Furthermore, parallel to the harvesting task, robots can inspect the crop to detect diseases and monitor crop development, which allows improved farm management and can optimize the food-production chain.

The task of selective harvesting, however, is not an easy one for robots, which is illustrated by the fact that there are hardly any selective-harvesting robots on the market. We identify three main challenges in developing a selective-harvesting robot: variation, incomplete information, and safety [2]:

  • Variation. Different from robots in the manufacturing industry, which work in highly controlled environments with known artificial objects, agricultural robots need to operate in uncontrolled environments with natural objects. These environments give rise to different types of variation. Firstly, there is object variation: every plant and fruit is unique, differing from other instances in appearance, geometry, and mechanical properties. In addition, the appearance of the crop may change over time during the growing season. Secondly, there is environmental variation caused by the weather or indoor climate control, causing variation in, for instance, illumination, humidity, and temperature. Thirdly, there is variation in the cultivation system. Farmers have individual preferences in how they cultivate their crops, with differences in, for instance, infrastructure, irrigation, soil type, and pruning methods, resulting in different growth patterns. Finally, there is task variation. A robot solely designed for harvesting has limited value, which can be greatly improved if it can also perform other plant-management tasks, such as pruning, thinning, pest control, monitoring, and providing nutrients.

  • Incomplete information. The environment of a selective harvesting robot is often highly complex and cluttered, giving rise to many occlusions. This partial observability means that the robot has to operate with incomplete and uncertain information. The objects of interest for the robot’s task might be partially or completely occluded; the to-be-harvested produce will often be covered by other elements of the plant or tree, such as leaves. Moreover, sensor data is often noisy, and information about the weather, crop development, and the presence of pests and diseases is incomplete and uncertain.

  • Safety. Harvesting robots need to be inherently safe for their environment. Most fruits and vegetables are delicate, and the plants on which they grow are fragile. Damage to the fruits devalues the produce; at worst, damage to the plant could mean the end of that plant’s production. Selective harvesting robots, therefore, need to have a soft touch, being able to grasp and manipulate the objects with care. Moreover, there will also be humans in the production environment with whom the robots should be able to collaborate in a safe manner.

In this review, in the “State of the art in Selective Harvesting” section, we provide an overview of how current selective harvesting robots deal with these challenges. Limitations of these systems, trends, and future research directions are discussed in the “Limitations, Trends, and Future Research” section.

State of the art in Selective Harvesting

Applications of selective harvesting can be divided into three major application areas: greenhouse (protected cultivation), orchard, and open field. In this section, the state of the art in academia and industry in these application domains will be described.

Greenhouse

A greenhouse provides a protected and controlled environment for optimal crop production. The enclosed structure allows control of environmental factors, such as temperature, humidity, carbon dioxide, and, to a certain extent, light level, to set optimal conditions for year-round production. In addition, plants generally do not root in soil, but in an artificial growing medium offering the plant optimal concentrations of water and nutrients. As illustrated in Fig. 1, there are different cultivation systems to support and guide plant growth, which are optimized for light interception, space, and in some cases automation. van Henten [3] and van Henten et al. [4] describe different activities during the greenhouse production cycle, including greenhouse preparation, planting, crop maintenance, harvesting, grading, and packing, which can potentially be robotized.

Fig. 1

Examples of different plant cultivation systems. a Pepper “V” system. b Tomato high-wire system. c Strawberry “Table-top” system

State of the art in Research

Concerning selective harvesting of high-value crops, a thorough review of 50 robots developed in the past three decades was made in [5]. On average, the systems had a harvest success rate of 66% with a cycle time of 33 s. Success rates for fruit localization and detachment were 85% and 75%, respectively. However, it must be noted that the different studies cannot be compared directly, as they were all conducted in different experimental setups. Moreover, many studies modified the crop to simplify the task.

To illustrate recent work, we discuss work on sweet pepper, strawberry, and tomato harvesting. In the EU project CROPS and its follow-up SWEEPER (www.sweeper-robot.eu), a robotic system for sweet-pepper harvesting was developed (see Fig. 2). The system consisted of a 6-DOF industrial manipulator with a specially designed end-effector, an RGB-D camera with a GPU computer, programmable logic controllers, and a small container to store harvested fruit. Over a period of 4 weeks, the system was evaluated on 262 fruits [6], showing a harvest success rate of 61% in optimal crop conditions and 18% in current commercial conditions, illustrating the need for cultivation systems that are specifically designed for robotic harvesting. The average cycle time was 24 s, including fruit discharge and platform navigation. If the objects of interest were in the camera view, deep-learning techniques could successfully be applied for image segmentation and detection [7•, 8, 9]. However, due to the high level of occlusion, fruit detection and approach often failed in the commercial crop conditions. Another issue in these conditions was the end-effector colliding with the plant. In a similar project, the robot Harvey was developed and evaluated on 68 sweet peppers [10]. It reached a harvesting success rate of 76.5% in a modified crop and 47% in an unmodified crop, with a cycle time of 36.9 s, excluding platform navigation. The fruit- and peduncle-detection system, based on deep learning and 3D processing, worked well in the modified crop but suffered from clutter and occlusion in the unmodified crop. Similarly, the customized harvesting tool suffered from the complex unmodified conditions, resulting in fruit and plant damage and relatively low attachment and detachment rates. In a third project, an image-based closed-loop control system was developed for sweet-pepper harvesting, achieving an overall success rate of 53.3% and an average cycle time of 51.1 s [11].

Fig. 2

Prototype of the sweet-pepper-harvesting robot SWEEPER

Xiong et al. [12] focused on strawberry harvesting and developed a low-cost dual-arm harvesting robot. Their system was more resilient to lighting variations due to the modeling of color against light intensity. In order to deal with occlusions and clutter, the robot could use its gripper to push aside surrounding obstacles to pick strawberries located in clusters. The pick success rate ranged from 20% for the most complex scenarios, with one ripe strawberry in a cluster of unripe fruits, to 100% for situations with one isolated ripe strawberry. In two-arm mode, the system had a cycle time of 4.6 s. For tomato harvesting, Ling et al. [13] developed a dual-arm robot using a binocular vision sensor for fruit detection and localization. In a greatly simplified experimental setup, the success rate of this robot was 87.5%, with a harvesting cycle time, excluding platform navigation, of 29 s.

State of the art in Industry

Although harvesting robots are not commercially successful yet, there are several pre-commercial R&D initiatives [14•], such as for strawberry harvesting—Agrobot (www.agrobot.com), Octinion (octinion.com), DogTooth (https://dogtooth.tech), and Shibuya Seiki (www.shibuya-sss.co.jp)—tomato harvesting—Panasonic [15], MetoMotion-GRoW (metomotion.com), and Root AI (root-ai.com)—and de-leafing—Priva Kompano-DLR [16] and SAIA (https://www.saia-agrobotics.com/). Although great progress has been made in the past years, these initiatives do not yet meet the requirements on success rate and speed.

Conclusion of Current Robotics in Greenhouses

There have been many academic and industrial projects on the development of selective-harvesting robots for greenhouses to date. Although performance of the robots is slowly improving, due to, for instance, advances in deep learning and mechanical engineering, to date, the harvesting success and operational speed are too low for commercial application. A key challenge is dealing with the highly cluttered crop environment, illustrated by the fact that performance greatly improves when the crop is simplified by removing some leaves and fruits.

Orchard

In the orchard environment, the environmental parameters are uncontrolled, making the variation in natural conditions more prominent. In addition, the layout of an orchard is much less structured than that of a greenhouse, especially in mountainous environments, making robot navigation more challenging. Different from greenhouse crops, orchard crops grow for many years and are relatively sturdy, which reduces the risk of damage by the harvesting robot. Whereas the trees in traditional orchards are large, globular 3D structures, the trees in modern orchards are pruned, trimmed, and trained extensively, providing flat, 2D-like structures that optimize light interception and simplify robotic operations [17]; see Fig. 3. Activities in the orchard are similar to those in the greenhouse, with the addition of more thorough pruning and training of the trees.

Fig. 3

Examples of different orchard training systems. a Wall system. b Y-trellised system. c UFO system

State of the art in Research

When all fruits can be harvested from a tree at once and when they are allowed to get damaged, for instance for the juice market or for nuts, simple mechanical solutions—tree shakers—are commercially available [17]. Selective harvesting of fruits for the fresh market, however, is a much more complex and delicate operation that is actively being researched. Deep learning has revolutionized the detection of fruits in camera images. Sa et al. [18], for instance, showed the successful application of a deep neural network to detect different types of fruits, including apple, avocado, mango, and orange, with F1-scores above 0.93 and an average of 393-ms processing time per image. Moreover, the results indicated that the network could generalize to new environments and camera setups.
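The F1-scores reported above are the harmonic mean of a detector's precision and recall. A minimal sketch, using hypothetical detection counts for a single fruit class, illustrates the computation:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for one detection class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one fruit class: 93 correct detections,
# 4 false detections, and 6 missed fruits.
print(round(f1_score(93, 4, 6), 3))  # → 0.949
```

The harmonic mean penalizes detectors that trade one quantity heavily against the other, which is why it is the standard summary metric for fruit detection.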

Harvesting of apple and orange has been studied most intensively; see Fig. 4. Silwal et al. [19•] presented the design and evaluation of a robotic system for harvesting fresh-market apples. The system integrated a global camera setup, a seven-degrees-of-freedom (DOF) manipulator, and a grasping end-effector to execute fruit picking with open-loop control. The overall success rate of this robot was 84%, with an average picking time of 6.0 s per fruit. Zhao et al. [20] developed a manipulator with a custom 5-DOF structure to simplify control and obstacle avoidance. A spoon-shaped end-effector, including a pressure sensor to control the grasping force and a cutting knife, was designed to harvest the fruits. In a field test with 39 apples, a success rate of 77% was achieved, with an average cycle time of approximately 15 s. Baeten et al. [21] presented the Autonomous Fruit Picking Machine (AFPM) for apple harvesting, which combined an industrial manipulator with an eye-in-hand camera. To simplify perception, they used a cover to shield sunlight and provide more controlled illumination. Results showed the productivity to be close to the workload of about six workers, which makes the machine economically viable.

Fig. 4

Examples of apple-harvesting robots. a WSU apple picking robot (From: Silwal et al. [19•], with permission from John Wiley and Sons), b JSU apple harvesting robot (From: Zhao et al. [20], with permission from Elsevier), and c AFPM robot (From: Baeten et al. [21], with permission from Springer Nature)

For orange harvesting, a robust image-based visual-servoing controller for closed-loop control of a robotic manipulator was developed in [22, 23] to approach a target fruit in the presence of unknown fruit motion. An efficient and robust lighting system, with low-power image acquisition and processing hardware, and a reduced inspection chamber were developed by Cubero et al. [24]. Neither of these studies reported specific harvest success rates or cycle times.
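Image-based visual servoing of this kind closes the control loop directly on the pixel error between the detected fruit and a target image location. The following toy sketch illustrates the idea with a simple proportional update; the mapping from camera motion to feature motion is deliberately simplified (a real controller maps pixel error to joint velocities via the interaction matrix), and all numbers are hypothetical:

```python
def ibvs_step(feature, target, gain=0.4):
    """One proportional visual-servoing update: reduce the pixel error
    between the detected fruit centroid and the target image location."""
    ex = target[0] - feature[0]
    ey = target[1] - feature[1]
    # Toy model: the commanded camera motion moves the feature directly;
    # a real controller uses the image-Jacobian (interaction) matrix.
    return (feature[0] + gain * ex, feature[1] + gain * ey)

# Servo a detected fruit centroid toward the centre of a 640x480 image.
feature, target = (120.0, 400.0), (320.0, 240.0)
for _ in range(30):
    feature = ibvs_step(feature, target)
print(round(feature[0]), round(feature[1]))  # converges to 320 240
```

Because the loop acts on image measurements at every step, the approach remains valid even when the fruit moves, which is exactly the robustness property targeted in [22, 23].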

Research on orchard harvesting robotics covers a wide range of crops, such as grape [25], litchi [26], kiwifruit [27], cherry [28], peach, pear [29], and coconut [30].

State of the art in Industry

Despite decades of research, there are still no selective robotic harvesters in commercial use, although some initiatives seem to be close to commercialization [14•]. FFRobotics developed an apple harvesting robot with multiple arms and a three-fingered gripper that removes the apple with a twisting motion (www.ffrobotics.com). Abundant Robotics developed an apple-harvesting robot using a vacuum-based end-effector to detach the fruits from the plant (www.abundantrobotics.com). Energid developed a citrus harvesting robot (www.energid.com).

Conclusion of Current Robotics in Orchards

The biggest opportunity for robotic selective harvesting exists in the fresh market [17]. The limitations of robotic systems have been well documented in [5] and include insufficient cycle time, challenges with fruit detection in the presence of occlusions, and limitations with robust manipulation for fruit detachment.

Open Field

In open-field farming, the crops are produced on designated strips of land (fields) in the open air, where the plants grow in rows. Many open-field crops, such as wheat, maize, and potato, are mass-harvested at a single moment in time, with the destruction of the plant. For these crops, efficient mechanical harvesters exist. Selective harvesting is required for crops that grow less homogeneously or are multi-annual, such as asparagus, broccoli, lettuce, and melon. Robotic harvesting of open-field crops poses more challenges than harvesting in protected crop production (greenhouse, indoor cultivation), mainly due to environmental variations (light, wind, rain) and less consistent plant development [5].

State of the art in Research

Different from greenhouse and orchard harvesting where harvesting robots typically observe the crop from the side, open-field harvesters typically take a top view. To mitigate environmental variations, most systems use a cover to shield direct sunlight and to protect against rain.

Several efforts have been made to develop a selective harvesting robot for asparagus [31,32,33]. Chatzimichali et al. [31] presented a robot design for the selective harvest of white asparagus (which grows below the soil surface). Their design consisted of a caterpillar robot platform and two cameras for the identification of the asparagus tips. Leu et al. [33] presented a harvesting robot for green asparagus (which grows above the soil surface). Their robotic system consisted of a four-wheeled platform, an RGB-D camera, and two robotic harvesting tools (Fig. 5). The green asparagus were detected and tracked using a 3D point-cloud algorithm. The robotic harvesting tool consisted of an end-effector with two rubber claws and two blades that could cut a single asparagus spear in approximately 2 s. With two harvesting tools, an average of five asparagus plants could be harvested per meter. Leu et al. [33] reported a harvest success rate of 90% when tested on green asparagus fields. A video of the field performance can be found online [34].
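One simple way to think about such point-cloud detection is to bin points by their ground-plane position and keep bins whose tallest point exceeds the cutting height. The sketch below is only a schematic stand-in for the actual 3D algorithm of Leu et al.; the bin size, height threshold, and point data are all hypothetical:

```python
def spear_candidates(points, cell=0.05, min_height=0.18):
    """Bin 3D points (x, y, z in metres) by ground-plane cell and keep
    cells whose tallest point exceeds a cutting height (hypothetical
    thresholds; the real system uses full 3D point-cloud processing)."""
    tallest = {}
    for x, y, z in points:
        key = (round(x / cell), round(y / cell))
        tallest[key] = max(tallest.get(key, 0.0), z)
    return [key for key, top in tallest.items() if top >= min_height]

# Two synthetic "spears": one tall enough to cut, one still too short.
spear_a = [(0.10, 0.10, z) for z in (0.05, 0.12, 0.22)]
spear_b = [(0.50, 0.10, z) for z in (0.04, 0.10)]
print(spear_candidates(spear_a + spear_b))  # → [(2, 2)]
```

Only the cell containing the taller spear is returned, mirroring the selective-harvest decision of cutting only spears that have reached market length.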

Fig. 5

A close-up of the end-effector that harvests the green asparagus

Three research projects aimed to develop a selective harvesting robot for brassica crops (specifically broccoli and cauliflower). Kusumam et al. [35] developed a 3D-vision algorithm using machine learning to detect broccoli heads in RGB-D images. Blok et al. [36] studied the detection of broccoli heads using deep learning with a specific focus on the generalization of the method to the selective harvesting of new cultivars. Klein et al. [37] presented a feasibility study for the development of a selective harvesting robot for cauliflower. Their prototype robot consisted of an aluminum frame with LED lights, three RGB-D cameras for crop detection and maturity evaluation, and two dexterous robotic arms to cut and pick the cauliflower.

Birrell et al. [38] presented Vegebot, a selective-harvesting robot for iceberg lettuce. Vegebot was equipped with two RGB cameras and one robotic arm with a custom-made end-effector. For the image analysis, two convolutional neural networks (CNNs) were used. The first network localized the iceberg lettuces, whereas the second one classified the detected lettuces into three classes (harvest-ready, immature, infected). The lettuce was harvested with a pneumatic end-effector that was equipped with a camera, a belt drive, and a soft gripper. A force-feedback control system was used to detect whether the gripper had reached the ground plane, after which the iceberg lettuce was cut by a knife. In field tests, an 88% harvest success rate and an average harvest time of 31.7 s were reported.
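The two-stage structure of such a pipeline, detection followed by classification, can be sketched as follows. Both stages are hypothetical stand-ins (the real Vegebot uses two trained CNNs), and the feature names and thresholds here are invented for illustration only:

```python
def detect_lettuces(image):
    """Stage 1 stand-in: return candidate bounding boxes (x, y, w, h).
    A real detector would run a localization CNN on the image."""
    return image["boxes"]

def classify_lettuce(crop):
    """Stage 2 stand-in: assign one of the three Vegebot classes from
    simple, invented features instead of a classification CNN."""
    if crop["infected"]:
        return "infected"
    return "harvest-ready" if crop["diameter_cm"] >= 15 else "immature"

def harvest_targets(image, crops):
    """Keep only the boxes whose crop is classified harvest-ready."""
    return [box for box, crop in zip(detect_lettuces(image), crops)
            if classify_lettuce(crop) == "harvest-ready"]

image = {"boxes": [(10, 10, 60, 60), (90, 20, 55, 55), (160, 30, 40, 40)]}
crops = [{"diameter_cm": 17, "infected": False},
         {"diameter_cm": 18, "infected": True},
         {"diameter_cm": 11, "infected": False}]
print(harvest_targets(image, crops))  # → [(10, 10, 60, 60)]
```

Splitting localization from maturity classification lets each stage be trained and evaluated separately, which is the design choice reported for Vegebot.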

Foglia and Reina [39] developed a prototype robot for the selective harvest of radicchio. The robot consisted of a pneumatic manipulator and gripper with an embedded RGB camera. The image analysis was based on color filtering and morphological operators. The gripper had two bucket-like cutting fingers that were triggered by the resistance of the soil to cut the radicchio 10 mm underground. In laboratory conditions, the detection error was less than 6.3%, with an average harvest time of 6.5 s.
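Morphological operators of this kind clean up the binary mask produced by color filtering before the crop is localized. A minimal pure-Python sketch of 3×3 binary erosion, applied to a hypothetical mask, shows the effect:

```python
def erode(mask):
    """3x3 binary erosion: a pixel survives only if its entire
    neighbourhood is foreground. This removes speckle noise left
    over after colour thresholding."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(mask[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)))
    return out

# Hypothetical mask from a colour filter: a 3x3 crop blob plus one
# isolated noise pixel on the right.
mask = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0, 0],
        [0, 1, 1, 1, 0, 1],
        [0, 1, 1, 1, 0, 0],
        [0, 0, 0, 0, 0, 0]]
eroded = erode(mask)
print(sum(map(sum, eroded)))  # → 1 (only the blob centre survives)
```

In practice erosion is usually paired with dilation (an opening) so that the genuine crop region regains its original size after the noise is removed.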

Edan [40] presented a selective harvesting robot for melon. The robot was constructed as an implement drawn by a tractor. It was equipped with two black-and-white cameras, a Cartesian manipulator, and a pneumatic gripper. The melons were detected with a texture- and shape-based image algorithm. For the path planning of the robot, the traveling salesman algorithm was used. The pneumatic gripper was equipped with a proximity sensor that detected whether the ground plane was reached. The melon was then grabbed and lifted so that its stem was stretched before being cut by two knives. Edan et al. [41] tested the robot during two seasons and reported a 93% detection rate and an 86% harvest success rate. The average harvest time was 15 s.
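A traveling-salesman formulation orders the detected fruits so that the manipulator travels a short path between them. As an illustration (the exact solver used by Edan is not specified here), a simple nearest-neighbour heuristic over hypothetical melon positions can be sketched as:

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy travelling-salesman heuristic: repeatedly visit the
    closest unvisited fruit. A cheap illustrative stand-in for a
    full TSP solver."""
    unvisited = set(range(len(points))) - {start}
    tour, current = [start], start
    while unvisited:
        current = min(unvisited,
                      key=lambda i: math.dist(points[current], points[i]))
        unvisited.remove(current)
        tour.append(current)
    return tour

# Hypothetical melon positions (metres) in the robot's working frame.
melons = [(0.0, 0.0), (2.0, 0.1), (0.2, 1.0), (2.1, 1.1)]
print(nearest_neighbour_tour(melons))  # → [0, 2, 3, 1]
```

Even this greedy ordering avoids the zig-zag motion of visiting fruits in detection order, which directly shortens the per-field cycle time.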

State of the art in Industry

To the best of our knowledge, the asparagus harvesting robot Sparter from Cerescon (www.cerescon.com) is the only open-field harvesting robot that is commercially available. The robot is equipped with underground sensors to detect the asparagus and two harvesting tools per row. The operating speed is approximately 0.3 ha/h. Another robot that is almost on the market is RoboVeg (www.roboveg.com), a selective broccoli harvesting robot.

Conclusion of Current Robotics on the Open Field

All presented robots were developed since the 1990s and were specifically built for the selective harvest of vegetables (asparagus, broccoli, cauliflower, lettuce, radicchio, and melon). Except for the Sparter robot, all robots used cameras to detect and localize the crops. The most recently developed robots used deep learning for robust image analysis. Two of the six presented robot manipulators were self-made, and the other four were purchased. Every end-effector was custom-made and performed the cutting action with some kind of robotic knife. The harvest success and speed are high compared to the greenhouse and orchard settings, due to the less complex structure of open-field crops.

Limitations, Trends, and Future Research

Robotics has been extremely successful in the production industry, building on a long tradition of improving production efficiency by separating tasks, implementing well-structured and well-controlled working environments with low variation in working conditions, and, last but not least, reducing variation in the objects. Essentially, Henry Ford’s famous phrase “Any customer can have a car painted any color that he wants so long as it is black”, together with assembly-line manufacturing, paved the way for robotic operation. Compared with the production industry, robotics in agriculture lags behind significantly. In the next sections, the main technical challenges of agricultural robotic systems will be identified and solution directions will be described.

Current State of Selective Harvesting Robotics

The “State of the art in Selective Harvesting” section provided an overview of the state of the art in selective harvesting robotics in greenhouse, orchard, and open-field conditions. Despite a few decades of research, selective harvesting robots currently do not meet the requirements for commercial success in terms of harvest success rate and speed. For harvesting success, the critical components are perception (the detection of the produce and other plant parts) and the harvesting tool and operation. Looking at the challenges for agricultural robotics posed in the “Introduction” section, a number of observations can be drawn from the state-of-the-art overview:

  • Perception. Recent advances in the field of deep learning have greatly improved perception, making it more robust to the challenge of variation. Deep-learning-based detection algorithms have been shown to be robust to variations in the appearance of the objects and in environmental conditions. The methods also generalize quite well to new cultivars and environments. Dealing with incomplete information due to occlusions is still a big challenge when operating in complex commercial production environments. This is especially the case in the greenhouse and the orchard, as the plants there are more complex than those in the open field.

  • Harvesting tool and operation. There is no clear paradigm visible in the design of the end-effector: every study developed its own custom harvesting tool, and most tools were quite rigid and bulky. Detachment was usually performed with an automated cutting knife and in some cases with a twisting motion or suction. In complex, cluttered environments, harvesting success dropped, often because the tool could not reach the right location due to collisions with the plant, or because perception limitations prevented localizing the correct cutting point. In addition, the plant and fruits were frequently damaged by the tools.

  • Operation speed. Cycle times of greenhouse and orchard robots are typically in the range of half a minute, which is significantly slower than human operation, obstructing commercial application. Due to the simpler crop structure, robotic harvesting of vegetables in the open field can typically be done much faster than in the greenhouse.

  • Complexity of environment. In some greenhouse studies, when the crop was modified to reduce clutter by removing some leaves and fruits, harvesting success rate drastically improved. Detection success of the perception algorithms improved significantly as occlusions occurred less frequently, and the harvesting operation was more successful now that the tool had more space for a collision-free approach.

  • Task variation. All robots discussed in this review are designed only for the harvesting task. Though outside the scope of this paper, harvesting is only one task in the whole crop-production process, and various crop-maintenance operations need to be addressed when considering fully automated farming in the future [2]. Bi-functionality of a robotic platform, albeit in a very rudimentary fashion, was demonstrated by van Henten et al. [42] and van Henten et al. [43] for harvesting as well as leaf removal of cucumber plants grown in a high-wire cultivation system.

  • Safety. Safety of the robot for its plant/fruit environment was evaluated in a number of studies, in which damage to the fruit and plant was occasionally reported. Safety issues of autonomous robotic systems in open-field cultivation have received some attention (e.g., [44, 45]), yet safety in human-robot co-working is not an active field of research yet.

Solution Directions for Technical Challenges

Essentially, three solution directions offer opportunities in dealing with uncertainty and variation in agricultural robotics:

  • Reducing variation and uncertainty in the environment as well as in the plant population. Despite advances in the field of deep learning, the performance of machine-vision systems remains sensitive to variations in the illumination of the operational scene. Flooding the operating scene under a hood with artificial light has successfully mitigated this weakness in many applications. Operation at night is another option, although it restricts the robot’s operating window to nighttime. To reduce the complexity of the scene, breeding for robotics is an alternative pathway: there is a keen interest from plant breeders to select cultivars that are both productive and better suited for robotic treatment during production. Also, modification and standardization of the cultivation systems offer opportunities to reduce variation and uncertainty in the working environment of robotic farming systems [5, 42, 46].

  • Enhancing robotic technology. There are various ways to tackle variability and uncertainty in agricultural robotics. One way is to include more domain knowledge in the design and operation of robotic capabilities. This requires modeling of the world in which the robot has to operate, thus providing potential clues about the structure of the working environment, the presence and absence of objects, and the evolution of such characteristics in time due to growth and development. Another way is to extend the sensing capabilities beyond the common machine-vision systems operating in the visible and near-infrared spectrum. Tactile sensing is an alternative that has hardly received attention in the agricultural robotics community. Combining different sensing modalities in a multi-modal sensing framework might literally provide more insight into the work scene of the robot. Also, active perception, in which the robot resolves uncertainty in the environment by actively gathering new sensory input through changing perspective and manipulating objects, has potential in dealing with uncertainty and variation, as for instance proposed in [47, 48]. The current, quite rigid gripping technology in agri-food does not work well in conditions that demand short cycle times while dealing with variability in product size and softness. Compliant actuators and end-effectors that combine different grasp types and use tactile sensing and control to realize different force distributions and grasp stiffnesses are needed to deal with these challenges.

  • Human-robot collaboration. Variation and uncertainty in agriculture together with the relatively immature status of robotic technology when it comes to dealing with these challenges prohibit rapid deployment of fully autonomous robotic technology in agriculture. An intermediate step towards autonomy might be the combination of robotic skills with human capabilities in a human-robot co-working framework [49,50,51].

Trends in Agricultural Robotics: a Wider Perspective

This paper provided an overview of robotic technology for selective harvesting in agriculture. Societal needs, state of the art of technology, technical challenges, and potential solution directions were addressed. Yet, that is only part of the story when it comes to adoption of robotic technology in agricultural practice.

Economic viability is a key issue in the adoption of technology. Yet, economic viability should be addressed from a wider perspective than just balancing direct costs and benefits of a certain technology. Novel technology may provide advantages with no directly accountable economic return: the freedom to attend to other tasks and to develop a social life has been a key success factor in the adoption of the milking robot. When it comes to economic viability, discussions on robotic technology are often based on the reasoning that novel technology should replace 100% of human labor to be economically viable. Given the technical challenges listed above, this line of reasoning hampers innovation. Partial replacement of human labor and human-robot co-working are potential alternatives when farmers are willing to rethink the procedures used in their farming operation.

There is clearly some tension between the romantic image of agricultural food production and the use of robotic technology. While advancing technology should continue to rank high on the research agendas to meet the challenges faced by society, this progress should be accompanied by careful thought on the consequences of such technologies for society. The discussion on robot ethics, also in the framework of agri-food, deserves attention [52].

Finally, agricultural production systems themselves are developing. Stimulated by growing concerns about the long-term sustainability of current large-scale mono-cropping cultures, intercropping and pixel farming are being revisited as better alternatives [53]. This, however, requires rethinking farming at large, as well as the specific technology used in farming. Given current and future limitations in the availability of human labor, robotic technology might facilitate and support such developments in agronomy. Yet, these cropping systems introduce even more variation and uncertainty, and thus additional challenges for robotic technology, both in selective harvesting and in crop production as a whole.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance

  1. UN: World population prospects: the 2017 revision, key findings and advance tables. ESA/P/WP/248. https://esa.un.org/unpd/wpp/Publications/Files/WPP2017_KeyFindings.pdf (2017).

  2. Kootstra G, Bender A, Perez T, van Henten EJ: Robotics in agriculture. In: Encyclopedia of robotics. Springer, Berlin, Heidelberg (2020).

  3. van Henten EJ. Greenhouse mechanization: state of the art and future perspective. Acta Hortic. 2006;710:55–69.

  4. van Henten EJ, Bac CW, Hemming J, Edan Y: Robotics in protected cultivation. Paper presented at the Proceedings of the 4th IFAC conference on modelling and control in agriculture, Espoo, 27–30 Aug 2013.

  5. Bac CW, van Henten EJ, Hemming J, Edan Y. Harvesting robots for high-value crops: state-of-the-art review and challenges ahead. J Field Robot. 2014;31(6):888–911.

  6. Arad B, Balendonck J, Barth R, Ben-Shahar O, Edan Y, Hellstrom T, et al. Development of a sweet pepper harvesting robot. J Field Robot. 2020;37(6):1027–39. https://doi.org/10.1002/rob.21937.

  7. • Barth R, IJsselmuiden J, Hemming J, van Henten EJ. Data synthesis methods for semantic segmentation in agriculture: a Capsicum annuum dataset. Comput Electron Agric. 2018;144:284–96. https://doi.org/10.1016/j.compag.2017.12.001. This paper shows the power of deep learning for fruit detection, including data synthesis methods to deal with limited training data.

  8. Barth R, IJsselmuiden J, Hemming J, van Henten EJ. Synthetic bootstrapping of convolutional neural networks for semantic plant part segmentation. Comput Electron Agric. 2019;161:291–304. https://doi.org/10.1016/j.compag.2017.11.040.

  9. Barth R, Hemming J, van Henten EJ. Optimising realism of synthetic images using cycle generative adversarial networks for improved part segmentation. Comput Electron Agric. 2020;173:105378. https://doi.org/10.1016/j.compag.2020.105378.

  10. Lehnert C, McCool C, Sa I, Perez T. Performance improvements of a sweet pepper harvesting robot in protected cropping environments. J Field Robot. 2020. https://doi.org/10.1002/rob.21973.

  11. Lee B, Kam D, Min B, Hwa J, Oh S: A vision servo system for automated harvest of sweet pepper in Korean greenhouse environment. Appl Sci. 2019;9(12). https://doi.org/10.3390/app9122395.

  12. Xiong Y, Ge YY, Grimstad L, From PJ. An autonomous strawberry-harvesting robot: design, development, integration, and field evaluation. J Field Robot. 2020;37(2):202–24. https://doi.org/10.1002/rob.21889.

  13. Ling X, Zhao YS, Gong L, Liu CL, Wang T. Dual-arm cooperation and implementing for robotic harvesting tomato using binocular vision. Robot Auton Syst. 2019;114:134–43. https://doi.org/10.1016/j.robot.2019.01.019.

  14. • Shamshiri RR, Weltzien C, Hameed IA, Yule IJ, Grift TE, Balasundram SK, et al. Research and development in agricultural robotics: a perspective of digital farming. Int J Agr Biol Eng. 2018;11(4):1–14. https://doi.org/10.25165/j.ijabe.20181104.4278. This paper gives an overview of agricultural robotics in the area of weed control, field scouting, and harvesting.

  15. Humphries M: Panasonic’s new robot harvests tomatoes as fast as a human. https://www.pcmag.com/news/panasonics-new-robot-harvests-tomatoes-as-fast-as-a-human (2017).

  16. Sparks B: Priva developing deleafing robot for tomatoes. https://www.greenhousegrower.com/technology/priva-developing-deleafing-robot-for-tomatoes (2016).

  17. Zhang Q: Automation in tree fruit production: principles and practice. CABI (2018).

  18. Sa I, Ge Z, Dayoub F, Upcroft B, Perez T, McCool C. DeepFruits: a fruit detection system using deep neural networks. Sensors. 2016;16(8):1222.

  19. • Silwal A, Davidson JR, Karkee M, Mo CK, Zhang Q, Lewis K. Design, integration, and field evaluation of a robotic apple harvester. J Field Robot. 2017;34(6):1140–59. https://doi.org/10.1002/rob.21715. This paper provides a complete robotic system for selective apple harvesting.

  20. Zhao DA, Lv JD, Ji W, Zhang Y, Chen Y. Design and control of an apple harvesting robot. Biosyst Eng. 2011;110(2):112–22. https://doi.org/10.1016/j.biosystemseng.2011.07.005.

  21. Baeten J, Donné K, Boedrij S, Beckers W, Claesen E: Autonomous fruit picking machine: a robotic apple harvester. In: Field and service robotics, vol. 42. Springer Tracts in Advanced Robotics. Springer, Berlin, Heidelberg (2008).

  22. Mehta SS, Burks TF. Vision-based control of robotic manipulator for citrus harvesting. Comput Electron Agric. 2014;102:146–58. https://doi.org/10.1016/j.compag.2014.01.003.

  23. Mehta SS, MacKunis W, Burks TF. Robust visual servo control in the presence of fruit motion for robotic citrus harvesting. Comput Electron Agric. 2016;123:362–75. https://doi.org/10.1016/j.compag.2016.03.007.

  24. Cubero S, Aleixos N, Albert F, Torregrosa A, Ortiz C, Garcia-Navarrete O, et al. Optimised computer vision system for automatic pre-grading of citrus fruit in the field using a mobile platform. Precis Agric. 2014;15(1):80–94. https://doi.org/10.1007/s11119-013-9324-7.

  25. Luo LF, Tang YC, Lu QH, Chen X, Zhang P, Zou XJ. A vision methodology for harvesting robot to detect cutting points on peduncles of double overlapping grape clusters in a vineyard. Comput Ind. 2018;99:130–9. https://doi.org/10.1016/j.compind.2018.03.017.

  26. Wang CL, Zou XJ, Tang YC, Luo LF, Feng WX. Localisation of litchi in an unstructured environment using binocular stereo vision. Biosyst Eng. 2016;145:39–51. https://doi.org/10.1016/j.biosystemseng.2016.02.004.

  27. Barnett J, Duke M, Au CK, Lim SH. Work distribution of multiple Cartesian robot arms for kiwifruit harvesting. Comput Electron Agric. 2020;169:105202. https://doi.org/10.1016/j.compag.2019.105202.

  28. Amatya S, Karkee M, Zhang Q, Whiting MD. Automated detection of branch shaking locations for robotic cherry harvesting using machine vision. Robotics. 2017;6(4):31. https://doi.org/10.3390/robotics6040031.

  29. Font D, Pallejà T, Tresanchez M, Runcan D, Moreno J, Martínez D, et al. A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm. Sensors. 2014;14:11557–79.

  30. Parvathi S, Selvi ST: Design and fabrication of a 4 degree of freedom (DOF) robot arm for coconut harvesting. In: International Conference on Intelligent Computing and Control (I2C2), Coimbatore, India, 23–24 June 2017.

  31. Chatzimichali AP, Georgilas IP, Tourassis VD: Design of an advanced prototype robot for white asparagus harvesting. In: 2009 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 14–17 July 2009, pp. 887–892.

  32. Clary C, Ball T, Ward E, Fuchs S, Durfey J, Cavalieri R, et al. Performance and economic analysis of a selective asparagus harvester. Appl Eng Agric. 2007;23(5):571–7.

  33. Leu A, Razavi M, Langstädtler L, Ristić-Durrant D, Raffel H, Schenck C, et al. Robotic green asparagus selective harvesting. IEEE/ASME Trans Mechatronics. 2017;22(6):2401–10. https://doi.org/10.1109/TMECH.2017.2735861.

  34. ECHORD++: GARotics – green asparagus harvesting robotic system. https://www.youtube.com/watch?v=wcp2Uq2E6IE (2020). Accessed 8 June 2020.

  35. Kusumam K, Krajník T, Pearson S, Duckett T, Cielniak G. 3D-vision based detection, localization, and sizing of broccoli heads in the field. J Field Robot. 2017;34(8):1505–18. https://doi.org/10.1002/rob.21726.

  36. Blok PM, van Evert FK, Tielen APM, van Henten EJ, Kootstra G. The effect of data augmentation and network simplification on the image-based detection of broccoli heads with Mask R-CNN. J Field Robot. 2020;38:85–104. https://doi.org/10.1002/rob.21975.

  37. Klein FB, Wilmot A, Tejada VFD, Rodriguez BL, Requena I, Busch S, et al: Proof-of-concept modular robot platform for cauliflower harvesting. In: Precision agriculture ‘19, pp. 783–789 (2019).

  38. Birrell S, Hughes J, Cai JY, Iida F. A field-tested robotic harvesting system for iceberg lettuce. J Field Robot. 2019;37:225–45. https://doi.org/10.1002/rob.21888.

  39. Foglia MM, Reina G. Agricultural robot for radicchio harvesting. J Field Robot. 2006;23(6–7):363–77. https://doi.org/10.1002/rob.20131.

  40. Edan Y. Design of an autonomous agricultural robot. Appl Intell. 1995;5(1):41–50.

  41. Edan Y, Rogozin D, Flash T, Miles GE. Robotic melon harvesting. IEEE Trans Robot Autom. 2000;16(6):831–5. https://doi.org/10.1109/70.897793.

  42. van Henten EJ, Hemming J, van Tuijl BAJ, Kornet JG, Meuleman J, Bontsema J, et al. An autonomous robot for harvesting cucumbers in greenhouses. Auton Robot. 2002;13(3):241–58. https://doi.org/10.1023/A:1020568125418.

  43. van Henten EJ, van Tuijl BAJ, Hoogakker GJ, van der Weerd MJ, Hemming J, Kornet JG, et al. An autonomous robot for de-leafing cucumber plants grown in a high-wire cultivation system. Biosyst Eng. 2006;94(3):317–23. https://doi.org/10.1016/j.biosystemseng.2006.03.005.

  44. van Evert FK, Nieuwenhuizen AT: Obstacle detection for autonomous vehicles in agriculture. Paper presented at the Proceedings of Measuring Behavior, Utrecht, 2012.

  45. Kohanbash D, Bergerman M, Lewis KM, Moorehead SJ: A safety architecture for autonomous agricultural vehicles. Paper presented at the Proceedings of the American Society of Agricultural and Biological Engineers Annual Meeting, July 2012.

  46. van Henten EJ, van Tuijl BAJ, Hemming J, Kornet JG, Bontsema J, van Os EA. Field test of an autonomous cucumber picking robot. Biosyst Eng. 2003;86(3):305–13. https://doi.org/10.1016/j.biosystemseng.2003.08.002.

  47. Barth R, Hemming J, van Henten EJ. Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation. Biosyst Eng. 2016;146:71–84. https://doi.org/10.1016/j.biosystemseng.2015.12.001.

  48. Lehnert C, Tsai D, Eriksson A, McCool C: 3D Move to See: multi-perspective visual servoing towards the next best view within unstructured and occluded environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019.

  49. Berenstein R, Ben Halevi I, Edan Y: A remote interface for a human-robot cooperative vineyard sprayer. Paper presented at the International Conference on Precision Agriculture (ICPA), Indianapolis, Indiana, USA, July 15–18, 2012.

  50. Cheein FA, Herrera D, Gimenez J, Carelli R, Torres-Torriti M, Rosell-Polo JR, et al: Human-robot interaction in precision agriculture: sharing the workspace with service units. Paper presented at the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, Spain, 17–19 March 2015.

  51. Murakami N, Ito A, Will JD, Steffen M, Inoue K, Kita K, et al. Development of a teleoperation system for agricultural vehicles. Comput Electron Agric. 2008;63(1):81–8. https://doi.org/10.1016/j.compag.2008.01.015.

  52. van Henten EJ, Enzing C. CPS for agriculture and food supply. In: Van Woensel L, Kurrer C, Kritikos M, editors. Scientific foresight study ethical aspects of cyber-physical systems: scientific foresight study. Brussels: European Parliament; 2016.

  53. Lakkenborg-Kristensen: SureVeg – strip-cropping and recycling for biodiverse and resource-efficient intensive vegetable production. https://projects.au.dk/fileadmin/projects/coreorganiccofund/sureveg_leaflet_web.pdf (2020). Accessed 18 August 2020.

Funding

Gert Kootstra reports grants from The Dutch Research Council (NWO), funded through the program “FlexCRAFT” (program number: P17-01).

Author information

Corresponding author

Correspondence to Gert Kootstra.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection on Agriculture Robotics

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Kootstra, G., Wang, X., Blok, P.M. et al. Selective Harvesting Robotics: Current Research, Trends, and Future Directions. Curr Robot Rep (2021). https://doi.org/10.1007/s43154-020-00034-1

Keywords

  • Agriculture
  • Agricultural robotics
  • Selective harvesting