Keywords

1 Introduction

With the advancement of technology, the observation and study of life phenomena have developed toward fine, quantitative methods. In agriculture, monitoring methods integrating optical, mechanical, electrical, and other techniques have played a key role in production, scientific research, and other key links. Among them, machine vision technology, being rapid, nondestructive, and non-contact, has achieved very good results in quality detection of agricultural products, crop growth monitoring, automatic navigation of agricultural machinery, and identification of crop diseases and insect pests [1,2,3,4,5,6,7,8,9].

Machine vision technology can obtain digital information on plant growth in real time and continuously [10, 11], and it plays an important role in understanding plant growth and formulating reasonable control measures. Ma et al. [12] used machine vision and image processing technology to monitor the growth information of many plants. They found that, in image processing, the EEC (Estimation Error Cancel) image analysis method based on sub-pixel and region matching can improve detection accuracy effectively. Using this system to monitor the growth of chrysanthemum, they observed that the growth rate of the chrysanthemum during the day was lower than that at night.

Plants grow in the three-dimensional world. Two-dimensional image monitoring of crop growth can perform well when the crop biomass is small and the change of morphology is slight. However, for crop organs with complex forms, such as maize leaves, it is difficult to ensure that the motion of the monitored blade stays in the projection plane of the camera.

The basic principle of binocular stereo vision is to observe the same object from two viewpoints in order to obtain a stereo image pair and match the corresponding image points, from which the parallax can be calculated and the 3D information obtained [13,14,15,16,17].

The difficulty of binocular stereo vision lies in the stereo matching of the left and right images. The stereo matching method based on feature points in Ref. [13] makes it difficult to obtain a dense 3D point cloud and the 3D coordinates of target points. References [16, 17] used global matching methods based on the simulated annealing algorithm and dynamic programming to obtain dense 3D point clouds of the maize leaf; however, the error of the solution is large, and the subtle movement of maize leaves cannot be well identified. The method presented in Refs. [14, 15] is specialized and is not suitable for the folding and rolling of the blade edge. In view of the shortcomings of these methods, and in order to monitor the motion state of maize leaves, this paper presents a method that attaches fluorescent beads to maize leaves as markers, which simplifies the stereo matching algorithm and improves the precision of the 3D reconstruction of the target.

2 System Design

The monitoring system includes a hardware support structure and a back-end image processing algorithm. The main function of the hardware support structure is to create a relatively stable image acquisition environment in the maize canopy, while the back-end software algorithm completes the image processing, 3D reconstruction, visualization, and so on.

2.1 Composition and Function of the System Hardware

The system is composed of a binocular camera, a sliding shaft, a sliding rod, a bracket, an image acquisition card, a data line, and a computer. Images are acquired with a Beijing Microview MVC1000SAM-GE30ST binocular camera and 8.5 mm Pentax lenses; the effective resolution is 1280 × 1024 pixels, the sensor size is 1/2 in., and the pixel size is 5.2 × 5.2 μm. When images are acquired, the binocular camera is moved along the sliding shaft to a position above the monitored blade, and the vertical height of the camera is adjusted by the tripod supporting the sliding rod; a distance of 30-50 cm between camera and blade is preferable. During image acquisition the internal and external camera parameters remain unchanged, and a sunny day with good light should be chosen so that the aperture setting can ensure images with a good depth of field (Fig. 1).

Fig. 1.

Binocular stereo vision system. 1. A binocular camera 2. A sliding rod 3. A bracket 4. An image acquisition card 5. A data line 6. A computer.

2.2 Principle of Binocular Stereo Vision

Let M(X, Y, Z) be a point on the marker. m1(x1, y1) and m2(x2, y2) are the image coordinates of the projections of M in the left and right images, and the projection matrix of the left and right cameras is Pi:

$$ \varvec{P}_{i} = \left[ {\begin{array}{*{20}c} {\mathop a\nolimits_{11}^{i} } & {\mathop a\nolimits_{12}^{i} } & {\mathop a\nolimits_{13}^{i} } & {\mathop a\nolimits_{14}^{i} } \\ {\mathop a\nolimits_{21}^{i} } & {\mathop a\nolimits_{22}^{i} } & {\mathop a\nolimits_{23}^{i} } & {\mathop a\nolimits_{24}^{i} } \\ {\mathop a\nolimits_{31}^{i} } & {\mathop a\nolimits_{32}^{i} } & {\mathop a\nolimits_{33}^{i} } & {\mathop a\nolimits_{34}^{i} } \\ \end{array} } \right]\;\;\;\left( {i\, = \,1,2} \right) $$
(1)

and so

$$ \mathop w\nolimits_{i} \left[ {\begin{array}{*{20}c} {\mathop x\nolimits_{i} } \\ {\mathop y\nolimits_{i} } \\ {\mathop 1\nolimits_{{}} } \\ \end{array} } \right] = \varvec{P}_{i} \left[ {\begin{array}{*{20}c} X \\ Y \\ Z \\ 1 \\ \end{array} } \right]\;\;\;\left( {i\, = \,1,2} \right) $$
(2)

where (x1, y1, 1) and (x2, y2, 1) are the homogeneous coordinates of m1 and m2 in their respective images, (X, Y, Z, 1) are the homogeneous coordinates of the point M(X, Y, Z) in the world coordinate system, wi is a nonzero scale factor, and a^i_mn (i = 1, 2; m = 1, 2, 3; n = 1, 2, 3, 4) are the elements of the projection matrix Pi, which encodes the intrinsic parameters (focal length, distortion) and the extrinsic parameters (translation and rotation) of the camera. From the image coordinates m1(x1, y1), m2(x2, y2) of the measured point M and formula (2), the world coordinates (X, Y, Z) of the unknown point M can be obtained:

$$ \left[ {\begin{array}{*{20}c} {(a_{11}^{i} - a_{31}^{i} x_{i} )} & {(a_{12}^{i} - a_{32}^{i} x_{i} )} & {(a_{13}^{i} - a_{33}^{i} x_{i} )} \\ {(a_{21}^{i} - a_{31}^{i} y_{i} )} & {(a_{22}^{i} - a_{32}^{i} y_{i} )} & {(a_{23}^{i} - a_{33}^{i} y_{i} )} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} X \\ Y \\ Z \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {(a_{34}^{i} x_{i} - a_{14}^{i} )} \\ {(a_{34}^{i} y_{i} - a_{24}^{i} )} \\ \end{array} } \right] $$
(3)

The camera projection matrices Pi are obtained by Zhang's planar template calibration method [18]; the details are not described here owing to space limitations.
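As a minimal illustrative sketch (not the authors' implementation), the two linear constraints per camera obtained by eliminating wi from Eq. (2) can be stacked into an overdetermined 4 × 3 system and solved by least squares; the function name `triangulate` and the use of NumPy are assumptions:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Recover the world point M = (X, Y, Z) from its projections m1, m2
    in the left and right images, given 3x4 projection matrices P1, P2.
    For each camera, eliminating the scale factor w_i from w_i*(x, y, 1)^T
    = P_i*(X, Y, Z, 1)^T yields two linear equations in (X, Y, Z); the
    four stacked equations are solved by least squares."""
    A, b = [], []
    for P, (x, y) in ((P1, m1), (P2, m2)):
        A.append(P[0, :3] - x * P[2, :3])   # (a11 - a31*x, a12 - a32*x, a13 - a33*x)
        b.append(x * P[2, 3] - P[0, 3])     # a34*x - a14
        A.append(P[1, :3] - y * P[2, :3])   # (a21 - a31*y, a22 - a32*y, a23 - a33*y)
        b.append(y * P[2, 3] - P[1, 3])     # a34*y - a24
    M, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return M
```

In practice, the calibrated matrices P1 and P2 would come from Zhang's method, and m1, m2 would be the matched marker centroids in the left and right images.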

2.3 Image Processing Algorithm

The computer used for algorithm development has a Pentium (R) D processor and 2.0 GB of memory. The software is written in C++ using the OpenCV computer vision library. The binocular stereo vision system supports image acquisition at 30 frames/s. After the experiment, the collected data were processed to obtain the state information of plant growth. The flow chart of the blade motion monitoring algorithm is shown in Fig. 2.

Fig. 2.

Flow chart of the image processing

2.3.1 Image Acquisition and Preprocessing

In plant growth monitoring, a template of fixed form is placed on the surface of the tested plant, and the change of position of the plant is indicated by the movement of this template. Considering the complex environment of the field and of the plant itself, fluorescent beads of 0.35 cm diameter are used as landmarks in this paper; the beads are attached to a black disk, which is convenient for placement on different parts of maize leaves. According to the monitoring purpose, one or more markers can be placed. The binocular camera is moved to an appropriate position so that the marker template is in the field of view of both the left and right cameras. The image acquisition interval is set to 60 s, and the images are automatically stored in the computer for subsequent processing.

The initial image contains information about the leaves, marks, and soil background, as shown in Fig. 3.

Fig. 3.

Maize leaf image

As redundant information, the soil background will affect subsequent image processing. There are many methods to remove it from images of green plants, such as the threshold method that uses the difference in gray level between soil and plant, and the excess-green method, which assumes that green plants satisfy 2G > B + R. At the same time, the marker area must be preserved; the traditional methods remove part of the markers and affect the final calculation accuracy. In this paper, the decision-surface method is used to segment the image, which keeps both the blade and marker pixels. The decision-surface algorithm can be described as

$$ C = \frac{{R^{2} }}{{V^{2} }} + \frac{{\left( {1 - G} \right)^{2} }}{{\left( {YB + U} \right)^{2} }} $$
(4)

where R, G, and B are the normalized intensity values of the three colors red, green, and blue, ranging from 0 to 1. The original color intensities r, g, b range from 0 to 255, and the normalized values are R = r/255, G = g/255, B = b/255. V, Y, and U are parameters describing the shape of the decision surface: V is the maximum intensity of red at which green can still be perceived (with B = 0, G = 1); Y is the tilt index of the surface boundary in the red-green plane; U is the difference between the maximum and minimum green intensity at which green can be sensed when the blue and red channels are both zero. V, Y, and U are constants, here V = 0.85, Y = −0.37, U = 0.74. Substituting the normalized R, G, B values into the decision-surface algorithm gives the value of C. If C is greater than or equal to 1, the pixel belongs to the blade region and is retained; otherwise it belongs to the soil background and is discarded.
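The per-pixel computation of C in Eq. (4) can be sketched as follows (an illustrative sketch, not the authors' code; the function name and scalar per-pixel interface are assumptions, and in the paper the resulting C is compared against 1 to separate the two classes):

```python
# Constant surface-shape parameters given in the text.
V, Y, U = 0.85, -0.37, 0.74

def decision_surface(r, g, b):
    """Compute C of Eq. (4) from raw 0-255 intensities r, g, b,
    normalizing them to [0, 1] first:
        C = R^2 / V^2 + (1 - G)^2 / (Y*B + U)^2
    """
    R, G, B = r / 255.0, g / 255.0, b / 255.0
    return R**2 / V**2 + (1.0 - G)**2 / (Y * B + U)**2
```

Note that a pure green pixel (r = 0, g = 255, b = 0) gives C = 0, while a strongly red (soil-like) pixel gives a C well above 1, so the threshold of 1 cleanly separates green vegetation from reddish background.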

After segmentation and binarization, sporadic isolated island pixels of the soil background still remain in the image; these can be removed by area-threshold filtering. The final image contains only the target area, as shown in Fig. 4.

Fig. 4.

Background pixels removed image

2.3.2 Marker Extraction

The brightness of the markers is significantly higher than that of the surrounding environment, such as the leaves, so the contours of the fluorescent beads can be obtained by edge detection. If a bead directly faces the camera, its image contour is circular; however, maize leaves have large curvature, so some bead contours appear elliptical in the image. The Canny operator is applied to the image from the previous step for edge detection, and the following criteria are then used to identify the elliptical contours of the fluorescent beads:

(1) Closed edges. The edge of a marker point is a closed curve, so non-closed curves caused by texture, shadow, and so on are removed by retaining only closed contours.

(2) Brightness. The brightness of a marker is higher than that of the background, so closed curves in dark regions are removed by a gray-level threshold.

(3) Contour area. The pixel size of a marker in the image is relatively fixed, so contours of background objects and smaller silhouettes can be removed.

(4) Roundness. Because of perspective distortion the circular markers appear elliptical in the image, but compared with the contours of natural objects they are still close to circular. Whether a closed curve is a marker is judged by its roundness, computed as D²/(4πA), where D is the perimeter of the region and A its area; the closer the value is to 1, the more nearly circular the region.
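The roundness measure of criterion (4) can be sketched for a polygonal contour as follows (an illustrative sketch; the function name and vertex-list interface are assumptions):

```python
import math

def roundness(contour):
    """Roundness criterion D^2 / (4*pi*A) for a closed contour given as a
    list of (x, y) vertices: D is the perimeter, A the enclosed area.
    The value is 1 for a perfect circle and grows for elongated shapes."""
    n = len(contour)
    # Perimeter: sum of edge lengths around the closed polygon.
    D = sum(math.dist(contour[i], contour[(i + 1) % n]) for i in range(n))
    # Shoelace formula for the polygon area.
    A = abs(sum(contour[i][0] * contour[(i + 1) % n][1]
                - contour[(i + 1) % n][0] * contour[i][1]
                for i in range(n))) / 2.0
    return D * D / (4.0 * math.pi * A)
```

A contour sampled from a circle yields a value very close to 1, while a square already yields about 1.27, so a threshold slightly above 1 admits the mildly elliptical bead contours and rejects irregular natural shapes.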

By applying these conditions to the image from the previous step, the binary image containing only the marker points is obtained, as shown in Fig. 5.

Fig. 5.

The binary image of the template

2.3.3 Markers of Sub-pixel Refinement

The pixels on the marker edges are extracted, and the centroid of the edge pixels is used as the matching point between the left and right stereo images. With the previously calibrated intrinsic and extrinsic camera parameters, the 3D coordinates of this point can then be obtained. The marker edges extracted in the previous step are coarse; using them directly introduces a large error, and since some blade movements are small, this error would obscure the perception of the fine motion of the leaf. To further improve accuracy, a sub-pixel edge refinement algorithm is used: the edges are first located at the pixel level, and the edge points are then adjusted by the gradient-magnitude method to obtain sub-pixel edges.

The specific procedure of the gradient-magnitude method is as follows: (1) At each pixel-level edge point, compute the gradient components in the X and Y directions; the gradient magnitude G(x, y) and gradient direction α(x, y) of each edge point are calculated from these two components:

$$ \begin{aligned} & \;\;\text{gradient}\;\text{magnitude} \\ & G(x,y) = \sqrt {G_{x}^{2} + G_{y}^{2} } \\ \end{aligned} $$
(5)
$$ \begin{aligned} & \;\;\;\;\text{gradient}\;{\text{direction}} \\ & \alpha (x,y) = \arctan (G_{y} /G_{x} ) \\ \end{aligned} $$
(6)

The convolution templates used to compute Gx and Gy are shown in Fig. 6.

Fig. 6.

Gradient convolution templates
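Step (1) with Eqs. (5) and (6) can be sketched as below. The exact templates of Fig. 6 are not reproduced in the text, so the standard 3 × 3 Sobel kernels are used here as an assumption; `atan2` replaces the plain arctangent of Eq. (6) so that the direction covers all four quadrants:

```python
import math

# Assumed stand-ins for the convolution templates of Fig. 6.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_at(img, x, y):
    """img is a 2D list of gray levels; (x, y) an interior pixel.
    Returns (G, alpha): gradient magnitude (Eq. 5) and gradient
    direction in radians (Eq. 6)."""
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return math.hypot(gx, gy), math.atan2(gy, gx)
```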

(2) According to G(x, y) and a given threshold T, determine the range along the gradient direction over which G(x, y) > T.

For a pixel-level edge point (x, y) with gradient direction α(x, y), when α < α(x, y) < β (in Fig. 7, α = π/2 + arctan(1/3) and β = −arctan(1/3)), the adjacent points (x − 1, y − 1) and (x + 1, y + 1) along the gradient direction and its reverse are examined; for those whose magnitude is greater than the threshold T, the distance components are obtained.

Fig. 7.

Points in the direction of grads

(3) Using the gradient components Gx and Gy as weights, the pixel-level edge positions are corrected. The correction formulas are:

$$ \Delta d_{x} = \sum\limits_{i = 1}^{n} {G_{{x_{i} }} } d_{{x_{i} }} /\sum\limits_{i = 1}^{n} {G_{{x_{i} }} } \;\;\Delta d_{y} = \sum\limits_{i = 1}^{n} {G_{{y_{i} }} } d_{{y_{i} }} /\sum\limits_{i = 1}^{n} {G_{{y_{i} }} } $$
(7)
  • dxi — X component of the distance between the i-th pixel along the gradient direction and the coarse edge point

  • dyi — Y component of the distance between the i-th pixel along the gradient direction and the coarse edge point

  • Gxi — gradient component in the X direction

  • Gyi — gradient component in the Y direction

  • n — number of pixels along the gradient direction for which G(x, y) > T.
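With the symbols defined above, the correction of Eq. (7) reduces to two gradient-weighted averages. The sketch below assumes the per-pixel samples (Gxi, dxi, Gyi, dyi) have already been collected along the gradient direction in step (2):

```python
def subpixel_correction(samples):
    """Gradient-weighted correction of Eq. (7). `samples` is a list of
    (Gxi, dxi, Gyi, dyi) tuples for the n pixels along the gradient
    direction with G(x, y) > T: gradient components and signed distance
    components relative to the coarse edge point. Returns (ddx, ddy),
    the sub-pixel shift to add to the coarse edge position."""
    sum_gx = sum(gx for gx, _, _, _ in samples)
    sum_gy = sum(gy for _, _, gy, _ in samples)
    ddx = sum(gx * dx for gx, dx, _, _ in samples) / sum_gx
    ddy = sum(gy * dy for _, _, gy, dy in samples) / sum_gy
    return ddx, ddy
```

Pixels with stronger gradients pull the edge point toward themselves, which is what shifts the coarse edge onto the true sub-pixel boundary.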

Figure 8 shows a marker edge of the template image after sub-pixel refinement. The edge describes the boundary between the marker and the background in the image, and the matching accuracy of the centroid obtained from this edge is further improved. Figure 8(a) shows the template edge after sub-pixel refinement; the edge at the lower left contains a concave region, corresponding to the position in Fig. 8(b) where the pixel gray level is lower than that of a normal fluorescent bead. This may be caused by a manufacturing defect of the fluorescent bead; the sub-pixel refinement algorithm identifies the defect and corrects the extracted edge.

Fig. 8.

Template edge after sub-pixel refinement. (a) Template edge after sub-pixel refinement, (b) Template edge before sub-pixel refinement

3 Measurement and Result Analysis of Motion Information of Maize Leaf

3.1 Measurement Accuracy Analysis

Experiments show that the measurement accuracy of a binocular stereo vision system is affected by many factors, such as the camera resolution, the object distance, the angle between the two cameras, and the calibration algorithm. It is difficult to determine the accuracy of a particular stereo vision system through theoretical derivation and calculation alone, so a special environment was designed to test the accuracy and reliability of the binocular stereo vision system. As shown in Fig. 9, a blackboard with markers is placed vertically on a horizontal displacement platform driven by a stepping motor; the binocular camera is placed horizontally facing the blackboard, and a ruler is arranged at the lower part of the displacement platform. Starting from the initial distance p between the binocular camera and the blackboard, the blackboard is moved to the preset positions p + 1 cm, p + 2 cm, p + 3 cm, p + 5 cm, p + 7 cm, and p + 9 cm.

Fig. 9.

Schematic of accuracy testing

The 3D coordinates of the marker points were calculated at each preset position. The results are shown in Table 1, where X, Y, and Z are the calculated 3D coordinates, the distance value is the distance from each preset point to the origin of the binocular camera coordinate system, and the relative distance is the distance from each preset point to the first preset point.

Table 1. Result of simulation computation

From the data in Table 1, 1 cm in the real world corresponds to 0.358102 units in the binocular camera coordinate system. The mean absolute error of the measured distances between adjacent points is 0.00498, corresponding to 0.0139 cm in the real world. With the camera about 50 cm from the markers at the initial position, the measurement error of the system within this range is thus 0.0139 cm, which meets the measurement accuracy required for monitoring maize leaf movement.

3.2 Corn Leaf Growth Monitoring

The tested material was the maize variety 335, sown on June 17, 2013, at a density of 60,000 plants/hm², with normal water and fertilizer management. Maize plants at the 13-leaf stage were selected for the growth monitoring experiment. At this stage the growth of maize leaves is very vigorous, the plant volume expands rapidly, and the dry weight increases dramatically. An unfolding visible leaf was chosen as the experimental material, the marker was pasted on the blade, and the system was used to monitor plant growth continuously; the displacement of the marker in three-dimensional space was calculated at 1 h intervals. The results are shown in Fig. 10.

Fig. 10.

The time course of leaf movement

3.3 Corn Leaf Movement Monitoring in One Day

At the silking stage, a leaf above the ear of a maize plant was chosen as the observation object, and windless conditions were selected as far as possible. Six marker points were uniformly arranged on the leaf from the upper part to the tip. Taking the marker positions at 8:00 as the initial state, the marker positions were recorded at 10:00, 14:00, and 16:00, and the distances of the 3D marker positions from the initial state were calculated. The results are shown in Fig. 11.

Fig. 11.

The leaf movement in one day

As can be seen from the figure, between 8:00 and 14:00 the three marker points near the blade tip show large displacements, reaching 0.579 cm at the tip point, while the 10:00 and 16:00 data show no obvious change of position. The reason may be that leaf transpiration reaches its maximum at midday, the leaf cells lose water, and the leaf as a whole droops downward, whereas at 10:00 and 16:00 the temperature is not high enough for transpiration to cause the leaf to sag. In the central region of the leaf the veins play a supporting role, especially in the lower part of the leaf, so the drooping caused by water loss is more pronounced at the blade tip.

4 Conclusions

A hardware structure and software algorithm for monitoring maize leaf movement were designed and implemented. Fluorescent beads were used as markers to simplify the image processing algorithm and to increase the 3D coordinate measurement accuracy of binocular stereo vision. Simulation experiments show that the measurement precision of the system can reach 0.0139 cm, and field experiments demonstrate the feasibility of the monitoring system for acquiring 3D displacement information of leaves, providing a data acquisition means for further analysis of how blade spatial attitude reflects crop growth and physiological changes.

Limited by the principle of binocular vision, if the morphology of the measured object changes greatly, for example when wind causes the blade to bend back and displace so that the markers disappear from the cameras' field of view, the measurement accuracy will be greatly affected.