1 Introduction

UAVs are among the most attractive yet vulnerable robot platforms, with potential applications in numerous scenarios such as geographic surveying, agricultural fertilization, exploration of dangerous or disaster regions, product delivery, and aerial photography. Safety is always a vital property in a UAV application; thus, researchers keep seeking better Sense and Avoid (SAA) techniques for UAVs. Classic UAVs use GPS or optic flow [12, 18] to navigate, and onboard distance sensors (ultrasonic, infrared, laser) or a cooperative system to avoid obstacles, as reviewed in [23]. However, these distance sensors depend heavily on the obstacles' materials and texture and on the complexity of the background, so they can only work in simple, structured environments [6]. Lidar and vision based methods are more diverse and more widely applicable. One popular vision based method is to detect and locate obstacles in a reconstructed map, mark the frontiers of the obstacles as banded fields in the map, and then use a dedicated pathfinding algorithm (e.g. a heuristic algorithm) to generate safe trajectories that avoid collision [1, 2, 16]. This pipeline is commonly realised with Simultaneous Localization and Mapping (SLAM), but its high computational burden makes it unsuitable for small or micro UAVs.

On the other hand, bio-inspired vision based collision detection methods stand out for their efficiency. For example, Optic Flow (OF) is a widely used vision based motion detection method inspired by biological mechanisms in flies and bees [21]. It has also been introduced into collision avoidance technology: Zufferey [30] applied a 1D OF sensor to a 30 g lightweight fixed wing UAV and achieved automatic obstacle avoidance in an indoor (GPS denied) structured environment, and later, in 2009 [5], the same group achieved autonomous avoidance of trees with seven OF sensors on a fixed wing platform. Griffiths [11] used optical-mouse (key-point matching) based OF sensors to fly through a canyon; besides the OF sensors, the platform also integrated a laser ranger for obstacles approached head-on. Serres [20] used a pair of EMD based OF sensors to avoid lateral obstacles with a hovercraft. Sabo [18] applied OF to quadcopters and repeated some benchmark experiments to analyse the behaviours of a honeybee-like flying robot; however, the algorithm was still computed off board. Stevens [22] achieved collision avoidance in cluttered 3D environments.

The Lobula Giant Movement Detector (LGMD) is another bio-inspired neural network, modelled on the locust's visual system, and it is especially good at detecting approaching obstacles and avoiding imminent collisions. Compared to Optic Flow, the LGMD is more specialised for detecting obstacles that approach head-on, while suppressing the redundant image differences caused by translating objects and backgrounds. The LGMD neuron and its presynaptic neural network have been modelled [17] and improved by many researchers [8, 9, 24]. As a collision detection model, the LGMD has been applied to mobile robots [4, 13], embedded systems [10, 14], a hexapod walking robot [7], a blimp [3] and cars [25, 26].

The basic LGMD model provides the threat level of collision over the whole field of view (FoV), which is not enough to choose a sensible avoidance behaviour; hence, early research generated random turn directions for mobile robots [13]. Shigang [27] divided the field of view into two bilateral halves and discussed both winner-take-all and steering-wheel networks in the direction control system of a mobile robot. Compared to mobile robots, a UAV has more degrees of freedom and is more vulnerable during flight. In the extremely limited literature on LGMD research on UAV platforms, Salt [19] implemented a neuromorphic LGMD model using recordings from a UAV platform and divided the FoV in half twice to obtain direction information, but no real-time flight was conducted. Our previous research demonstrated the applicability of the LGMD to real-time quadcopter flight and collision avoidance [28]. Previously, the quadcopter could only avoid obstacles by randomly turning left or right in the horizontal plane. To acquire information about the approach direction of imminent obstacles, this research proposes a new image partition strategy tailored to LGMD applications on UAVs, together with a corresponding steering method for 3D avoidance behaviour. Both video simulation and real-time flight demonstrate the performance of this method.

2 Model Description

2.1 LGMD Process

The LGMD processing algorithm used in this paper is inherited from our previous research [28]. The LGMD process is composed of five groups of cells: P-cells (photoreceptor), I-cells (inhibitory), E-cells (excitatory), S-cells (summing) and G-cells (grouping). Compared to the previous model, we add four competitive LGMD cells (C-LGMDs) representing the LGMD output of four sections: Left, Right, Up, and Down. The image is divided as shown in Fig. 1.

Fig. 1. A schematic illustration of the proposed LGMD based competitive neural network for collision detection. \([\#]\) denotes the inherited LGMD process as described in our previous research [28].

The first layer of the neural network is composed of P cells, arranged in a matrix, whose values are formed by the luminance change between adjacent frames. The output of a P cell is given by:

$$\begin{aligned} P_{f}(x,y)=L_f(x,y)-L_{f-1}(x,y) \end{aligned}$$
(1)

where \(P_f(x,y)\) is the luminance change of pixel (x, y) at frame f, and \(L_f(x,y)\) and \(L_{f-1}(x,y)\) are the luminance values at frame f and the previous frame.
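As a concrete illustration, Eq. (1) is a per-pixel frame difference. A minimal sketch, assuming 8-bit grey-scale frames stored as NumPy arrays:

```python
import numpy as np

def p_layer(L_curr: np.ndarray, L_prev: np.ndarray) -> np.ndarray:
    """Eq. (1): signed luminance change between two consecutive frames."""
    # Cast to a signed type first so that uint8 subtraction cannot wrap around.
    return L_curr.astype(np.int16) - L_prev.astype(np.int16)
```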

The output of the P cells forms the input of the next layer and is processed by two different types of cells: I (inhibitory) cells and E (excitatory) cells. The E cells pass the excitatory flow directly to the S layer, so each E cell has the same value as its counterpart in the P layer, while the I cells pass the inhibitory flow, convolved from the surrounding delayed excitations.

The I layer can be described as a convolution operation:

$$\begin{aligned}{}[I]_f=[P]_f\otimes [w]_I \end{aligned}$$
(2)

where \( [w]_I \) is the convolution mask representing the local inhibition weight distribution from the centre cell of the P layer to the neighbouring cells in the S layer; a neighbouring cell's local weight is the reciprocal of its distance from the centre cell. To cope with the fast image motion during UAV flight, \( [w]_I \) is set differently from that used on mobile robots [13]: the inhibition radius is expanded to 2 pixels:

$$\begin{aligned}{}[w]_I=0.25\begin{bmatrix} \frac{1}{\sqrt{8}}&\frac{1}{\sqrt{5}}&\frac{1}{2}&\frac{1}{\sqrt{5}}&\frac{1}{\sqrt{8}} \\ \frac{1}{\sqrt{5}}&\frac{1}{\sqrt{2}}&1&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{5}} \\ \frac{1}{2}&1&0&1&\frac{1}{2} \\ \frac{1}{\sqrt{5}}&\frac{1}{\sqrt{2}}&1&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{8}}&\frac{1}{\sqrt{5}}&\frac{1}{2}&\frac{1}{\sqrt{5}}&\frac{1}{\sqrt{8}} \end{bmatrix} \end{aligned}$$
(3)

The next layer is the Sum (S) layer, where the excitation and inhibition from the E and I layers are combined by linear subtraction. After summation, the Group (G) layer is applied to reduce the noise caused by sporadic image changes or backgrounds. Detailed equations and parameters can be found in our previous work [28].
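To make the E/I/S stage concrete, the sketch below convolves the P layer with the mask of Eq. (3) and combines excitation and inhibition by linear subtraction; the inhibition coefficient `w_i` is a hypothetical placeholder, since the actual parameters are given in [28].

```python
import numpy as np
from scipy.ndimage import convolve

# Eq. (3): 5x5 inhibition mask; each neighbour's weight is the reciprocal
# of its distance to the centre cell, and the whole mask is scaled by 0.25.
W_I = 0.25 * np.array([
    [1/np.sqrt(8), 1/np.sqrt(5), 1/2, 1/np.sqrt(5), 1/np.sqrt(8)],
    [1/np.sqrt(5), 1/np.sqrt(2), 1.0, 1/np.sqrt(2), 1/np.sqrt(5)],
    [1/2,          1.0,          0.0, 1.0,          1/2         ],
    [1/np.sqrt(5), 1/np.sqrt(2), 1.0, 1/np.sqrt(2), 1/np.sqrt(5)],
    [1/np.sqrt(8), 1/np.sqrt(5), 1/2, 1/np.sqrt(5), 1/np.sqrt(8)],
])

def s_layer(P: np.ndarray, w_i: float = 0.4) -> np.ndarray:
    """Eq. (2) and the E-I subtraction; w_i is an illustrative coefficient."""
    I = convolve(P.astype(np.float64), W_I, mode='constant')  # Eq. (2)
    E = P.astype(np.float64)   # E cells copy their P-layer counterparts
    return E - w_i * I         # S layer: linear subtraction
```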

After the G layer, the unnormalized membrane potentials of the four C-LGMDs are calculated respectively:

$$\begin{aligned}&U_{LGMD0}=\sum _{x}\sum _{y\leqslant \min (Diag1,Diag2)}|\widetilde{G}_f(x,y)| \end{aligned}$$
(4)
$$\begin{aligned}&D_{LGMD0}=\sum _{x}\sum _{y\geqslant \max (Diag1,Diag2)}|\widetilde{G}_f(x,y)| \end{aligned}$$
(5)
$$\begin{aligned}&L_{LGMD0}=\sum _{x}\sum _{Diag1\leqslant y\leqslant Diag2}|\widetilde{G}_f(x,y)| \end{aligned}$$
(6)
$$\begin{aligned}&R_{LGMD0}=\sum _{x}\sum _{Diag2\leqslant y\leqslant Diag1}|\widetilde{G}_f(x,y)| \end{aligned}$$
(7)

where Diag1 and Diag2 denote the y-coordinates of the two image diagonals at column x, and \(\widetilde{G}_f(x,y)\) is the cell value of the G layer, as illustrated in Fig. 2. For more details about the process from \(P_{f}(x,y)\) to \(\widetilde{G}_f(x,y)\), please refer to our previous work [28].
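A minimal sketch of this partition, assuming the two diagonals run exactly corner to corner (pixels lying on a diagonal may fall into two regions in this simplified version):

```python
import numpy as np

def c_lgmd_sums(G: np.ndarray):
    """Eqs. (4)-(7): sum |G| over the four triangular regions (Up, Down,
    Left, Right) obtained by cutting the frame along its two diagonals."""
    H, W = G.shape
    absG = np.abs(G)
    y = np.arange(H)[:, None]                 # row (y) coordinate grid
    x = np.arange(W)[None, :]                 # column (x) coordinate grid
    diag1 = x * (H - 1) / max(W - 1, 1)       # top-left to bottom-right
    diag2 = (H - 1) - diag1                   # bottom-left to top-right
    U = absG[y <= np.minimum(diag1, diag2)].sum()   # Eq. (4)
    D = absG[y >= np.maximum(diag1, diag2)].sum()   # Eq. (5)
    L = absG[(y >= diag1) & (y <= diag2)].sum()     # Eq. (6)
    R = absG[(y >= diag2) & (y <= diag1)].sum()     # Eq. (7)
    return U, D, L, R
```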

Fig. 2. Image dividing method. The image scene is split along the two diagonals.

Previously, the membrane potential of the LGMD cell, \(K_{f0}\), was the summation over every pixel in the G layer:

$$\begin{aligned} K_{f0}=\sum _{x}^{}\sum _{y}^{}|\widetilde{G}_f(x,y)| \end{aligned}$$
(8)

Now it also equals the summation of the four unnormalized C-LGMD potentials:

$$\begin{aligned} K_{f0}=U_{LGMD0}+D_{LGMD0}+L_{LGMD0}+R_{LGMD0} \end{aligned}$$
(9)

and then \(K_{f0}\) is adjusted into the range (0, 255) by a sigmoid-like equation:

$$\begin{aligned} \kappa _f=\frac{\text {tanh}(\sqrt{K_{f0}}-n_{cell}C_1 )}{n_{cell}C_2} \times 255 \end{aligned}$$
(10)

where \(C_1\) and \(C_2\) are constants shaping the normalizing function so that the excitation \(\kappa _f\) stays within [0, 255], and \(n_{cell}\) is the total number of pixels in one frame. The membrane potentials of the four C-LGMDs are likewise limited to (0, 255) by taking their proportions of \(K_{f0}\), instead of being passed through the sigmoid function a second time:

$$\begin{aligned}&U_{LGMD}=\frac{U_{LGMD0}}{K_{f0}}\times \kappa _f \end{aligned}$$
(11)
$$\begin{aligned}&D_{LGMD}=\frac{D_{LGMD0}}{K_{f0}}\times \kappa _f \end{aligned}$$
(12)
$$\begin{aligned}&L_{LGMD}=\frac{L_{LGMD0}}{K_{f0}}\times \kappa _f \end{aligned}$$
(13)
$$\begin{aligned}&R_{LGMD}=\frac{R_{LGMD0}}{K_{f0}}\times \kappa _f \end{aligned}$$
(14)
Algorithm 1. The process from the DCMD to the PID based motor control system (pseudocode).

If \(\kappa _f\) exceeds its threshold, then an LGMD spike is produced:

$$\begin{aligned} S_f^{spike}={\left\{ \begin{array}{ll} 1, &{} \text{ if } \kappa _f \geqslant T_s \\ 0, &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$
(15)

An impending collision is confirmed if spikes occur in no fewer than \(n_{sp}\) consecutive frames:

$$\begin{aligned} C_f^{collision}={\left\{ \begin{array}{ll} 1, &{} \text{ if } \sum _{i=f-n_{sp}+1}^{f}S_i^{spike} \geqslant n_{sp} \\ 0, &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$
(16)
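Eqs. (15)-(16) amount to thresholding followed by a consecutive-spike counter. A small sketch, with `T_s` and `n_sp` as illustrative values rather than the tuned ones:

```python
class CollisionDetector:
    """Eq. (15): spike when kappa_f >= T_s; Eq. (16): confirm a collision
    once spikes have occurred in n_sp consecutive frames."""
    def __init__(self, T_s: float = 200.0, n_sp: int = 4):
        self.T_s, self.n_sp = T_s, n_sp
        self.run = 0   # length of the current run of consecutive spikes

    def update(self, kappa_f: float) -> bool:
        self.run = self.run + 1 if kappa_f >= self.T_s else 0  # Eq. (15)
        return self.run >= self.n_sp                           # Eq. (16)
```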

Then, based on the result of the competitive C-LGMDs, the DCMD switches to the corresponding escape command, which is sent through the USART interface to the flight control system. The process from the DCMD to the PID based motor control system is shown in pseudocode in Algorithm 1.
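Algorithm 1 itself is not reproduced here; the following is a hypothetical sketch of how the winning C-LGMD could be mapped to an escape command. The command names and the opposite-direction mapping are assumptions for illustration, not the paper's exact encoding:

```python
def escape_command(U: float, D: float, L: float, R: float) -> str:
    """Pick the escape direction opposite to the strongest C-LGMD response."""
    responses = {'ESCAPE_DOWN': U,    # threat from above  -> descend
                 'ESCAPE_UP': D,      # threat from below  -> climb
                 'ESCAPE_RIGHT': L,   # threat from left   -> move right
                 'ESCAPE_LEFT': R}    # threat from right  -> move left
    return max(responses, key=responses.get)  # sent to the flight controller over USART
```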

3 System Overview

In this section, the outline of the whole system is described. The system, composed of the quadcopter, the embedded LGMD detector, the ground station, the remote controller and auxiliary sensors, is depicted in Fig. 3. Luminance information is collected by the camera on the detector board and fed into the LGMD algorithm; the output command is passed through a USART port to the flight controller, which executes the avoidance tasks.

3.1 Quadcopter Platform

The UAV platform used in this research is a customized quadcopter with a skeleton size of 33 cm between diagonal rotors. The flight control module is based on an STM32F407V and provides 5 USART interfaces for extra peripherals. Multiple sensors are applied for data collection and to enhance the stability of the quadcopter, including an IMU (Inertial Measurement Unit), an ultrasonic sensor, an optic flow sensor and the LGMD detector, as illustrated in Fig. 3. The Pix4Flow optic flow module [12] is employed for position and velocity feedback in the horizontal plane. The flight control module works as the central controller that binds the other parts together. It receives source data from the embedded IMU module (MPU6050), the Pix4Flow optic flow sensor and the LGMD detector, calculates the PWM (Pulse-Width Modulation) values output to the four motors, and sends real-time data back for analysis through the nRF24L01 module.

Fig. 3. The structure of the quadcopter platform.

4 Experiments and Results

To verify the performance of the proposed algorithm, both video simulations and real-time arena flights are conducted.

4.1 Video Simulation

The algorithm is first implemented in MATLAB and tested on a series of recorded videos to verify whether it can distinguish stimuli from different directions. The results in Fig. 4 indicate that the new network responds differently to objects approaching from different directions.

Fig. 4. Simulation results with snapshots. (a), (b), (c), (d) are the membrane potentials of the C-LGMDs for stimuli from above, below, the left and the right, respectively.

4.2 Hovering and Feature Analysis

To further analyze the performance on the quadcopter platform, we ported the algorithm to the embedded LGMD detector, mounted the detector on the quadcopter, and stimulated the detector with test patterns while the quadcopter hovered in the air. An object was manually pushed towards the detector from each of the four directions, with 10 repetitions per direction. Figure 5 shows an example trial scene, in which the object is pushed towards the detector from the left. According to the results in Fig. 6, the four competitive LGMDs distinguished the approach direction of the object accurately. In all four types of trials, when the LGMD exceeded its threshold, the C-LGMD indicating the main direction led the average values of the others, even at its lowest performance (the lower boundary of the shaded region).

Fig. 5. Hovering experiment scene.

Fig. 6. Average membrane potentials during the hovering tests. (a), (b), (c), (d) show the average membrane potentials of the four competitive LGMD neurons across trials. The shaded region is the continuous error of the C-LGMD of the main direction.

4.3 Arena Real-Time Flight

Finally, real-time flight and obstacle avoidance experiments are conducted to test the performance and robustness of the proposed directional obstacle avoidance method. Trials covering the four approach directions are set up in two configurations: obstacles to the left and right of the UAV's route, or above and below it. The quadcopter is first challenged by a static obstacle and then by a dynamic intruder. The results show that the system is able to select sensible escape behaviours based on the approach direction of the obstacle. The trajectories of these trials were extracted by a Python program using background subtraction [29] and template matching [15], and then overlaid on a screenshot from the recorded video, as shown in Fig. 7.
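A sketch of such a trajectory extractor with OpenCV, assuming `template` is a cropped image of the quadcopter; the confidence threshold is an illustrative value:

```python
import cv2
import numpy as np

def extract_trajectory(video_path: str, template: np.ndarray) -> list:
    """Locate the quadcopter in each frame via background subtraction [29]
    followed by template matching [15]; return the matched positions."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    points = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                    # keep moving pixels
        moving = cv2.bitwise_and(frame, frame, mask=mask)
        score = cv2.matchTemplate(moving, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(score)
        if max_val > 0.5:                                 # confidence gate
            points.append(max_loc)                        # top-left of match
    cap.release()
    return points
```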

Fig. 7. Real-time obstacle avoidance tests.

5 Conclusion

To conclude, a novel competitive LGMD model and a corresponding UAV control algorithm are proposed to address practical problems met in LGMD applications on UAVs. Both simulation and real-time flight experiments were conducted to analyze the proposed method, and the results showed high robustness. Based on the proposed competitive LGMD, real-time 3D collision avoidance by a quadcopter is achieved in an indoor environment. In future work, fully autonomous flight in a larger arena should be undertaken to probe the limits of this new method.