
Distribution Line Pole Detection and Counting Based on YOLO Using UAV Inspection Line Video

  • Binghuang Chen
  • Xiren Miao
Open Access
Original Article

Abstract

To improve the efficiency of post-disaster response in the power distribution network, the application of UAVs to disaster reduction and relief has attracted much attention in the power sector. Aiming at the loss-assessment needs of overhead distribution lines, this paper proposes a solution for pole detection and counting based on UAV inspection video. Exploiting the rapid detection capability of YOLO, a convolutional neural network is applied to image-based detection of pole state. In addition, pole statistics and the corresponding images are obtained while the inspection video is being analyzed, so the power department can quickly count losses and respond to the disaster. The anchor values are recalculated before training YOLO v3, and a corresponding ROI is set according to the UAV line-inspection standard. To quickly assess post-disaster pole lodging, a counting algorithm is proposed that uses the continuous change of the ordinate of the bounding box of the same pole across consecutive video frames, so that poles are counted accurately by class, with detection precision above 0.9. Results on test videos show that this method is effective for detecting and counting the states of overhead distribution line poles.

Keywords

Image processing · YOLO · Distribution line · Object detection

1 Introduction

In recent years, UAV (unmanned aerial vehicle) technology has been widely applied in railways, transportation and logistics, agricultural plant protection, petrochemicals and other industries [1]. Meanwhile, the power industry has also been exploring UAV inspection technology [2]. State Grid has been exploring "collaborative inspection with helicopter, UAV and manpower" since 2011, and UAV inspections have been carried out since 2016 [3].

In general, the operation and maintenance efficiency and technical level of distribution network equipment are insufficient, so routine inspection remains the main means of monitoring equipment state [4]. Maintenance personnel typically rely on watching, taking notes and taking photos during inspection [5]. Especially in mountainous or riverside areas, accidents can easily occur, the efficiency and quality of inspection cannot be guaranteed, and better inspection methods are urgently needed. State Grid also uses helicopters to inspect the main network for efficiency, but the high cost of helicopter inspection burdens distribution enterprises, and the approval process of the air traffic control department is relatively cumbersome. With the continuous development of UAV technology, applying UAVs to distribution line inspection can further improve inspection efficiency and the automation level of management.

Post-disaster emergency inspection by UAV is another important part of power inspection besides routine inspection. In recent years, UAVs have demonstrated outstanding value in emergency response after natural disasters [6], obtaining images of disaster areas and supporting three-dimensional visualization of topography and landform [7]. UAVs are mostly used in earthquakes, debris flows, landslides, fires, ice disasters and other events that cause obvious changes in landform. However, owing to the particularities of power inspection, UAVs are still rarely used for emergency inspection and disaster evaluation in power systems.

In this paper, to achieve real-time distribution line pole detection, YOLO v3 [8] is applied to UAV video object detection. To realize automatic detection and intelligent diagnosis of post-disaster distribution network losses, the key prerequisite is to accurately detect distribution line poles in UAV video, enabling subsequent tasks such as post-disaster diagnosis, tracking and data management. More importantly, the anchor values of YOLO v3 are modified to detect two pole states (upright and fallen) effectively. Furthermore, an ROI (region of interest) is set and a pole counting algorithm is proposed to improve counting accuracy. Tests comparing different neural network models show that this method is faster, more robust and more accurate.

The rest of this paper is structured as follows. Section 2 introduces the application of UAVs in routine and post-disaster inspection, as well as related work in image object detection. Section 3 presents the distribution line pole detection and counting algorithm. Section 4 contains the results and analysis of experiments. Finally, conclusions and future work are presented in Sect. 5.

2 Related Work

Researchers have already applied computer vision to detect power towers, insulators and power lines in aerial images [9]. Wang and Zhang [10] and Zhao et al. [11] used support vector machine (SVM) classifiers for insulator detection and analysis. Gubbi et al. [12] used a convolutional neural network (CNN) for power line detection. For power tower extraction, various image segmentation methods are mainly used [13]. There are also applications that use a multi-layer perceptron neural network to classify four types of towers and backgrounds, detecting towers in the image and tracking them [14]. However, problems remain, such as the lack of navigation points and immature image mosaic technology, and applications of power UAVs in post-disaster emergency inspection are still rare. To our knowledge, no existing application detects the state of towers or poles after disasters or statistically counts the numbers of upright and fallen towers.

In the field of image object detection, Girshick [15] and Ren et al. [16] respectively put forward the fast region-based convolutional neural network (Fast R-CNN) and the faster region-based convolutional neural network (Faster R-CNN), which increased detection speed and also improved accuracy. However, these algorithms are still slow (about 5 frames/s), so they are not suitable for real-time video detection. YOLO (You Only Look Once) v1 [17], v2 [18] and v3 improve the speed of image object detection (up to 78 frames/s) while continually evolving the network structure, so detection accuracy also improves steadily, achieving the current best balance between accuracy and speed. At the same mAP (mean average precision), YOLO v3 is 3.8 times faster than RetinaNet [8], which is an advantage for real-time UAV video detection of distribution lines.

3 Pole Detection and Counting

As described in Sect. 1, distribution line poles can be knocked down by disasters, threatening the safe and stable operation of the power system. A rapid and accurate assessment of disaster losses provides a scientific basis for government departments to make disaster relief decisions and formulate emergency response measures. In this paper, UAVs are used for emergency inspection of distribution lines; to evaluate disaster losses effectively and recover as soon as possible, pole state recognition and counting are performed on the images and video acquired by the UAV.

3.1 YOLO v3

YOLO stands for You Only Look Once. It is an object detector that uses features learned by a deep convolutional neural network to detect objects, and it has three versions. YOLO v3 uses the new backbone network Darknet-53, whose structure is shown in Fig. 1 (for a 416 × 416 input). Darknet-53 builds on YOLO v2's Darknet-19 and on ResNet: it uses many well-behaved 3 × 3 and 1 × 1 convolutional layers together with shortcut connections, and ends up with 53 convolutional layers, hence the name Darknet-53.
Fig. 1

Network structure of YOLO v3

First, YOLO v3 extracts features from the input image through the network, producing feature maps of a certain size, such as 13 × 13. The input image is then divided into a 13 × 13 grid of cells. If the center of an object's ground-truth box falls in a grid cell, that cell predicts the object. Each grid cell predicts 3 bounding boxes, and predictions are made at three feature-map scales (13 × 13, 26 × 26, 52 × 52), as shown in Fig. 1. The objectness prediction estimates the IOU (intersection over union) of the ground truth and the proposed box, and the class predictions give the probability of each class given that there is an object. The output feature map has two spatial dimensions (width and height), such as 13 × 13, and a third dimension (depth) of
$$ B \times (5 + C), $$
(1)
In Eq. (1), B is the number of bounding boxes predicted at each grid cell, C is the number of classes, and 5 covers the 4 box coordinates plus 1 objectness score.
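As a quick check of Eq. (1), the minimal Python sketch below (our own illustration, not the paper's code) computes the output depth for the two-class pole detector trained later in Sect. 4.2:

```python
def output_depth(num_boxes: int, num_classes: int) -> int:
    """Depth of each YOLO v3 output cell: B * (4 box coords + 1 objectness + C class scores)."""
    return num_boxes * (5 + num_classes)

# Two classes (upright pole, fallen pole), 3 boxes per grid cell:
print(output_depth(3, 2))  # 21, matching the "filters" value used in Sect. 4.2
```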

YOLO v3 adopts a multi-scale strategy like SSD: feature maps at 3 scales (13 × 13, 26 × 26, 52 × 52) are used for prediction, making the prediction more robust. The anchor boxes are also obtained by clustering: k-means is used to determine the bounding box priors, with nine clusters selected and distributed uniformly over the 3 scales.

3.2 Improvements

Although YOLO v3 achieves excellent detection results, it is not entirely suitable for detecting distribution line poles in UAV video. Therefore, YOLO v3 is improved for this specific problem. Based on the YOLO v3 network, the following improvements are made:

  1. Dimension clustering is carried out on the bounding boxes of the data set to determine the anchors. The anchors of the original YOLO v3 are obtained by clustering on the COCO (Common Objects in Context) dataset, an image dataset maintained by Microsoft with more than 300,000 images and more than 80 object categories. However, COCO does not contain the two categories of upright pole and fallen pole, so the original anchors are unsuitable for distribution line pole detection and must be recalculated.

  2. At the beginning of the test, an ROI is determined according to the characteristics of UAV inspection of distribution line poles, so that YOLO v3 detects only the corresponding area, improving the detection effect.

  3. A counting algorithm is added for the distribution line poles detected in video.
3.3 Anchors Calculation

YOLO borrows the idea of anchors from Faster R-CNN. It clusters the manually labeled bounding boxes of the data set with k-means to find their statistical regularities, taking the cluster number k as the number of anchors and the width and height of the k cluster-center boxes as the anchor dimensions. To improve IOU, YOLO v3 sets the number of anchors to 9 and computes them on the COCO dataset. However, our dataset and classes are completely different from COCO, so to improve detection the anchors must be recalculated, as shown in Table 1 (a clustering sketch follows the table). The width and height values are the actual pixel dimensions of the bounding boxes at the nine cluster centers. The widths and heights of the COCO anchors generally differ little from each other, while our anchors differ greatly in width and height, matching the tall, slender geometry of the pole.
Table 1

The anchors of different data sets (width × height, in pixels)

COCO: 10 × 13, 16 × 30, 33 × 23, 30 × 61, 62 × 45, 59 × 119, 116 × 90, 156 × 198, 373 × 326

Ours: 23 × 52, 26 × 115, 14 × 276, 25 × 169, 25 × 224, 120 × 61, 32 × 344, 114 × 132, 96 × 191
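As a rough illustration of how anchors such as those in Table 1 can be recomputed, the following Python sketch implements k-means with a 1 − IoU distance over labeled (width, height) pairs. This is our own reconstruction under stated assumptions: YOLO v2/v3 determine priors this way, but the function names, iteration cap and seed are ours, not from the paper.

```python
import numpy as np

def iou_wh(boxes, centers):
    """Pairwise IoU of (w, h) pairs, treating all boxes as anchored at the origin."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    areas = boxes[:, 0] * boxes[:, 1]
    union = areas[:, None] + centers[:, 0] * centers[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """k-means with distance 1 - IoU; boxes is an (N, 2) array of (w, h) in pixels."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)  # highest IoU = closest center
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]  # sort by area, smallest scale first
```

The nine sorted anchors would then be split three per detection scale, as described in Sect. 3.1.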

3.4 ROI

In practical applications, the UAV inspects the distribution line mainly by flying above or to the side of the line, so the poles are usually near the middle of the video image, and poles at the image edge are not the focus. Accordingly, the ROI is placed in the middle of the image, which is conducive to YOLO v3 detection and counting, as shown in Fig. 2. For a 1920 × 1080 image, the ROI is defined by Eq. (2).
Fig. 2

The setting of ROI

$$ 70 < {\text{X\_center}} < 1850, \qquad 400 < {\text{Y\_center}} < 600 $$
(2)
In Eq. (2), X_center and Y_center are the x and y coordinates of the center of a bounding box in the image. As seen in Fig. 2, the y center of box 1 falls inside this ROI, while the y center of box 2 does not. Since input images come from diverse sources, we express the bounds as proportions of the image size (0.03 and 0.97 of the width, 0.37 and 0.56 of the height) so that the ROI matches images of different resolutions. Eq. (2) is thus modified to Eq. (3).
$$ 0.03 < {\text{X\_center}} < 0.97, \qquad 0.37 < {\text{Y\_center}} < 0.56 $$
(3)
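The ROI test of Eq. (3) reduces to a simple membership check on normalized center coordinates. A minimal sketch (our illustration; the function name is ours):

```python
def in_roi(x_center: float, y_center: float, img_w: int, img_h: int) -> bool:
    """True if a detection's center lies in the counting ROI of Eq. (3).

    x_center and y_center are in pixels; the bounds are the resolution-independent
    proportions from the paper (0.03-0.97 of width, 0.37-0.56 of height).
    """
    return (0.03 < x_center / img_w < 0.97) and (0.37 < y_center / img_h < 0.56)

# For a 1920 x 1080 frame this roughly reproduces the pixel bounds of Eq. (2):
# 0.03 * 1920 = 57.6 and 0.97 * 1920 = 1862.4 (vs. 70 and 1850),
# 0.37 * 1080 = 399.6 and 0.56 * 1080 = 604.8 (vs. 400 and 600).
```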

3.5 Counting Algorithm

Counting objects in a single image is relatively simple: classify the objects identified by YOLO v3 and count the boxes of each class. Video is different. Although each frame is an image, the target to be detected is a specific object persisting across frames. Moreover, video shot by a drone differs from video shot by a fixed camera: a fixed camera can perceive the motion of an object with simple methods, whereas a distribution line pole does not move at all; when the drone moves, everything in its view appears to move. Algorithms that track a particular moving object are therefore not suitable for this situation.

Given the characteristics of UAV video, we set the ROI so that only targets in this area are counted. Consecutive video frames are detected continuously by YOLO v3, so the coordinates of a pole's bounding box also change continuously. The drone flies slowly along the distribution line while inspecting; in the video, a pole remains visible for about 1 s from appearance to disappearance. The distribution line is basically in the middle of the image, and the change of the pole's y-axis value over time is obvious. Therefore, by comparing the y_center of bounding boxes in consecutive frames, we can determine whether two detections belong to the same pole, as shown in Fig. 3.
Fig. 3

The counting algorithm flow chart

Owing to the flight speed (< 5 m/s) and the distance between poles (about 50 m), the simultaneous appearance of three poles can be avoided in the UAV video. Even if more than 3 poles do appear at the same time, the counting algorithm of Fig. 3 ignores such frames and does not count them.

When two poles appear simultaneously in the video, there are two cases. If the first consecutive frames of the video already contain two poles, as in Fig. 2, only one pole is counted, because the latter pole will reappear and be counted in subsequent frames. If two poles appear together in the middle of the video, the counting algorithm counts only once. At the end of the video there is at most one pole, ensuring an accurate count. Each pole identified in the video is saved as an image for later checking. A minimal sketch of this frame-to-frame matching follows.
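The sketch below is our own reconstruction of the counting idea from the description above and Fig. 3, not the paper's code. The tolerance `max_shift` and the class names "upright"/"fallen" are assumed placeholders, and the special handling of two poles in the opening frames is omitted for brevity.

```python
class PoleCounter:
    """Count poles per class by tracking y_center continuity across consecutive frames."""

    def __init__(self, max_shift: float = 0.05):
        self.max_shift = max_shift          # assumed: max normalized y_center jump per frame
        self.prev = []                      # (class_name, y_center) of poles in previous frame
        self.counts = {"upright": 0, "fallen": 0}

    def update(self, detections):
        """detections: list of (class_name, y_center), already filtered by the ROI of Eq. (3)."""
        if len(detections) > 3:
            return                          # implausible frame, ignored (see Fig. 3)
        for cls, y in detections:
            same = any(p_cls == cls and abs(p_y - y) < self.max_shift
                       for p_cls, p_y in self.prev)
            if not same:                    # no continuous match: a new pole entered the ROI
                self.counts[cls] += 1
        self.prev = detections
```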

4 Experiments

4.1 Environments

In China, UAV image data of overhead distribution poles is very scarce owing to the lack of prior research, and no such dataset was found in other countries, so we had to collect the data ourselves. We flew a drone along several distribution lines and obtained 1080p high-definition videos (MPEG-4 AVC format), which were then converted into sequences of 1920 × 1080 images. These images alone are not enough, because fallen poles are hard to find in reality, appearing only after severe events (typhoons, mudslides, earthquakes, etc.). We therefore simulated many pole images against complex backgrounds in a virtual environment, where upright and fallen poles also take various shapes. In total there are 13,429 images (JPG format), of which 11,951 are for training and 1478 for testing, as shown in Figs. 4 and 5. All models were trained and tested on a single Nvidia Quadro M4000 (8 GB RAM).
Fig. 4

Actual environment picture

Fig. 5

Simulation environment picture

4.2 Training

Since there is no pre-trained model for poles, more training time is needed. The key YOLO v3 training parameter is "filters", computed according to Eq. (1): in our experiments B = 3 and C = 2, so "filters" is 21.

The other training parameters mainly include the learning rate, momentum and decay. In this experiment, momentum is 0.9 and decay is 0.0005. The learning-rate policy is step: the learning rate is decreased to 0.0001 at iteration 20,000, which accelerates training. The loss curve of YOLO v3-416 × 416 is shown in Fig. 6; the final average loss is 0.122295. Since the input resolution influences detection accuracy [8], we trained five models with different input sizes: YOLO v3-288 × 288, 352 × 352, 416 × 416, 480 × 480 and 544 × 544.
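The schedule can be summarized as the sketch below. This is our reconstruction: the initial learning rate of 0.001 is an assumed value, since the paper states only the decayed rate of 0.0001.

```python
# Key training hyper-parameters described above; BASE_LR is our assumption.
MOMENTUM = 0.9
DECAY = 0.0005
FILTERS = 3 * (5 + 2)  # = 21, from Eq. (1) with B = 3 boxes and C = 2 classes
BASE_LR = 0.001        # assumed initial learning rate, not stated in the paper

def step_lr(iteration: int) -> float:
    """Step learning-rate policy: decrease to 0.0001 at iteration 20,000."""
    return BASE_LR if iteration < 20000 else 0.0001
```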
Fig. 6

The loss curve

4.3 Detection Analysis and Comparison

In this experiment, the precision and recall of each pole class are calculated separately. To evaluate the improved YOLO v3, Faster RCNN is also applied to the same data set. For the analysis, the threshold t is set to 0.25 to ensure high precision. Table 2 shows the precision and recall of the five YOLO models and Faster RCNN on the dataset; the best results in all tables are shown in bold.
Table 2

The results of 6 models on the dataset (t = 0.25)

Model              Precision  Recall  F1-score  mAP (%)  Upright pole AP (%)  Fallen pole AP (%)
YOLOv3 288 × 288   0.89       0.91    0.90      89.90    89.71                90.09
YOLOv3 352 × 352   0.90       0.92    0.91      90.45    90.09                90.81
YOLOv3 416 × 416   0.90       0.92    0.91      90.36    89.92                90.81
YOLOv3 480 × 480   0.90       0.91    0.90      90.23    89.71                90.74
YOLOv3 544 × 544   0.88       0.89    0.89      89.99    89.21                90.77
Faster RCNN        0.75       0.67    0.71      71.99    60.74                83.24

From Table 2, the F1-score of the YOLO v3-352 × 352 model is the best (0.91), better than that of Faster RCNN (0.71). The mAP of Faster RCNN is 71.99%, lower than every YOLO v3 model. The precision and recall of the YOLO v3-416 × 416 model are second best. Figure 7 shows the precision-recall curves of YOLO v3 and Faster RCNN. As seen from Table 2 and Fig. 7, the YOLO v3-352 × 352 model can provide quality assurance for accurate assessment of power grid post-disaster losses.
Fig. 7

The precision-recall curves of 6 models

We also tested the impact of different thresholds on the precision and recall of the YOLO v3-352 × 352 model, as shown in Table 3. Precision grows as the threshold increases, while recall shrinks. Judging model quality by precision or recall alone is not enough; to balance the two, the F1-score is used, as shown in Eq. (4). The higher the F1-score, the more robust the classification model. Table 3 shows that at t = 0.6 the 352 × 352 model is very robust.
Table 3

The values of precision, recall and F1-score at different thresholds

Threshold (t)  0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
Precision      0.85  0.89  0.91  0.93  0.94  0.95  0.97  0.98  0.99
Recall         0.94  0.92  0.92  0.91  0.90  0.90  0.88  0.85  0.81
F1-score       0.89  0.91  0.92  0.92  0.93  0.93  0.92  0.91  0.89

$$ F1 = \frac{2}{{\frac{1}{precision} + \frac{1}{recall}}} $$
(4)
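As a worked example, substituting the rounded values at t = 0.6 from Table 3 into Eq. (4) gives
$$ F1 = \frac{2}{\frac{1}{0.95} + \frac{1}{0.90}} = \frac{2 \times 0.95 \times 0.90}{0.95 + 0.90} \approx 0.92, $$
consistent with the 0.93 reported in Table 3 up to the rounding of precision and recall.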
ROC curves of the two classes (t = 0.6) are shown in Fig. 8. The AUC for the upright pole class using the YOLO v3-352 × 352 model is 0.9629, the highest of the 6 models. Although its AUC for the fallen pole class, 0.9867, ranks only fourth, considering both classes together the YOLO v3-352 × 352 model is more robust than the other models. The AUC values of Faster RCNN, 0.8708 and 0.9409, are the lowest of the 6 models.
Fig. 8

The ROC curves of 6 models

However, some poles are difficult to detect, as shown in Fig. 9. Figure 9a shows an upright pole and a nearby tree both being detected, although the tree should not be; this is mainly because the cement pole, the tree trunk and the cement wall are close in color. Figure 9b shows a fallen pole that is not detected because it is close to the background color.
Fig. 9

Examples that are difficult to detect

4.4 Video Test Analysis

The key application in this paper is video analysis and pole counting. Martinez et al. [14] identify and track power line towers, but achieve only an 87% detection rate with relatively complex algorithms and cannot count. The method proposed here can identify and count distribution poles easily and quickly, so that a timely and accurate assessment can be made when a power grid disaster occurs and the disaster level can be determined, providing a scientific basis for emergency decision-making. Because YOLO v3 works well in real-time video analysis, it is used to test six 1920 × 1080 HD videos (MPEG-4 AVC format) from real UAV distribution line inspections. The videos contain many upright poles against different kinds of backgrounds, which makes them challenging from the detection and counting point of view. Each video test (the longest video is 5 min) completed in less than 15 min in our experiment, whereas Faster RCNN on the same platform needed more than 720 min, far too slow for post-disaster distribution network inspection with high real-time requirements.

The analysis of the results is based on visual examination and the counting algorithm. Figure 10 shows the result for one of the videos, covering the inspection of 4 upright poles (2531 frames). The main features of this video are varied backgrounds (farmland and houses), image dithering, and poles partly out of view. Figure 10 presents a set of images illustrating the performance of the proposed pole detection and counting algorithm applied to each frame. The coordinate graph shows the y center coordinate of the detected pole bounding boxes, and these coordinates can be seen to change continuously. In frame 2482, a pole in the center is not detected because it is made of iron; although a pole is detected on the left of that frame, it is not counted because its y coordinate is not in the ROI. On the other hand, there are some false positives between frames 1800 and 1900; analysis of these frames shows that tree trunks were misidentified as poles. Despite these small errors, the proposed detection and counting strategy has in general proved suitable for monitoring pole status in UAV inspection. We also tried a Kalman filter tracking algorithm on the video: although it can track poles, its counts are not accurate and it cannot classify the tracked targets. Moreover, the Kalman filter algorithm is relatively complex, whereas our counting algorithm is simple, fast and practical, which favors rapid assessment of power grid losses after disasters.
Fig. 10

The result of video test

5 Conclusion

Distribution line poles are essential infrastructure in the power transmission process. To quickly find distribution pole problems in autonomous UAV inspection after disasters, this paper proposes a neural-network-based detection and counting method for distribution poles in UAV video. The YOLO v3 model is used to detect and classify distribution line poles in two states: upright and fallen. First, to quickly determine the numbers of upright and fallen poles, the anchors of YOLO v3 are recalculated before training. Second, to count poles accurately, an ROI is set and a counting algorithm is presented. To our knowledge, the detection, classification and counting of distribution line poles in UAV video had not previously been addressed with machine learning, which is the innovation of this paper.

The detection, classification and counting methods are evaluated using image data extracted from actual UAV line-inspection video together with simulated image data. Compared with other algorithms, encouraging results are obtained: the detection precision reaches 90%, achieving the expected effect, and the method can be applied to rapid assessment of grid losses after disasters. Moreover, the results show good robustness, indicating that the method can be extended to rapid detection of transmission towers and distribution poles in various environments if appropriate training data sets are available. We plan to modify the network structure of YOLO v3 to further improve pole detection accuracy in UAV video.

Notes

Funding

Funding was provided by Research Foundation of Fuzhou University (XRC-1623, XRC-17011), Fujian Provincial Department of Science and Technology (CN) (2017J01728), Fujian Science and Technology Department (2017J01470).

References

  1. Yang P, Pan X, Liu J, Guo R (2017) Optimal fault-tolerant control for UAV systems with time delay and uncertainties over wireless network. Peer Peer Netw Appl 10(3):717–725
  2. Adabo GJ (2014) Long range unmanned aircraft system for power line inspection of Brazilian electrical system. J Energy Power Eng 8(2):394–398
  3. Katrasnik J, Pernus F, Likar B (2010) A survey of mobile robots for distribution power line inspection. IEEE Trans Power Deliv 25(1):485–493
  4. Abbasi E, Fotuhi-Firuzabad M, Abiri-Jahromi A (2009) Risk based maintenance optimization of overhead distribution networks utilizing priority based dynamic programming. In: 2009 IEEE power and energy society general meeting. IEEE, pp 1–11
  5. Deng C, Wang S, Huang Z, Tan Z, Liu J (2014) Unmanned aerial vehicles for power line inspection: a cooperative way in platforms and communications. J Commun 9(9):687–692
  6. Yun L, Wei X, Wei W (2011) Application research on aviation remote sensing UAV for disaster monitoring. J Catastrophol 26(1):138–143
  7. Shen YL, Liu J, Wu LX, Li FS, Wang Z (2011) Reconstruction of disaster scene from UAV images and flight-control data. Geogr Geo Inf Sci 27(6):13–17
  8. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767
  9. Varghese A, Gubbi J, Sharma H, Balamuralidhar P (2017) Power infrastructure monitoring and damage detection using drone captured images. In: 2017 international joint conference on neural networks (IJCNN). IEEE, pp 1681–1687
  10. Wang X, Zhang Y (2016) Insulator identification from aerial images using support vector machine with background suppression. In: 2016 international conference on unmanned aircraft systems (ICUAS). IEEE, pp 892–897
  11. Zhao Z, Xu G, Qi Y, Liu N, Zhang T (2016) Multi-patch deep features for power line insulator status classification from aerial images. In: 2016 international joint conference on neural networks (IJCNN). IEEE, pp 3187–3194
  12. Gubbi J, Varghese A, Balamuralidhar P (2017) A new deep learning architecture for detection of long linear infrastructure. In: 2017 fifteenth IAPR international conference on machine vision applications (MVA). IEEE, pp 207–210
  13. Cheng W, Song Z (2008) Power pole detection based on graph cut. In: 2008 congress on image and signal processing (CISP '08), vol 3. IEEE, pp 720–724
  14. Martinez C, Sampedro C, Chauhan A, Campoy P (2014) Towards autonomous detection and tracking of electric towers for aerial power line inspection. In: 2014 international conference on unmanned aircraft systems (ICUAS). IEEE, pp 284–295
  15. Girshick R (2015) Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision, pp 1440–1448
  16. Ren SQ, He KM, Girshick R et al (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in neural information processing systems, pp 91–99
  17. Redmon J, Divvala S, Girshick R et al (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 779–788
  18. Redmon J, Farhadi A (2016) YOLO9000: better, faster, stronger. arXiv preprint arXiv:1612.08242

Copyright information

© The Author(s) 2019

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
  2. School of Information Science and Engineering, Fujian University of Technology, Fuzhou, China
  3. Fujian Key Laboratory of Automotive Electronics and Electric Drive, Fuzhou, China
