A Parameterisable FPGA-Tailored Architecture for YOLOv3-Tiny

  • Zhewen Yu
  • Christos-Savvas Bouganis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12083)

Abstract

Object detection is the task of detecting the position of objects in an image or video, as well as their corresponding class. The current state-of-the-art approach that achieves the highest performance (i.e. fps) without a significant penalty in detection accuracy is the YOLO framework, and more specifically its latest version, YOLOv3. When deployment on embedded systems is targeted, YOLOv3-tiny, a lightweight version of YOLOv3, is usually adopted. The presented work is the first to implement a parameterised FPGA-tailored architecture specifically for YOLOv3-tiny. The architecture is optimised for latency-sensitive applications and can be deployed on low-end devices with stringent resource constraints. Experiments demonstrate that, when a low-end FPGA device is targeted, the proposed architecture achieves a 290x improvement in latency compared to the hard-core processor of the device, while incurring a reduction in mAP of only 2.5 pp (30.9% vs. 33.4%) relative to the original model. The presented work opens the way for low-latency object detection on low-end FPGA devices.
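The accuracy figures quoted above can be sanity-checked as a percentage-point (pp) difference rather than a relative percentage drop. A minimal arithmetic sketch, using only the two mAP values reported in the abstract:

```python
# mAP of the original YOLOv3-tiny model vs. the proposed FPGA implementation,
# both taken from the abstract above (values in per cent).
map_original = 33.4
map_fpga = 30.9

# The reduction is expressed in percentage points (pp): a simple difference,
# not a relative drop (which would be (33.4 - 30.9) / 33.4 ~ 7.5%).
reduction_pp = map_original - map_fpga
print(f"mAP reduction: {reduction_pp:.1f} pp")
```

This distinction matters when comparing results across papers: a 2.5 pp gap at a 33.4% baseline corresponds to roughly a 7.5% relative accuracy loss.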

Keywords

YOLOv3-tiny · FPGA · Object detection

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Electrical and Electronic Engineering, Imperial College London, London, UK