Co-design and Implementation of Image Recognition Based on ARM and FPGA

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 857)

Abstract

With the development of the Internet of Things, image recognition systems are widely required in many fields. These systems demand high real-time performance, yet image recognition typically involves high computational complexity and large volumes of data, so hardware acceleration is the key to meeting real-time requirements. Traditional processors offer low flexibility and configurability for embedded-system prototyping. The Xilinx Zynq-7000 family of processors integrates a dual-core ARM Cortex-A9 with low-power FPGA fabric, which improves the operating efficiency and dynamic configurability of image applications and reduces the power consumption of image processing. In this paper, we present an ARM and FPGA co-design architecture for an image recognition system based on the Zynq-7000 processor, and we validate this architecture with a leaf recognition system. The architecture is designed from modules at the system level and modeled at the algorithm level. After selecting the algorithm, we partition the modules between ARM and FPGA according to algorithm simulation results and then implement them separately. Finally, the ARM and FPGA modules are interconnected through interfaces or drivers; once joint debugging is completed, prototype development of the embedded application is finished. Experiments show that the FPGA and ARM co-design is 1.84 times faster than the pure ARM implementation.
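To make the ARM/FPGA interconnection described above concrete, the following is a minimal sketch (not taken from the paper) of how the ARM side of a Zynq design might hand an image buffer to an FPGA accelerator through a memory-mapped AXI-Lite register block. The base address, register offsets, and control-bit layout are hypothetical placeholders; in a real design they come from the Vivado address map and the accelerator's interface specification.

```c
/*
 * Hedged sketch: ARM-side invocation of a hypothetical FPGA image
 * accelerator on Zynq via /dev/mem and memory-mapped AXI-Lite registers.
 * ACCEL_BASE, the register offsets, and the start/done bits are assumed
 * for illustration only.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define ACCEL_BASE   0x43C00000u   /* hypothetical AXI-Lite base address     */
#define REG_CTRL     0x00          /* bit0 = start, bit1 = done (assumed)    */
#define REG_SRC_ADDR 0x04          /* physical address of the input image    */
#define REG_LENGTH   0x08          /* image size in bytes                    */

int run_fpga_stage(uint32_t img_phys, uint32_t img_bytes)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return -1; }

    /* Map one 4 KiB page covering the accelerator's register block. */
    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, ACCEL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return -1; }

    regs[REG_SRC_ADDR / 4] = img_phys;   /* tell the FPGA where the image is */
    regs[REG_LENGTH   / 4] = img_bytes;
    regs[REG_CTRL     / 4] = 0x1;        /* raise the start bit              */

    while (!(regs[REG_CTRL / 4] & 0x2))  /* poll the assumed done bit        */
        usleep(100);

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}
```

In practice such register access is usually wrapped in a kernel driver (UIO or a custom character device) rather than raw /dev/mem, which matches the paper's note that ARM and FPGA modules are connected by an interface or driver.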

Keywords

Co-design and implementation · ARM + FPGA · Image recognition

Notes

Acknowledgement

This work is partially sponsored by the Natural Science Foundation of Shanghai (15ZR1410000).


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. School of Computer Science and Software Engineering, MOE Research Center for Software/Hardware Co-Design Engineering, East China Normal University, Shanghai, China