Evaluating LULESH Kernels on OpenCL FPGA

  • Zheming Jin
  • Hal Finkel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11444)

Abstract

FPGAs are becoming promising heterogeneous computing components for high-performance computing. In this paper, we evaluate the resource utilization, performance, and performance per watt of our OpenCL implementations of the LULESH kernels on an Arria 10-based FPGA platform. LULESH is a complex proxy application in the CORAL benchmark suite. We choose two representative kernels, “CalcFBHourglassForceForElems” and “EvalEOSForElems”, from the application for our study. Compared with the baseline implementations, our optimizations improve the performance of the two kernels on the FPGA by factors of 1.65X and 2.96X, respectively. Using directives for accelerator programming, we also evaluate the performance of the kernels on an Intel Xeon 16-core CPU and an Nvidia K80 GPU. We find that the FPGA, constrained by its memory bandwidth, can perform 1.05X to 3.4X better than the CPU and GPU for small problem sizes. For the first kernel, the performance per watt on the FPGA is 1.59X and 7.1X higher than that on the Intel Xeon 16-core CPU and the Nvidia K80 GPU, respectively. For the second kernel, the performance per watt on the GPU is 1.82X higher than that on the FPGA; however, the performance per watt on the FPGA is 1.77X higher than that on the CPU.
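Performance per watt, the efficiency metric used throughout the abstract, is simply measured throughput divided by sustained device power. As a minimal illustrative sketch, the throughput and power figures below are hypothetical placeholders (not measurements from the paper), chosen so that the FPGA-vs-CPU ratio reproduces the 1.59X reported for the first kernel:

```python
def perf_per_watt(throughput, power_watts):
    """Efficiency metric: useful work per second, per watt of device power."""
    return throughput / power_watts

# Hypothetical figures (NOT from the paper), for illustration only.
fpga_eff = perf_per_watt(10.0, 25.0)   # e.g. 10 work-units/s at 25 W
cpu_eff = perf_per_watt(20.0, 79.5)    # e.g. 20 work-units/s at 79.5 W

# A lower absolute throughput can still win on efficiency if power is low.
print(round(fpga_eff / cpu_eff, 2))  # -> 1.59
```

The design point this illustrates: even when the FPGA's raw throughput trails the CPU's, its much lower power draw can yield a higher performance-per-watt ratio, which is the comparison the paper reports.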

Keywords

FPGA · OpenCL · LULESH · Kernel optimizations

Acknowledgments

The research was supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357 and made use of the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Leadership Computing Facility, Argonne National Laboratory, Argonne, USA
