Posit NPB: Assessing the Precision Improvement in HPC Scientific Applications

  • Steven W. D. Chien
  • Ivy B. Peng
  • Stefano Markidis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12043)

Abstract

Floating-point operations can significantly impact the accuracy and performance of scientific applications on large-scale parallel systems. Recently, an emerging floating-point format called Posit has attracted attention as an alternative to the standard IEEE 754 formats because it can provide higher precision with the same number of bits. In this work, we first explore the feasibility of Posit encoding in representative HPC applications by providing a 32-bit Posit version of the NAS Parallel Benchmark (NPB) suite. We then evaluate the accuracy improvement in different HPC kernels with respect to the IEEE 754 format. Our results indicate that the Posit encoding improves precision by 0.6 to 1.4 decimal digits across all tested kernels and proxy-applications. We also quantify the overhead of the current software implementation of Posit arithmetic at 4×–19× that of the hardware-supported IEEE 754 format. Our study highlights the potential of hardware implementations of Posit to benefit a broad range of HPC applications.
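
The comparison described above rests on measuring how many correct decimal digits a 32-bit Posit result retains relative to IEEE 754 single precision. The following is a minimal, hypothetical sketch of such a measurement, assuming the SoftPosit C library (its posit32_t type and the convertDoubleToP32, convertP32ToDouble, p32_add, and p32_mul routines); it illustrates the idea of the evaluation and is not the benchmark code used in the paper.

/*
 * Hedged sketch (not the paper's code): estimate the decimal-digit accuracy
 * of 32-bit Posit vs. IEEE 754 single precision on a dot product, using the
 * SoftPosit C library as the software Posit implementation.
 * Build (paths are placeholders):
 *   gcc dot_accuracy.c -I<softposit>/include -L<softposit>/build -lsoftposit -lm
 */
#include <stdio.h>
#include <math.h>
#include "softposit.h"   /* posit32_t, convertDoubleToP32, p32_add, p32_mul */

/* Correct decimal digits of an approximation relative to a reference value. */
static double decimal_digits(double approx, double ref)
{
    double rel = fabs(approx - ref) / fabs(ref);
    return (rel == 0.0) ? 16.0 : -log10(rel);  /* cap at double precision */
}

int main(void)
{
    const int n = 1 << 20;
    double    ref  = 0.0;                      /* double-precision reference    */
    float     accf = 0.0f;                     /* IEEE 754 binary32 accumulator */
    posit32_t accp = convertDoubleToP32(0.0);  /* 32-bit Posit accumulator      */

    for (int i = 0; i < n; i++) {
        double x = 1.0 / (i + 1.0);
        double y = 1.0 / (n - i);

        ref  += x * y;                         /* reference dot product         */
        accf += (float)x * (float)y;           /* single-precision dot product  */

        posit32_t px = convertDoubleToP32(x);  /* Posit dot product via SoftPosit */
        posit32_t py = convertDoubleToP32(y);
        accp = p32_add(accp, p32_mul(px, py));
    }

    printf("float  : %.3f correct decimal digits\n",
           decimal_digits((double)accf, ref));
    printf("posit32: %.3f correct decimal digits\n",
           decimal_digits(convertP32ToDouble(accp), ref));
    return 0;
}

The difference between the two reported digit counts is one way to express a per-kernel precision gain in decimal digits, in the spirit of the 0.6–1.4 digit improvement reported above.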

Keywords

HPC · Floating-point precision · Posit · NPB

Acknowledgments

Funding for this work was received from the European Commission H2020 program, Grant Agreement No. 801039 (EPiGRAM-HS). LLNL release: LLNL-PROC-779741.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Steven W. D. Chien (1)
  • Ivy B. Peng (2)
  • Stefano Markidis (1)
  1. KTH Royal Institute of Technology, Stockholm, Sweden
  2. Lawrence Livermore National Laboratory, Livermore, USA