Evaluating the NVIDIA Tegra Processor as a Low-Power Alternative for Sparse GPU Computations

  • José I. Aliaga
  • Ernesto Dufrechou (email author)
  • Pablo Ezzatti
  • Enrique S. Quintana-Ortí
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 796)


In recent years, the presence of heterogeneous hardware platforms in the HPC field has increased enormously. One of the major reasons for this evolution is the need to address energy consumption constraints. As an alternative for reducing the power consumption of large clusters, new systems that include unconventional devices have been proposed. In particular, it is now common to encounter energy-efficient hardware such as GPUs and low-power ARM processors as part of hardware platforms intended for scientific computing.

A current line of our work aims to enhance the linear system solvers of ILUPACK by leveraging the combined computational power of GPUs and distributed-memory platforms. One drawback of our solution is the limited degree of parallelism offered by each sub-problem in the distributed version of ILUPACK, which is insufficient to exploit a conventional GPU architecture.

This work is a first step towards exploiting energy-efficient hardware to compute the ILUPACK solvers. Specifically, we developed a tuned implementation of the SPD linear system solver of ILUPACK for the NVIDIA Jetson TX1 platform, and evaluated its performance on problems that are unable to fully leverage the capabilities of high-end GPUs. The positive results obtained motivate us to move our solution to a cluster composed of this kind of device in the near future.
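The SPD solver discussed above is an iterative Krylov method (preconditioned conjugate gradient) built around ILUPACK's multilevel ILU preconditioners. As a rough illustration of the solver structure being accelerated, the sketch below implements PCG in pure Python on a tiny dense SPD system, using a simple Jacobi (diagonal) preconditioner as a stand-in for ILUPACK's far more sophisticated preconditioner; the matrix, right-hand side, and tolerances are illustrative assumptions, not data from the paper.

```python
# Minimal preconditioned conjugate gradient (PCG) sketch.
# NOTE: ILUPACK applies inverse-based multilevel ILU preconditioning to
# large sparse systems; a Jacobi preconditioner on a dense 2x2 system is
# used here only to show the iteration's structure.

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual r = b - A*x, with x = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner M^{-1}
    z = [M_inv[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        beta = rz_new / rz
        p = [z[i] + beta * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # small SPD example system
b = [1.0, 2.0]
x = pcg(A, b)                  # exact solution is [1/11, 7/11]
```

In the GPU implementation targeted by the paper, the expensive kernels per iteration (the sparse matrix-vector product and the application of the preconditioner, i.e., sparse triangular solves) are the operations offloaded to the device.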


Keywords: ILUPACK · Jetson TX1 · Sparse linear systems · High performance



The researchers from the Universidad Jaime I were supported by the CICYT project TIN2014-53495R. The researchers from UdelaR were supported by PEDECIBA and a CAP-UdelaR Grant.



Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • José I. Aliaga (1)
  • Ernesto Dufrechou (2) (email author)
  • Pablo Ezzatti (2)
  • Enrique S. Quintana-Ortí (1)
  1. Dep. de Ingeniería y Ciencia de la Computación, Universidad Jaime I, Castellón, Spain
  2. Instituto de Computación, Universidad de la República, Montevideo, Uruguay
