Abstract
High-dimensional tensors of low rank can be represented in the hierarchical Tucker format (HT format) with a complexity that is linear in the tensor dimension d. We develop parallel algorithms which perform arithmetic operations on tensors in the HT format, where we assume the tensor data to be distributed over several compute nodes. Due to the tree structure of the HT format, the parallel runtime of our algorithms grows like \(\log (d)\) with the tensor dimension d. On each compute node, shared memory parallelization can be used to accelerate the algorithms further. One application of our algorithms is parameter-dependent problems: solutions of such problems can be approximated as tensors in the HT format if the parameter dependencies fulfil a certain low-rank property. Our algorithms can then be used for post-processing of solution tensors, e.g., to compute mean values, expected values or other quantities of interest. If the problem is of the form Ax = b with the matrix A also given in the HT format, we can compute the residual of a solution tensor or even compute the entire solution directly in the HT format by means of iterative methods.
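The \(\log (d)\) parallel runtime comes from the depth of the dimension tree underlying the HT format. As a minimal sketch (not taken from the chapter; the tree layout and function names are our own choices), a balanced binary dimension tree over d dimensions has depth \(1 + \lceil \log _2 d\rceil\), so tree-parallel arithmetic needs \(O(\log d)\) parallel steps:

```python
# Sketch: a balanced binary dimension tree as used conceptually in the
# HT format. Each node holds a subset of the dimensions; leaves hold
# single dimensions. The depth grows logarithmically in d.

def build_dimension_tree(dims):
    """Recursively split the list of dimensions into a balanced binary tree."""
    if len(dims) == 1:
        return {"dims": dims, "children": []}
    mid = len(dims) // 2
    return {"dims": dims,
            "children": [build_dimension_tree(dims[:mid]),
                         build_dimension_tree(dims[mid:])]}

def depth(node):
    """Number of levels from this node down to its deepest leaf."""
    return 1 + max((depth(c) for c in node["children"]), default=0)

tree = build_dimension_tree(list(range(1, 9)))  # d = 8 dimensions
print(depth(tree))  # 4, i.e., 1 + log2(8) levels
```

Operations that sweep the tree level by level (as the distributed algorithms in the chapter do across compute nodes) then take a number of parallel steps proportional to this depth.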
Notes
- 1.
We abbreviate \(\mathbb {R}^{\mathcal {I}_1\times \{1,\ldots ,r\}}\) by \(\mathbb {R}^{\mathcal {I}_1\times r}\).
- 2.
An rSVD exists for any matrix which is not the zero matrix. This is, in general, not the case for tensors of higher dimension d > 2. Nevertheless, if \(\mathcal {I}_t\) is large, an rSVD of \(\mathcal {M}_t(A)\) may no longer be computable. This is, however, not a handicap for us when we have available HT representations of a matrix A and a right-hand side B and want to solve AX = B by some iterative method inside the HT format. We can then choose a starting vector \(X_0\) in the HT format (e.g., \(X_0 := B\)) and will never have to transfer a tensor into the HT format.
If we need to approximate large tensors in the HT format, we may use other approximation techniques as, e.g., the cross approximation for HT tensors [2].
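The iterative strategy of this note can be illustrated in plain dense arithmetic. The sketch below runs a damped Richardson iteration \(X_{k+1} = X_k + \omega (B - A X_k)\) starting from \(X_0 := B\); inside the HT format, the matrix-vector product and the addition would be HT operations followed by truncation (the function name and the choice of \(\omega\) are ours, not the chapter's):

```python
import numpy as np

# Illustrative dense stand-in for an iterative solve that, in the HT
# setting, would use HT matvec, HT addition, and rank truncation.
def richardson(A, b, x0, omega, n_iter):
    """Damped Richardson iteration x <- x + omega * (b - A x)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x + omega * (b - A @ x)  # HT analogue: matvec, add, truncate
    return x

A = np.diag([2.0, 3.0, 4.0])
b = np.array([2.0, 3.0, 4.0])
x = richardson(A, b, x0=b, omega=0.25, n_iter=200)  # converges to [1, 1, 1]
```

Since all iterates are built from B and A by additions and matrix-vector products, every iterate stays representable in the HT format (up to truncation), which is exactly why no conversion into the format is ever needed.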
- 3.
We define \((r_t)_{t\in T} \le (s_t)_{t\in T} :\Leftrightarrow r_t \le s_t\) for all \(t\in T\).
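This entry-wise comparison of rank tuples is straightforward to state in code. In the sketch below (our own illustration; rank tuples are stored as dicts keyed by the tree nodes \(t\in T\)), two tuples are comparable only if one dominates the other at every node:

```python
# Entry-wise partial order on HT rank tuples, with the nodes t in T
# used as dictionary keys (an assumption of this sketch).
def rank_le(r, s):
    """(r_t)_{t in T} <= (s_t)_{t in T}  iff  r_t <= s_t for all t in T."""
    return all(r[t] <= s[t] for t in r)

r = {"root": 1, "left": 2, "right": 2}
s = {"root": 1, "left": 3, "right": 2}
print(rank_le(r, s))  # True
print(rank_le(s, r))  # False: it is only a partial order
```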
- 4.
The Hadamard product \(x\circ y\) of two vectors \(x,y\in \mathbb {R}^{\mathcal {I}}\) is the vector of the entry-wise products: \((x\circ y)(i) = x(i)\cdot y(i)\) for all \(i\in \mathcal {I}\).
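For concreteness, the entry-wise product can be sketched for plain Python lists indexed by \(i = 0,\ldots ,n-1\) (our own illustration, not code from the chapter):

```python
# Hadamard (entry-wise) product of two equally long vectors.
def hadamard(x, y):
    assert len(x) == len(y), "vectors must share the same index set"
    return [xi * yi for xi, yi in zip(x, y)]

print(hadamard([1, 2, 3], [4, 5, 6]))  # [4, 10, 18]
```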
References
Ballani, J., Grasedyck, L.: Tree adaptive approximation in the hierarchical tensor format. SIAM J. Sci. Comput. 36(4), A1415–A1431 (2014)
Ballani, J., Grasedyck, L., Kluge, M.: Black box approximation of tensors in hierarchical Tucker format. Linear Algebra Appl. 438(2), 639–657 (2013)
Etter, S.: Parallel ALS algorithm for solving linear systems in the hierarchical Tucker representation. SIAM J. Sci. Comput. 38(4), A2585–A2609 (2016)
Grasedyck, L.: Hierarchical singular value decomposition of tensors. SIAM J. Matrix Anal. Appl. 31, 2029–2054 (2010)
Grasedyck, L., Löbbert, C.: Distributed hierarchical SVD in the hierarchical Tucker format. arXiv (2017). URL http://arxiv.org/abs/1708.03340
Hackbusch, W.: Tensor Spaces and Numerical Tensor Calculus, Springer Series in Computational Mathematics, vol. 42. Springer, Heidelberg (2012)
Karlsson, L., Kressner, D., Uschmajew, A.: Parallel algorithms for tensor completion in the CP format. Parallel Comput. 57, 222–234 (2016)
Solomonik, E., Matthews, D., Hammond, J.R., Stanton, J.F., Demmel, J.: A massively parallel tensor contraction framework for coupled-cluster computations. J. Parallel Distrib. Comput. 74(12), 3176–3190 (2014)
Austin, W., Ballard, G., Kolda, T.G.: Parallel tensor compression for large-scale scientific data. In: 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 912–922 (2016)
Acknowledgement
The authors gratefully acknowledge the support by the DFG priority programme 1648 (SPPEXA) under grant GR-3179/4-2.
© 2019 Springer Nature Switzerland AG
Cite this chapter
Grasedyck, L., Löbbert, C. (2019). Parallel Algorithms for Low Rank Tensor Arithmetic. In: Singh, V., Gao, D., Fischer, A. (eds) Advances in Mathematical Methods and High Performance Computing. Advances in Mechanics and Mathematics, vol 41. Springer, Cham. https://doi.org/10.1007/978-3-030-02487-1_16
Print ISBN: 978-3-030-02486-4
Online ISBN: 978-3-030-02487-1