Algorithmica, Volume 81, Issue 5, pp. 2092–2121

Computing Dense Tensor Decompositions with Optimal Dimension Trees

  • Oguz Kaya
  • Yves Robert
Article

Abstract

Dense tensor decompositions are widely used in many signal processing problems, including the analysis of speech signals, the localization of signal sources, and many other communication applications. Computing these decompositions poses major computational challenges for the big datasets emerging in these domains. CANDECOMP/PARAFAC (CP) and Tucker formulations are the prominent tensor decomposition schemes heavily used in these fields, and the algorithms for computing them repeatedly apply two core operations, namely tensor-times-matrix and tensor-times-vector multiplication, within an iterative framework. Recently, efficient computational schemes based on a data structure called a dimension tree have been employed to significantly reduce the cost of these two operations by storing and reusing partial results that are shared across different iterations of these algorithms. This framework was first introduced for sparse CP and Tucker decompositions in the literature, and a recent work investigates the use of an optimal binary dimension tree structure in computing dense Tucker decompositions. In this paper, we investigate finding an optimal dimension tree for both CP and Tucker decompositions. We show that finding an optimal dimension tree for an N-dimensional tensor is NP-hard for both decompositions, provide faster exact algorithms that find an optimal dimension tree in \(O(3^N)\) time using \(O(2^N)\) space for the Tucker case, and extend the algorithm to the case of CP decomposition with the same time and space complexities.
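The \(O(3^N)\) time and \(O(2^N)\) space bounds suggest a dynamic program over subsets of modes: for every mode set, all ways to split it into two child sets are tried, and summing over all subsets of all subsets gives \(3^N\) candidate splits against a table of \(2^N\) entries. The sketch below illustrates that subset-DP structure under a deliberately simplified cost model (the size of the intermediate tensor at each split); the function names and the cost model are illustrative assumptions, not the paper's actual formulation.

```python
from math import prod

def toy_split_cost(mask, dims):
    # Illustrative cost model (an assumption, not the paper's):
    # the number of entries of the intermediate tensor whose
    # uncontracted modes are those encoded in the bitmask `mask`.
    return prod(d for i, d in enumerate(dims) if mask >> i & 1)

def optimal_tree_cost(dims):
    """Subset DP over mode sets: O(3^N) time, O(2^N) space.

    best[S] = cheapest binary dimension tree whose root is
    responsible for the mode set S (a bitmask over modes).
    """
    full = (1 << len(dims)) - 1
    best = [0] * (full + 1)
    for S in range(1, full + 1):
        if S & (S - 1) == 0:        # a single mode is a leaf: no split
            best[S] = 0
            continue
        cost = float("inf")
        T = (S - 1) & S             # enumerate proper nonempty submasks of S
        while T:
            # split S into children T and S \ T (each split is visited
            # twice, which is harmless since the cost is symmetric)
            cost = min(cost, toy_split_cost(S, dims) + best[T] + best[S ^ T])
            T = (T - 1) & S
        best[S] = cost
    return best[full]
```

For a 3-mode tensor with dimensions (2, 3, 4), this toy DP returns 30, obtained by splitting off the largest mode first; an actual implementation would replace `toy_split_cost` with the TTM/TTV cost of materializing each node's partial result.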

Keywords

Tensor computations · CP decomposition · Tucker decomposition · Dimension tree

Notes

Acknowledgements

This research was funded in part by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). The authors would like to thank Bora Uçar for several discussions. Finally, the authors would like to thank both anonymous reviewers for their comments and suggestions, which helped us improve this manuscript.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Laboratoire de Recherche en Informatique (LRI), Orsay Cedex, France
  2. ENS Lyon, Lyon, France
  3. University of Tennessee, Knoxville, USA