
Greedy Algorithms for Matrix-Valued Kernels

  • Conference paper

Part of the book series: Lecture Notes in Computational Science and Engineering (LNCSE, volume 126)

Abstract

We are interested in approximating vector-valued functions on a compact set \(\varOmega \subset \mathbb {R}^d\). We consider reproducing kernel Hilbert spaces of \(\mathbb {R}^m\)-valued functions, each of which admits a unique matrix-valued reproducing kernel k. These spaces seem promising when modelling correlations between the components of the target function. The approximant of a function is a linear combination of matrix-valued kernel evaluations multiplied by coefficient vectors. To guarantee a fast evaluation of the approximant, the expansion size, i.e. the number of centers n, should be small. We therefore present three greedy algorithms that choose a suitable set of centers incrementally: first, the P-Greedy, which requires no function evaluations; second and third, the f-Greedy and fP-Greedy, which require function evaluations but produce centers tailored to the target function. The efficiency of the approaches is investigated on data from an artificial model.
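To make the scheme above concrete, the following is a minimal, self-contained Python sketch of the kernel expansion and the f-Greedy center selection. It assumes a separable matrix-valued kernel k(x, y) = g(x, y)·A with a Gaussian scalar kernel g and a symmetric positive definite matrix A; this kernel choice, all parameter values, and the names gauss, block_kernel, f_greedy and target are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: f-Greedy selection for a separable matrix-valued kernel
# k(x, y) = g(x, y) * A (Gaussian scalar part g, SPD matrix A).
# Kernel choice, parameters and names are illustrative assumptions.
import numpy as np

def gauss(X, Y, eps=2.0):
    """Scalar Gaussian kernel matrix g(x, y) = exp(-eps^2 ||x - y||^2)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps ** 2 * d2)

def block_kernel(X, Y, A, eps=2.0):
    """Block matrix whose (i, j) block is k(x_i, y_j) = g(x_i, y_j) * A."""
    return np.kron(gauss(X, Y, eps), A)

def f_greedy(X, F, A, n_centers, eps=2.0):
    """Repeatedly add the candidate point with the largest residual norm,
    then recompute the interpolant on the selected centers."""
    N, m = F.shape
    idx, coeff = [], np.zeros(0)
    for _ in range(n_centers):
        # residual of the current approximant on all candidate points
        if idx:
            S = (block_kernel(X, X[idx], A, eps) @ coeff).reshape(N, m)
            res = F - S
        else:
            res = F.copy()
        norms = np.linalg.norm(res, axis=1)
        norms[idx] = -np.inf                      # never re-select a chosen center
        idx.append(int(np.argmax(norms)))
        # solve the block interpolation system K(X_c, X_c) alpha = f(X_c)
        K = block_kernel(X[idx], X[idx], A, eps)
        coeff = np.linalg.solve(K, F[idx].reshape(-1))
    return idx, coeff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(400, 2))     # candidate points in Omega
    A = np.array([[2.0, 0.5], [0.5, 1.0]])        # SPD matrix coupling the two outputs

    def target(x):                                # artificial R^2-valued test function
        return np.stack([np.sin(3 * x[:, 0]) * x[:, 1], np.cos(2 * x[:, 1])], axis=1)

    F = target(X)
    idx, coeff = f_greedy(X, F, A, n_centers=30)
    S = (block_kernel(X, X[idx], A) @ coeff).reshape(F.shape)
    print("max residual on candidates after 30 centers:", np.abs(F - S).max())
```

The P-Greedy and fP-Greedy variants would only change the selection criterion inside the loop (the power function, respectively the residual scaled by the power function); the block interpolation step would stay the same.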



Acknowledgements

We thank Gabriele Santin for fruitful discussions.

Author information


Corresponding author

Correspondence to Dominik Wittwar.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Wittwar, D., Haasdonk, B. (2019). Greedy Algorithms for Matrix-Valued Kernels. In: Radu, F., Kumar, K., Berre, I., Nordbotten, J., Pop, I. (eds) Numerical Mathematics and Advanced Applications ENUMATH 2017. ENUMATH 2017. Lecture Notes in Computational Science and Engineering, vol 126. Springer, Cham. https://doi.org/10.1007/978-3-319-96415-7_8

