Abstract
The Gaussian Process (GP) model is widely used in challenging machine learning tasks, but in practice it suffers from poor scalability, since exact training scales cubically in the number of training points. In this paper we propose a domain decomposition method for GP learning and show that the GP model has an inherent capability of being trained in a divide-and-conquer fashion: a large GP learning problem can be divided into smaller subproblems, and solving these subproblems and merging their solutions is guaranteed to recover the solution of the original problem. Experiments further verify the efficiency and effectiveness of the algorithm.
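The abstract does not spell out the decomposition scheme, so the Python sketch below only illustrates the general divide-and-conquer idea on the GP posterior-mean system (K + σ²I)α = y, using a block Gauss-Seidel iteration over partitions of the training data; the RBF kernel, noise level, block size, and update rule are assumptions made for this illustration, not the authors' algorithm.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix between row sets A and B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_fit_block_gauss_seidel(X, y, noise=0.1, block=200, sweeps=50, tol=1e-8):
    # Illustrative sketch (not the paper's algorithm): solve (K + noise^2 I) alpha = y
    # by sweeping over blocks of training points. Each block solve is a small
    # subproblem (divide); the updated coefficients are immediately visible to the
    # other blocks (merge).
    n = X.shape[0]
    A = rbf_kernel(X, X) + noise**2 * np.eye(n)
    alpha = np.zeros(n)
    blocks = [np.arange(s, min(s + block, n)) for s in range(0, n, block)]
    for _ in range(sweeps):
        for idx in blocks:
            # right-hand side seen by this block, given the other blocks' coefficients
            r = y[idx] - A[idx] @ alpha + A[np.ix_(idx, idx)] @ alpha[idx]
            alpha[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
        if np.linalg.norm(A @ alpha - y) < tol * np.linalg.norm(y):
            break
    return alpha

def gp_predict_mean(X_train, alpha, X_test, lengthscale=1.0):
    # GP posterior mean at the test points.
    return rbf_kernel(X_test, X_train, lengthscale) @ alpha

Because K + σ²I is symmetric positive definite, this block sweep converges to the exact solution of the full system, which mirrors the abstract's claim that merging the subproblem solutions recovers the original solution; a practical large-scale implementation would compute kernel blocks on demand rather than forming the full matrix as this sketch does.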
Cite this paper
Li, W. (2014). Gaussian Process Learning: A Divide-and-Conquer Approach. In: Zeng, Z., Li, Y., King, I. (eds.) Advances in Neural Networks – ISNN 2014. Lecture Notes in Computer Science, vol. 8866. Springer, Cham. https://doi.org/10.1007/978-3-319-12436-0_29