Gaussian Process Learning: A Divide-and-Conquer Approach

  • Wenye Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8866)

Abstract

The Gaussian Process (GP) model is widely used in many hard machine learning tasks, but in practice it faces scalability challenges. In this manuscript, we propose a domain decomposition method for GP learning. We show that the GP model has an inherent capability of being trained through divide-and-conquer: a large GP learning problem can be divided into smaller problems, and by solving the smaller problems and merging their solutions, the solution to the original problem is guaranteed to be recovered. We further verify the efficiency and effectiveness of the algorithm through experiments.
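
The paper's full method is not reproduced on this page, but the divide-and-conquer idea stated in the abstract can be illustrated with a minimal sketch. The Python code below is an illustrative assumption, not the authors' algorithm: it solves the GP regression linear system (K + noise * I) * alpha = y by block coordinate descent, splitting the training points into sub-domains, solving each small local system in turn, and merging the corrections until the global residual is small. The kernel choice, function names, and parameters (rbf_kernel, gp_block_solve, noise, n_blocks) are hypothetical.

    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0):
        # Squared-exponential kernel matrix for 1-D inputs.
        d = X1[:, None] - X2[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)

    def gp_block_solve(X, y, noise=0.1, n_blocks=4, n_sweeps=100, tol=1e-8):
        # Solve (K + noise*I) alpha = y by block coordinate descent:
        # partition the training points into sub-domains, repeatedly solve
        # each small local system against the current residual, and merge
        # the corrections until the global residual is small.
        n = len(X)
        K = rbf_kernel(X, X) + noise * np.eye(n)
        blocks = np.array_split(np.arange(n), n_blocks)
        alpha = np.zeros(n)
        for _ in range(n_sweeps):
            for idx in blocks:
                # Local residual of the current global solution on this block.
                r_local = y[idx] - K[idx, :] @ alpha
                # Solve the small sub-problem and merge the correction.
                alpha[idx] += np.linalg.solve(K[np.ix_(idx, idx)], r_local)
            if np.linalg.norm(y - K @ alpha) < tol:
                break
        return alpha

    # Usage: fit on synthetic data, then evaluate the GP posterior mean.
    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0.0, 5.0, 200))
    y = np.sin(X) + 0.1 * rng.standard_normal(200)
    alpha = gp_block_solve(X, y)
    X_test = np.linspace(0.0, 5.0, 50)
    mean = rbf_kernel(X_test, X) @ alpha

Because K + noise*I is symmetric positive definite, each sweep of block updates is a convergent Gauss-Seidel-style iteration, which is the sense in which repeatedly solving the small sub-problems and merging their corrections recovers the global solution.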

Keywords

Gaussian process · Domain decomposition · Machine learning

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

Macao Polytechnic Institute, Macao SAR, China