Gradient Descent for Gaussian Processes Variance Reduction

  • Lorenzo Bottarelli
  • Marco Loog
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11004)

Abstract

A key issue in Gaussian Process modeling is deciding on the locations where measurements are going to be taken. A good set of observations will provide a better model. The current state of the art selects such a set so as to minimize the posterior variance of the Gaussian Process by exploiting submodularity. We propose a Gradient Descent procedure that iteratively improves an initial set of observations so as to minimize the posterior variance directly. The performance of the technique is analyzed under different conditions by varying the number of measurement points, the dimensionality of the domain, and the hyperparameters of the Gaussian Process. Results show the applicability of the technique and the clear improvements that can be obtained under different settings.
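
To make the idea concrete, below is a minimal Python/JAX sketch of the general approach described in the abstract: treat the total posterior variance over a test grid as a smooth function of the measurement coordinates and refine an initial set of locations by gradient descent. This is an illustrative sketch, not the authors' exact procedure; it assumes an RBF kernel with unit signal variance, a fixed noise level, a [0, 1]^2 domain discretized by a regular grid, plain fixed-step gradient descent with clipping to the domain, and illustrative names (rbf_kernel, total_posterior_variance) and hyperparameter values that are not taken from the paper.

```python
import jax
import jax.numpy as jnp


def rbf_kernel(a, b, lengthscale=0.3):
    """Squared-exponential kernel with unit signal variance (assumed, not from the paper)."""
    sq_dist = jnp.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * sq_dist / lengthscale ** 2)


def total_posterior_variance(S, X_grid, noise_var=1e-2):
    """Sum of GP posterior variances over X_grid, given measurement locations S."""
    K = rbf_kernel(S, S) + noise_var * jnp.eye(S.shape[0])
    K_star = rbf_kernel(X_grid, S)
    # Posterior variance: var(x) = k(x, x) - k(x, S) K^{-1} k(S, x), with k(x, x) = 1 here.
    solved = jnp.linalg.solve(K, K_star.T)            # shape (|S|, |grid|)
    variances = 1.0 - jnp.sum(K_star * solved.T, axis=1)
    return jnp.sum(variances)


# Dense test grid over the unit square used to evaluate the variance objective.
g = jnp.linspace(0.0, 1.0, 25)
X_grid = jnp.stack(jnp.meshgrid(g, g), axis=-1).reshape(-1, 2)

# Random initial set of measurement locations, iteratively improved by gradient descent.
S = jax.random.uniform(jax.random.PRNGKey(0), (6, 2))
grad_fn = jax.grad(total_posterior_variance)
for _ in range(300):
    S = S - 0.05 * grad_fn(S, X_grid)
    S = jnp.clip(S, 0.0, 1.0)   # keep locations inside the assumed [0, 1]^2 domain

print(total_posterior_variance(S, X_grid))
```

Note that the posterior variance of a Gaussian Process depends only on the measurement locations, not on the observed values, so the objective is a smooth function of the coordinates and automatic differentiation yields its gradient directly; this is what makes a continuous gradient-based refinement of an initial observation set possible.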

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Computer Science, University of Verona, Verona, Italy
  2. Pattern Recognition Laboratory, Delft University of Technology, Delft, The Netherlands
