
Approximations of Gaussian Process Uncertainties for Visual Recognition Problems

  • Paul Bodesheim
  • Alexander Freytag
  • Erik Rodner
  • Joachim Denzler
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7944)

Abstract

Gaussian processes offer the advantage of expressing classification uncertainty in terms of the predictive variance associated with a classification result. This is especially useful for selecting informative samples in active learning and for spotting samples of previously unseen classes, a task known as novelty detection. However, the Gaussian process framework suffers from high computational complexity, leading to computation times too large for practical applications. Hence, we propose an approximation of the Gaussian process predictive variance that leads to substantial speedups. The complexity of both learning and testing the classification model, in terms of computation time and memory demand, decreases by one order of magnitude with respect to the number of training samples involved. The benefits of our approximations are verified in experimental evaluations for novelty detection and active learning of visual object categories on the datasets C-Pascal (derived from Pascal VOC 2008), Caltech-256, and ImageNet.
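
To make the claimed savings concrete, the sketch below contrasts the exact GP predictive variance with a cheap eigenvalue-based upper bound, in the spirit of the maximum-eigenvalue techniques the paper builds on. This is a minimal NumPy illustration under our own assumptions (RBF kernel; function names such as `variance_upper_bound` are hypothetical), not the authors' actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between row-wise sample sets A and B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def exact_variance(K, k_star, k_ss, noise=1e-3):
    """Exact GP predictive variance:
    sigma^2(x*) = k(x*, x*) - k_*^T (K + noise*I)^{-1} k_*.
    Costs O(n^3) once for the Cholesky factor, O(n^2) per test sample."""
    L = np.linalg.cholesky(K + noise * np.eye(K.shape[0]))
    v = np.linalg.solve(L, k_star)      # v = L^{-1} k_*
    return k_ss - v @ v                 # k_*^T (K+noise*I)^{-1} k_* = ||v||^2

def variance_upper_bound(K, k_star, k_ss, noise=1e-3):
    """Cheap upper bound on the predictive variance. The smallest
    eigenvalue of (K + noise*I)^{-1} is 1/lambda_max(K + noise*I), so
    k_*^T (K+noise*I)^{-1} k_* >= ||k_*||^2 / lambda_max, and therefore
    sigma^2(x*) <= k(x*, x*) - ||k_*||^2 / lambda_max.
    lambda_max is computed once; each test sample then costs only O(n)."""
    lam_max = np.linalg.eigvalsh(K + noise * np.eye(K.shape[0]))[-1]
    return k_ss - (k_star @ k_star) / lam_max

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))      # n = 200 training samples
    x = rng.normal(size=(1, 16))        # one test sample
    K = rbf_kernel(X, X)
    k_star = rbf_kernel(X, x).ravel()
    k_ss = 1.0                          # RBF kernel: k(x, x) = 1
    print("exact variance :", exact_variance(K, k_star, k_ss))
    print("upper bound    :", variance_upper_bound(K, k_star, k_ss))
```

In a real system, lambda_max would be obtained with a few power iterations rather than a full eigendecomposition; the point is only that, after a one-off preprocessing cost, the per-test-sample cost drops from quadratic to linear in the number of training samples, the order-of-magnitude reduction the abstract describes.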

Keywords

Training Sample, Gaussian Process, Target Class, Novelty Detection, Memory Demand
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Paul Bodesheim (1)
  • Alexander Freytag (1)
  • Erik Rodner (1, 2)
  • Joachim Denzler (1)

  1. Computer Vision Group, Friedrich Schiller University Jena, Germany
  2. International Computer Science Institute, UC Berkeley EECS, United States
