
Interpolation Models with Multiple Hyperparameters

  • David J C MacKay
  • Ryo Takeuchi
Conference paper
Part of the Fundamental Theories of Physics book series (FTPH, volume 70)

Abstract

A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone determines, globally, all of the following attributes of the interpolant: its ‘complexity’, ‘flexibility’, ‘smoothness’, ‘characteristic scale length’, and ‘characteristic amplitude’. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of ‘conditional convexity’ when designing models with many hyperparameters.
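The single-hyperparameter model described above can be made concrete with a small sketch (not the authors' code; the discretization and second-difference regularizer are illustrative assumptions). The interpolant f minimizes β‖y − f‖² + α‖Df‖², where D is a roughness (second-difference) operator, so the posterior mean solves (βI + αDᵀD)f = βy. Dividing through by β shows that the solution depends on α and β only through the ratio α/β, as the abstract states.

```python
import numpy as np

def interpolate(y, alpha, beta):
    """Posterior-mean interpolant for the quadratic model
    beta*||y - f||^2 + alpha*||D f||^2, with D a second-difference operator."""
    n = len(y)
    # Second-difference (roughness) matrix D, shape (n-2, n)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Normal equations: (beta*I + alpha*D^T D) f = beta*y
    A = beta * np.eye(n) + alpha * D.T @ D
    return np.linalg.solve(A, beta * y)

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0.0, 3.0, 20)) + 0.1 * rng.standard_normal(20)

f1 = interpolate(y, alpha=1.0, beta=10.0)
f2 = interpolate(y, alpha=2.0, beta=20.0)  # same ratio alpha/beta
assert np.allclose(f1, f2)                 # only the ratio matters
```

Because a single ratio controls smoothness everywhere at once, this model cannot be smooth in one region and wiggly in another; that limitation is what motivates the spatially varying hyperparameters of the paper.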

Keywords

Traditional Model · Noise Model · Markov Chain Monte Carlo Method · Artificial Data · Characteristic Amplitude



Copyright information

© Kluwer Academic Publishers 1996

Authors and Affiliations

  • David J C MacKay (1)
  • Ryo Takeuchi (2)
  1. Cavendish Laboratory, Cambridge, UK
  2. Waseda University, Tokyo, Japan
