Maximum Entropy, Likelihood and Uncertainty: A Comparison
A framework for comparing the maximum likelihood (ML) and maximum entropy (ME) approaches is developed. Two types of linear models are considered. In the first type, the objective is to estimate probability distributions given some moment conditions; in this case the ME and ML approaches are equivalent. A generalization of this type of estimation model that incorporates noisy data is discussed as well. The second type of model encompasses traditional linear regression models, where the number of observations is larger than the number of unknowns and the objects to be inferred are not natural probabilities. After reviewing a generalized ME estimator and the empirical likelihood (or weighted least squares) estimator, the two are compared and contrasted with ML. It is shown that, in general, the ME estimators use less input information and may be viewed, within the second type of model, as expected log-likelihood estimators. In terms of informational ranking, if the objective is to estimate with minimal a priori assumptions, then the generalized ME estimator is superior to the other estimators. Two detailed examples, reflecting the two types of models, are discussed. The first example deals with estimating a first-order Markov process. In the second example, the empirical (natural) weights of each observation, together with the other unknowns, are the subject of interest.
Key words: Empirical likelihood · Information · Maximum entropy · Maximum likelihood
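The first type of model described in the abstract is the classical Jaynes setting: recover a discrete probability distribution from moment conditions alone, where ME and ML coincide because the ME solution is the exponentially tilted (Gibbs) family that ML would fit. The sketch below is not code from the paper; it is a minimal illustration of that setting in which the support points, the target mean, and the root-finding bracket are illustrative assumptions. The dual problem reduces to solving for a single Lagrange multiplier.

```python
# Minimal sketch (illustrative, not from the paper): pure ME estimation of a
# distribution p over a known finite support from one moment condition,
# sum_k p_k * x_k = mu. The ME solution has the form p_k ∝ exp(lam * x_k).
import numpy as np
from scipy.optimize import brentq

def maxent_probs(support, mu):
    """Maximize -sum p log p subject to sum p = 1 and sum p*x = mu.
    Solves the scalar dual for the Lagrange multiplier lam.
    Requires min(support) < mu < max(support)."""
    x = np.asarray(support, dtype=float)

    def moment_gap(lam):
        # Centering x changes p only by a constant factor, so the
        # normalized solution is unchanged; it avoids overflow.
        w = np.exp(lam * (x - x.mean()))
        p = w / w.sum()
        return p @ x - mu

    lam = brentq(moment_gap, -50.0, 50.0)  # bracket is an assumption
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum()

# Illustrative values: a die whose mean is constrained to 4.5
support = [1, 2, 3, 4, 5, 6]
p = maxent_probs(support, mu=4.5)
print(np.round(p, 4), "mean =", round(p @ np.array(support, float), 4))
```

For this classic die example the solution is the exponentially tilted uniform distribution, and the ML estimate of the tilting parameter within that exponential family recovers the same multiplier, which is the ME/ML equivalence the abstract refers to for the first type of model.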