Massive Inference and Maximum Entropy
In data analysis, maximum entropy (MaxEnt) has been used to reconstruct measures (i.e. positive, additive distributions) from limited data. The MaxEnt prior was originally derived from the “monkey model”, in which quanta of uniform intensity could appear randomly in the field of view. To avoid undue digitisation, the quanta had to be small, and this led to difficulties with the Law of Large Numbers and to unavoidable approximations in computing the posterior. A better way of avoiding digitisation is to give the quanta variable intensity with an exponential prior, that being the natural MaxEnt assignment. We call this technique “Massive Inference” (MassInf). Although the entropy formula no longer appears in the prior, MassInf results show improved quality. MassInf is also capable of assigning a simple prior for polarized images.
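To make the prior described above concrete, the following is a minimal sketch (not the authors' implementation) of drawing one image from a MassInf-style prior: each cell receives a Poisson-distributed number of quanta, and each quantum carries an intensity drawn from an exponential distribution, the MaxEnt assignment for a positive quantity of known mean. The function and parameter names are illustrative assumptions.

```python
import math
import random


def poisson(rng, lam):
    """Knuth's Poisson sampler: multiply uniforms until below exp(-lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1


def sample_massinf_image(n_cells, quanta_per_cell, mean_intensity, seed=1):
    """Draw one image from a MassInf-style prior (illustrative sketch).

    Each cell gets a Poisson number of quanta; each quantum's intensity
    is exponential with the given mean, so cell fluxes are positive and
    additive without any fixed quantum size (no digitisation artefacts).
    """
    rng = random.Random(seed)
    image = []
    for _ in range(n_cells):
        n = poisson(rng, quanta_per_cell)
        flux = sum(rng.expovariate(1.0 / mean_intensity) for _ in range(n))
        image.append(flux)
    return image


sample = sample_massinf_image(n_cells=8, quanta_per_cell=3.0, mean_intensity=2.0)
```

Because intensities vary continuously, the prior assigns non-zero probability to any positive flux, avoiding the Law-of-Large-Numbers difficulties of fixed-size quanta mentioned above.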
Key words: Maximum entropy, infinitely divisible, polarization, regularization