Abstract
We examine the problem of deconvolving blurred text. This is a task in which there is strong prior knowledge (e.g., font characteristics) that is hard to express computationally. These priors are implicit, however, in mock data for which the true image is known. When trained on such mock data, a neural network is able to learn a solution to the image deconvolution problem which takes advantage of this implicit prior knowledge. Prior knowledge of image positivity can be hard-wired into the functional architecture of the network, but we leave it to the network to learn most of the parameters of the task from the data. We do not need to tell the network about the point spread function, the intrinsic correlation function, or the noise process.
Neural networks have been compared with the optimal linear filter and with the Bayesian algorithm MemSys on a variety of problems. Once trained, the networks reconstructed images faster than MemSys, with comparable quality.
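The training setup described above can be illustrated with a toy sketch. This is our own minimal 1-D analogue, not the chapter's implementation: mock data encode the implicit "text" prior (mostly blank, sparse positive ink), the PSF and noise appear only in the data generator, and a softplus output unit hard-wires positivity into the network architecture. All names and parameter values here (`psf`, patch length, learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock data embodying the implicit prior: mostly blank, sparse positive ink.
# The Gaussian point spread function (PSF) and noise are used only to
# *generate* data -- the network is never told about them.
N, L = 2000, 15                                   # examples, patch length
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf /= psf.sum()

true = (rng.random((N, L)) < 0.15).astype(float)             # sparse ink
blur = np.stack([np.convolve(t, psf, mode="same") for t in true])
blur += 0.02 * rng.standard_normal(blur.shape)               # observation noise

X, y = blur, true[:, L // 2]      # deconvolve the central pixel of each patch

# One-hidden-layer network.  The softplus output hard-wires positivity:
# the reconstruction can never go negative, whatever the weights are.
H = 20
W1 = 0.1 * rng.standard_normal((L, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal(H);      b2 = 0.0

sigmoid  = lambda z: 1.0 / (1.0 + np.exp(-z))
softplus = lambda z: np.log1p(np.exp(z))

def forward(X):
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    return h, z, softplus(z)

lr, losses = 0.1, []
for epoch in range(300):                          # full-batch gradient descent
    h, z, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    g_z = (2.0 / N) * err * sigmoid(z)            # d softplus/dz = sigmoid(z)
    gW2, gb2 = h.T @ g_z, g_z.sum()
    g_h = np.outer(g_z, W2) * (1.0 - h ** 2)      # back through tanh layer
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The network learns the deconvolution mapping purely from the mock examples; swapping in a different PSF or noise level changes only the data generator, not the learner, which is the point the abstract makes.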
Copyright information
© 1996 Springer Science+Business Media Dordrecht
Cite this chapter
Tansley, J.E., Oldfield, M.J., MacKay, D.J.C. (1996). Neural Network Image Deconvolution. In: Heidbreder, G.R. (eds) Maximum Entropy and Bayesian Methods. Fundamental Theories of Physics, vol 62. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-8729-7_25
Print ISBN: 978-90-481-4407-5
Online ISBN: 978-94-015-8729-7