Density Networks and their Application to Protein Modelling

  • David J. C. MacKay
Conference paper
Part of the Fundamental Theories of Physics book series (FTPH, volume 70)


I define a latent variable model in the form of a neural network for which only target outputs are specified; the inputs are unspecified. Although the inputs are missing, it is still possible to train this model by placing a simple probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. The model can then discover for itself a description of the data in terms of an underlying latent variable space of lower dimensionality. I present preliminary results of the application of these models to protein data.
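The training idea in the abstract can be sketched in code. The paper's density networks use nonlinear mappings and integrate over the latent space; the sketch below is a deliberately minimal *linear* variant with Gaussian noise and a unit-Gaussian prior on the latents, fitted by alternating a MAP update of the per-datum latent vectors with a maximum-likelihood update of the weights. All names, dimensions, and the noise level here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 5-D that actually lie near a 2-D subspace.
N, D, K = 200, 5, 2
true_A = rng.normal(size=(D, K))
data = rng.normal(size=(N, K)) @ true_A.T + 0.05 * rng.normal(size=(N, D))

# Linear density network (simplified sketch): targets t = W z + noise,
# noise variance sigma2, and a simple prior z ~ N(0, I) on the unknown inputs.
W = rng.normal(scale=0.1, size=(D, K))
Z = np.zeros((N, K))          # one latent (missing-input) vector per data point
sigma2 = 0.05 ** 2

for _ in range(50):
    # Latent step: MAP latent for each point given W.
    # Minimizes ||t - W z||^2 / (2 sigma2) + ||z||^2 / 2, which is
    # ridge regression with the closed form z = (W'W/s2 + I)^-1 W't/s2.
    G = W.T @ W / sigma2 + np.eye(K)
    Z = np.linalg.solve(G, W.T @ data.T / sigma2).T
    # Weight step: maximum-likelihood W given the latents (least squares).
    W = np.linalg.lstsq(Z, data, rcond=None)[0].T

recon_err = np.sqrt(((data - Z @ W.T) ** 2).mean())
print(recon_err)  # RMS residual, small relative to the data scale
```

The alternation mirrors the "maximize the probability of the data given the parameters" recipe in the abstract, with the integral over latent inputs replaced by a point (MAP) estimate for tractability; the model recovers a 2-D latent description of the 5-D data.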







Copyright information

© Kluwer Academic Publishers 1996

Authors and Affiliations

  • David J. C. MacKay, Cavendish Laboratory, Cambridge, UK
