
Mixture Modeling to Incorporate Meaningful Constraints into Learning

  • Conference paper
Maximum Entropy and Bayesian Methods

Part of the book series: Fundamental Theories of Physics (FTPH, volume 79)


Abstract

This paper addresses the problem of incorporating prior knowledge into learning algorithms. An impressive variety of hand-crafted methods for this difficult problem has been developed by both the symbolic machine learning and neural network communities, yet no reasonably general methodology has emerged. The contribution of this paper is twofold. First, we propose a Bayesian view of domain knowledge incorporation: the designer of a learning algorithm expresses the available prior knowledge as mixture distributions over both the parameters and the models involved in a given type of learning, and the standard Bayesian learning machinery is then put to work. Second, we carry out a detailed case study, applying our framework to the extraction of logical rules from neural networks. We evaluate the approach with computational experiments on the difficult problem of protein secondary structure prediction. For this problem, we additionally propose a Dirichlet mixture model of the training data.
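
The central idea, encoding available constraints as mixture distributions over a learner's parameters, can be illustrated with a minimal sketch. The Python snippet below assumes a Gaussian mixture prior over neural network weights whose components express rule-like preferences (weights near -1, 0 or +1); the function name, component settings and the way the penalty would be combined with a data-misfit term are illustrative assumptions, not details taken from the paper.

import numpy as np

def mixture_prior_neg_log(w, means, stds, coeffs):
    # -log p(w) under a one-dimensional Gaussian mixture prior,
    # summed over all weights in w (illustrative helper, not from the paper).
    w = np.asarray(w, dtype=float)[:, None]          # shape (n_weights, 1)
    comps = coeffs / (np.sqrt(2.0 * np.pi) * stds) * np.exp(
        -0.5 * ((w - means) / stds) ** 2
    )                                                # shape (n_weights, n_components)
    return -np.sum(np.log(comps.sum(axis=1)))

# Assumed prior knowledge: weights should sit near -1, 0 or +1, i.e. connections
# are strongly inhibitory, absent, or strongly excitatory, which makes a trained
# network easier to translate into logical rules.
means  = np.array([-1.0, 0.0, 1.0])
stds   = np.array([0.2, 0.1, 0.2])
coeffs = np.array([0.25, 0.5, 0.25])

weights = np.random.randn(20)
penalty = mixture_prior_neg_log(weights, means, stds, coeffs)
print(penalty)   # added to the usual error term during MAP training

Richer domain knowledge would simply add or reshape mixture components, and the same device can in principle be applied to mixtures over models rather than over individual weights.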





Copyright information

© 1996 Springer Science+Business Media Dordrecht

About this paper

Cite this paper

Tchoumatchenko, I., Ganascia, JG. (1996). Mixture Modeling to Incorporate Meaningful Constraints into Learning. In: Hanson, K.M., Silver, R.N. (eds) Maximum Entropy and Bayesian Methods. Fundamental Theories of Physics, vol 79. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-5430-7_10


  • DOI: https://doi.org/10.1007/978-94-011-5430-7_10

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-010-6284-8

  • Online ISBN: 978-94-011-5430-7

  • eBook Packages: Springer Book Archive
