Abstract
This paper addresses the problem of incorporating prior knowledge into learning algorithms. An impressive variety of hand-crafted methods dealing with this difficult problem has been developed by both the symbolic machine learning and neural network communities; however, no reasonably general methodology has emerged yet. The contribution of this paper is twofold. First, we propose a Bayesian view of domain knowledge incorporation: the designer of the learning algorithm expresses the available prior knowledge as mixture distributions over both the parameters and the models involved in a given type of learning, and the standard Bayesian learning machinery is then put to work. Second, we carry out a detailed case study applying our framework to the extraction of logical rules from neural networks. We analyse our approach through computational experiments on the difficult problem of protein secondary structure prediction and, for this problem, we also propose a Dirichlet mixture model of the training data.
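As a minimal illustrative sketch of the mixture-prior idea (the symbols θ, π_k, φ_k, q, α_k and the number of components K are assumed notation for illustration, not taken from the paper), prior knowledge about a model class with parameters θ can be encoded as a mixture prior

p(θ) = Σ_{k=1..K} π_k p_k(θ | φ_k),

where each component p_k(θ | φ_k) captures one piece of domain knowledge and the weight π_k expresses the designer's confidence in it; learning then proceeds from the Bayesian posterior p(θ | D) ∝ p(D | θ) p(θ) given training data D. For the protein application, an analogous Dirichlet mixture model of amino-acid frequency vectors q would take the form p(q) = Σ_{k=1..K} π_k Dir(q | α_k).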