Abstract
A revision algorithm is a learning algorithm that identifies the target concept, starting from an initial concept. Such an algorithm is considered efficient if its complexity (in terms of the resource one is interested in) is polynomial in the syntactic distance between the initial and the target concept, but only polylogarithmic in the number of variables in the universe. We give efficient revision algorithms in the model of learning with equivalence and membership queries. The algorithms work in a general revision model where both deletion and addition type revision operators are allowed. In this model one of the main open problems is the efficient revision of Horn sentences. Two revision algorithms are presented for special cases of this problem: for depth-1 acyclic Horn sentences, and for definite Horn sentences with unique heads. We also present an efficient revision algorithm for threshold functions.
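The notion of syntactic distance driving the complexity bound can be illustrated with a toy sketch. The code below is not any of the paper's algorithms; it merely models an initial and a target monotone conjunction as sets of variables and counts the deletion-type and addition-type edits separating them, which is the quantity an efficient revision algorithm may depend on polynomially (while depending only polylogarithmically on the total number of variables n).

```python
# Illustrative sketch only (not the paper's algorithm): syntactic revision
# distance between two monotone conjunctions, each modeled as the set of
# variables it contains. Deletion-type edits drop a variable from the
# initial concept; addition-type edits add one.

def revision_distance(initial: set, target: set) -> int:
    """Number of single-variable deletions plus additions needed to
    turn the initial conjunction into the target conjunction."""
    deletions = initial - target   # variables present initially but not in the target
    additions = target - initial   # variables the target needs but the initial lacks
    return len(deletions) + len(additions)

# Example: revising x1 & x2 & x3 into x1 & x3 & x4
initial = {"x1", "x2", "x3"}
target = {"x1", "x3", "x4"}
print(revision_distance(initial, target))  # 2: delete x2, add x4
```

Even if the universe contains thousands of variables, the distance here is 2, so an efficient revision algorithm would be allowed a query budget polynomial in 2 but only polylogarithmic in the universe size.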
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Goldsmith, J., Sloan, R.H., Szörényi, B., Turán, G. (2004). New Revision Algorithms. In: Ben-David, S., Case, J., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 2004. Lecture Notes in Computer Science(), vol 3244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30215-5_30
DOI: https://doi.org/10.1007/978-3-540-30215-5_30
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-23356-5
Online ISBN: 978-3-540-30215-5