Abstract
As uncertainty pervades the real world, it seems obvious that the decisions we make, the conclusions we reach, and the explanations we offer are usually based on our judgements of the probability of uncertain events, such as the success of a new medical treatment or the state of the market. For example, if an agent wishes to employ the expected-utility paradigm of decision theory to guide its actions, it must assign subjective probabilities to various assertions. Less obvious, however, is the question of how to elicit such degrees of belief.
The standard knowledge representation approach claims that the agent starts its life-cycle by acquiring a pool of knowledge expressing constraints on its environment, such as properties of objects and relationships among them. This information is stored in a knowledge base using a logical representation language [1,4,8] or a graphical representation language [3,9,11]. After this period of knowledge preparation, the agent is expected to achieve optimal performance by evaluating any query with perfect accuracy. Indeed, according to the well-defined semantics of the representation language, a knowledge base provides a compact representation of a probability measure that can be used to evaluate queries. For example, if we select first-order logic as our representation language, the probability measure is induced by assigning equal likelihood to all models of the knowledge base; the degree of belief in any given query is thus the fraction of those models that are consistent with the query.
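This model-counting semantics can be illustrated with a minimal propositional sketch (the knowledge base below and its constraints are hypothetical, chosen only for illustration; the paper itself works in a relational setting):

```python
from itertools import product

# Hypothetical knowledge base over three propositional variables a, b, c.
# Illustrative constraints: (a implies b) and (b or c).
VARIABLES = ["a", "b", "c"]

def satisfies_kb(m):
    """True iff the truth assignment m is a model of the knowledge base."""
    return (not m["a"] or m["b"]) and (m["b"] or m["c"])

def degree_of_belief(query):
    """Fraction of the KB's models in which the query holds.

    Every model of the KB is assigned equal likelihood, so the degree of
    belief in a query is (#models satisfying KB and query) / (#models of KB).
    """
    assignments = [dict(zip(VARIABLES, bits))
                   for bits in product([False, True], repeat=len(VARIABLES))]
    kb_models = [m for m in assignments if satisfies_kb(m)]
    return sum(1 for m in kb_models if query(m)) / len(kb_models)

# The KB has 5 models; b holds in 4 of them.
print(degree_of_belief(lambda m: m["b"]))  # -> 0.8
```

Brute-force enumeration is of course exponential in the number of variables; this is exactly why practical systems resort to weighted model counting [17] or other compiled representations rather than explicit enumeration.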
References
Bacchus, F., Grove, A.J., Halpern, J.Y., Koller, D.: From statistical knowledge bases to degrees of belief. Artificial Intelligence 87(1-2), 75–143 (1996)
Cumby, C.M., Roth, D.: Relational representations that facilitate learning. In: 7th Int. Conf. on Principles of Knowledge Representation and Reasoning, pp. 425–434 (2000)
Jaeger, M.: Relational Bayesian networks. In: Proc. of the 13th Conference on Uncertainty in Artificial Intelligence, Providence, RI, pp. 266–273. Morgan Kaufmann, San Francisco (1997)
Kersting, K., De Raedt, L.: Adaptive Bayesian logic programs. In: 11th Int. Conference on Inductive Logic Programming, pp. 104–117 (2001)
Khardon, R., Roth, D.: Learning to reason. Journal of the ACM 44(5), 697–725 (1997)
Khardon, R., Roth, D.: Learning to reason with a restricted view. Machine Learning 35(2), 95–116 (1999)
Kivinen, J., Warmuth, M.K.: Exponentiated gradient versus gradient descent for linear predictors. Information and Computation 132(1), 1–63 (1997)
Poole, D.: Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence 64, 81–129 (1993)
Richardson, M., Domingos, P.: Markov logic networks. Machine Learning 62(1-2), 107–136 (2006)
Sang, T., Beame, P., Kautz, H.A.: Performing Bayesian inference by weighted model counting. In: 20th National Conference on Artificial Intelligence (AAAI), pp. 475–482 (2005)
Taskar, B., Abbeel, P., Koller, D.: Discriminative probabilistic models for relational data. In: Proc. of the 18th Conference in Uncertainty in Artificial Intelligence, Edmonton, Alberta, Canada, pp. 485–492. Morgan Kaufmann, San Francisco (2002)
Valiant, L.G.: Robust logics. Artificial Intelligence 117(2), 231–253 (2000)
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Koriche, F. (2008). Learning to Assign Degrees of Belief in Relational Domains. In: Blockeel, H., Ramon, J., Shavlik, J., Tadepalli, P. (eds) Inductive Logic Programming. ILP 2007. Lecture Notes in Computer Science(), vol 4894. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78469-2_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78468-5
Online ISBN: 978-3-540-78469-2