Abstract
Abduction is inference to the best explanation. It has long been studied in a wide range of contexts and is widely used to model artificial intelligence systems, such as diagnostic and plan recognition systems. Recent advances in automatic world knowledge acquisition and inference techniques make it feasible to apply abduction with large knowledge bases to real-life problems. However, less attention has been paid to automatically learning score functions, which rank candidate explanations by their plausibility. In this paper, we propose a novel approach for learning the score function of first-order logic-based weighted abduction [1] in a supervised manner. Because manually annotating full abductive explanations (i.e., sets of literals that explain the observations) is time-consuming in many cases, we propose a framework that learns the score function from partially annotated abductive explanations (i.e., subsets of those literals). More specifically, we assume that abduction is applied to a specific task in which a subset of the best explanation is associated with output labels, while the rest is treated as hidden variables. We then formulate the learning problem as discriminative structured learning with hidden variables. Our experiments on a plan recognition dataset show that our framework successfully reduces the loss at each iteration.
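The learning setup described above can be illustrated with a minimal latent-variable structured perceptron: a linear score function ranks candidate explanations, the partially annotated literals constrain which explanations count as "gold", and the remaining literals are hidden variables filled in by maximizing the current score. This is a hedged sketch of the general technique, not the paper's actual algorithm; all function names and the toy literals below are hypothetical.

```python
# Sketch of discriminative structured learning with hidden variables,
# applied to ranking candidate abductive explanations.
# Explanations are modeled as frozensets of literal strings.

def features(explanation):
    """Map an explanation to a sparse feature vector (here: one
    indicator feature per literal, a deliberately simple choice)."""
    return {lit: 1.0 for lit in explanation}

def score(w, explanation):
    """Linear score of an explanation under weight vector w."""
    return sum(w.get(f, 0.0) * v for f, v in features(explanation).items())

def update(w, candidates, gold_literals, lr=1.0):
    """One perceptron-style update.

    candidates:    all candidate explanations (frozensets of literals).
    gold_literals: the partially annotated subset of the gold
                   explanation; literals outside it are hidden
                   variables, resolved by maximizing the score.
    Returns True if the current best prediction already matches.
    """
    # Best explanation consistent with the partial annotation.
    consistent = [c for c in candidates if gold_literals <= c]
    gold = max(consistent, key=lambda c: score(w, c))
    # Best explanation overall under the current model.
    pred = max(candidates, key=lambda c: score(w, c))
    if pred != gold:
        # Reward the gold explanation's features, penalize the prediction's.
        for f, v in features(gold).items():
            w[f] = w.get(f, 0.0) + lr * v
        for f, v in features(pred).items():
            w[f] = w.get(f, 0.0) - lr * v
    return pred == gold
```

For example, if the observation "an agent went to a shop" has the candidate explanations {go(shop), rob(shop)} and {go(shop), buy(milk)}, and only the label buy(milk) is annotated, a single update already pushes the model toward the buying explanation.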
References
Hobbs, J.R., Stickel, M., Martin, P., Edwards, D.: Interpretation as abduction. Artificial Intelligence 63, 69–142 (1993)
Fellbaum, C. (ed.): WordNet: an electronic lexical database. MIT Press (1998)
Ruppenhofer, J., Ellsworth, M., Petruck, M., Johnson, C., Scheffczyk, J.: FrameNet II: Extended Theory and Practice. Technical report, Berkeley, USA (2010)
Chambers, N., Jurafsky, D.: Unsupervised Learning of Narrative Schemas and their Participants. In: ACL, pp. 602–610 (2009)
Schoenmackers, S., Davis, J., Etzioni, O., Weld, D.: Learning First-order Horn Clauses from Web Text. In: EMNLP, pp. 1088–1098 (2010)
Hovy, D., Zhang, C., Hovy, E., Penas, A.: Unsupervised discovery of domain-specific knowledge from text. In: ACL, pp. 1466–1475 (2011)
Inoue, N., Inui, K.: ILP-Based Reasoning for Weighted Abduction. In: AAAI Workshop on Plan, Activity and Intent Recognition (2011)
Ovchinnikova, E., Montazeri, N., Alexandrov, T., Hobbs, J.R., McCord, M., Mulkar-Mehta, R.: Abductive Reasoning with a Large Knowledge Base for Discourse Processing. In: IWCS, Oxford, UK, pp. 225–234 (2011)
Dagan, I., Dolan, B., Magnini, B., Roth, D.: Recognizing textual entailment: Rational, evaluation and approaches - Erratum. Natural Language Engineering 16, 105 (2010)
Kate, R.J., Mooney, R.J.: Probabilistic Abduction using Markov Logic Networks. In: PAIRS (2009)
Blythe, J., Hobbs, J.R., Domingos, P., Kate, R.J., Mooney, R.J.: Implementing Weighted Abduction in Markov Logic. In: IWCS, Oxford, UK, pp. 55–64 (2011)
Singla, P., Domingos, P.: Abductive Markov Logic for Plan Recognition. In: AAAI, pp. 1069–1075 (2011)
Richardson, M., Domingos, P.: Markov logic networks. Machine Learning 62, 107–136 (2006)
Huynh, T.N., Mooney, R.J.: Max-Margin Weight Learning for Markov Logic Networks. In: Proceedings of the International Workshop on Statistical Relational Learning, SRL 2009 (2009)
Lowd, D., Domingos, P.: Efficient Weight Learning for Markov Logic Networks. In: Kok, J.N., Koronacki, J., Lopez de Mantaras, R., Matwin, S., Mladenič, D., Skowron, A. (eds.) PKDD 2007. LNCS (LNAI), vol. 4702, pp. 200–211. Springer, Heidelberg (2007)
Charniak, E., Goldman, R.P.: A Probabilistic Model of Plan Recognition. In: AAAI, pp. 160–165 (1991)
Poole, D.: Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence 64(1), 81–129 (1993)
Raghavan, S., Mooney, R.J.: Bayesian Abductive Logic Programs. In: STARAI, pp. 82–87 (2010)
Ovchinnikova, E.: Integration of World Knowledge for Natural Language Understanding. Atlantis Press (2012)
Charniak, E., Shimony, S.E.: Probabilistic semantics for cost based abduction. In: AAAI, pp. 106–111 (1990)
Ng, H.T., Mooney, R.J.: Abductive Plan Recognition and Diagnosis: A Comprehensive Empirical Evaluation. In: KR, pp. 499–508 (1992)
Inoue, N., Inui, K.: Large-scale Cost-based Abduction in Full-fledged First-order Logic with Cutting Plane Inference. In: Proceedings of the 12th European Conference on Logics in Artificial Intelligence (2012) (to appear)
Poon, H., Domingos, P.: Joint unsupervised coreference resolution with markov logic. In: Proceedings of EMNLP, pp. 650–659 (2008)
Song, Y., Jiang, J., Zhao, W.X., Li, S., Wang, H.: Joint learning for coreference resolution with markov logic. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 1245–1254. ACL (2012)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Yamamoto, K., Inoue, N., Watanabe, Y., Okazaki, N., Inui, K. (2013). Discriminative Learning of First-Order Weighted Abduction from Partial Discourse Explanations. In: Gelbukh, A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2013. Lecture Notes in Computer Science, vol 7816. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37247-6_44
DOI: https://doi.org/10.1007/978-3-642-37247-6_44
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-37246-9
Online ISBN: 978-3-642-37247-6