Towards Probabilistic Argumentation

  • Ingrid Zukerman
Chapter

All arguments share certain key similarities: they have a goal and some support for the goal, although the form of the goal and support may vary dramatically. Human argumentation is also typically enthymematic, i.e., people produce and expect arguments that omit easily inferable information. In this chapter, we draw on the insights obtained from a decade of research to formulate requirements common to computational systems that interpret human arguments and generate their own arguments. To ground our discussion, we describe how some of these requirements are addressed by two probabilistic argumentation systems developed by the User Modeling and Natural Language (UMNL) Group at Monash University: the argument generation system nag (Nice Argument Generator) [18, 19, 20, 38, 39, 40], and the argument interpretation system bias (Bayesian Interactive Argumentation System) [7, 8, 34, 35, 36, 37].
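
Both systems reason with Bayesian networks (bias stands for Bayesian Interactive Argumentation System, and nag is described in the cited papers as a Bayesian argumentation system), so the notion of a goal supported by possibly unstated premises maps naturally onto posterior belief. The sketch below is only a minimal Python illustration of that general idea, not the actual nag or bias implementation; the node names, network structure and probabilities are invented for this example. It represents an argument's goal and its support as nodes in a tiny Bayesian network, and measures the argument's strength as the belief in the goal given the premises the arguer actually stated, even when an intermediate premise is left unstated (the enthymematic case).

    # Minimal Bayesian-network sketch of "goal + support" argumentation.
    # All names and probabilities are invented for illustration; this is not
    # the representation used by nag or bias.
    from itertools import product

    # Each node maps to (parents, CPT), where the CPT gives
    # P(node = True | parent values) for every combination of parent values.
    NETWORK = {
        "fingerprints_found": ((), {(): 0.1}),
        "suspect_at_scene": (("fingerprints_found",), {(True,): 0.8, (False,): 0.2}),
        "suspect_guilty": (("suspect_at_scene",), {(True,): 0.7, (False,): 0.1}),
    }

    def joint(assignment):
        """Probability of one complete truth assignment under the network."""
        p = 1.0
        for node, (parents, cpt) in NETWORK.items():
            p_true = cpt[tuple(assignment[parent] for parent in parents)]
            p *= p_true if assignment[node] else 1.0 - p_true
        return p

    def belief(goal, evidence):
        """P(goal = True | evidence), computed by brute-force enumeration."""
        hidden = [n for n in NETWORK if n != goal and n not in evidence]
        numerator = denominator = 0.0
        for values in product([True, False], repeat=len(hidden) + 1):
            assignment = dict(evidence)
            assignment[goal] = values[0]
            assignment.update(zip(hidden, values[1:]))
            p = joint(assignment)
            denominator += p
            if values[0]:
                numerator += p
        return numerator / denominator

    if __name__ == "__main__":
        # Enthymematic argument: "Fingerprints were found, so the suspect is
        # guilty" omits the easily inferable step "the suspect was at the scene".
        print(round(belief("suspect_guilty", {"fingerprints_found": True}), 3))

Asserting only fingerprints_found yields a belief of 0.58 in suspect_guilty: the unstated premise suspect_at_scene is filled in by probabilistic inference rather than spelled out by the arguer, which is the behaviour a computational interpreter or generator of enthymematic arguments must support.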

Keywords

Argument Generation · Argumentation Strategy · Interpretation Graph · Explanatory Extension · Probabilistic Argumentation

Acknowledgements

The author thanks her collaborators on the research described in this chapter: Sarah George, Natalie Jitnah, Kevin Korb, Richard McConachy and Michael Niemann. This research was supported in part by grants A49531227, A49927212 and DP0344013 from the Australian Research Council, and by the ARC Centre for Perceptive and Intelligent Machines in Complex Environments.

References

  1. J. R. Anderson. The Architecture of Cognition. Harvard University Press, Cambridge, Massachusetts, 1983.
  2. E. Charniak and R. Goldman. A Bayesian model of plan recognition. Artificial Intelligence, 64(1):53–79, 1993.
  3. J. Chu-Carroll and S. Carberry. Response generation in collaborative negotiation. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 136–143, 1995.
  4. J. Chu-Carroll and S. Carberry. Conflict resolution in collaborative planning dialogues. International Journal of Human Computer Studies, 6(56):969–1015, 2000.
  5. T. Dean and M. Boddy. An analysis of time-dependent planning. In AAAI88 – Proceedings of the 7th National Conference on Artificial Intelligence, pages 49–54, St. Paul, Minnesota, 1988.
  6. J. Evans. Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1989.
  7. S. George, I. Zukerman, and M. Niemann. Modeling suppositions in users’ arguments. In UM05 – Proceedings of the 10th International Conference on User Modeling, pages 19–29, Edinburgh, Scotland, 2005.
  8. S. George, I. Zukerman, and M. Niemann. Inferences, suppositions and explanatory extensions in argument interpretation. User Modeling and User-Adapted Interaction, 17(5):439–474, 2007.
  9. A. Gertner, C. Conati, and K. VanLehn. Procedural help in Andes: Generating hints using a Bayesian network student model. In AAAI98 – Proceedings of the 15th National Conference on Artificial Intelligence, pages 106–111, Madison, Wisconsin, 1998.
  10. N. Green and S. Carberry. A hybrid reasoning model for indirect answers. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 58–65, Las Cruces, New Mexico, 1994.
  11. J. R. Hobbs, M. E. Stickel, D. E. Appelt, and P. Martin. Interpretation as abduction. Artificial Intelligence, 63(1–2):69–142, 1993.
  12. H. Horacek. How to avoid explaining obvious things (without omitting central information). In ECAI94 – Proceedings of the 11th European Conference on Artificial Intelligence, pages 520–524, Amsterdam, The Netherlands, 1994.
  13. E. Horvitz and T. Paek. A computational architecture for conversation. In UM99 – Proceedings of the 7th International Conference on User Modeling, pages 201–210, Banff, Canada, 1999.
  14. E. Horvitz, H. Suermondt, and G. Cooper. Bounded conditioning: Flexible inference for decision under scarce resources. In UAI89 – Proceedings of the 1989 Workshop on Uncertainty in Artificial Intelligence, pages 182–193, Windsor, Canada, 1989.
  15. X. Huang and A. Fiedler. Proof verbalization as an application of NLG. In IJCAI97 – Proceedings of the 15th International Joint Conference on Artificial Intelligence, pages 965–970, Nagoya, Japan, 1997.
  16. D. Kahneman, P. Slovic, and A. Tversky. Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982.
  17. K. Korb and A. Nicholson. Bayesian Artificial Intelligence. Chapman & Hall/CRC, 2004.
  18. K. B. Korb, R. McConachy, and I. Zukerman. A cognitive model of argumentation. In Proceedings of the 19th Annual Conference of the Cognitive Science Society, pages 400–405, Stanford, California, 1997.
  19. R. McConachy, K. B. Korb, and I. Zukerman. Deciding what not to say: An attentional-probabilistic approach to argument presentation. In Proceedings of the 20th Annual Conference of the Cognitive Science Society, pages 669–674, Madison, Wisconsin, 1998.
  20. R. McConachy and I. Zukerman. Towards a dialogue capability in a Bayesian argumentation system. ETAI 3 – Electronic Transactions of Artificial Intelligence (Section D), pages 89–124, 1999.
  21. S. Mehl. Forward inferences in text generation. In ECAI94 – Proceedings of the 11th European Conference on Artificial Intelligence, pages 525–529, Amsterdam, The Netherlands, 1994.
  22. H. Ng and R. Mooney. On the role of coherence in abductive explanation. In AAAI90 – Proceedings of the 8th National Conference on Artificial Intelligence, pages 337–342, Boston, Massachusetts, 1990.
  23. S. H. Nielsen and S. Parsons. An application of formal argumentation: Fusing Bayesian networks in multi-agent systems. Artificial Intelligence, 171:754–775, 2007.
  24. R. Nisbett, E. Borgida, R. Crandall, and H. Reed. Popular induction: Information is not necessarily informative. In J. Carroll and J. Payne, editors, Cognition and Social Behavior, pages 113–133. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1976.
  25. N. Oren, T. Norman, and A. Preece. Subjective logic and arguing with evidence. Artificial Intelligence, 171:838–854, 2007.
  26. J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers, San Mateo, California, 1988.
  27. A. Quilici. Detecting and responding to plan-oriented misconceptions. In A. Kobsa and W. Wahlster, editors, User Models in Dialog Systems, pages 108–132. Springer-Verlag, 1989.
  28. C. Reed and D. Long. Content ordering in the generation of persuasive discourse. In IJCAI97 – Proceedings of the 15th International Joint Conference on Artificial Intelligence, pages 1022–1027, Nagoya, Japan, 1997.
  29. G. Rowe and C. Reed. Argument diagramming: The Araucaria project. In A. Okada, S. Buckingham Shum, and A. Sherborne, editors, Knowledge Cartography, pages 163–181. Springer, 2008.
  30. R. H. Thomason, J. R. Hobbs, and J. D. Moore. Communicative goals. In Proceedings of ECAI96 Workshop – Gaps and Bridges: New Directions in Planning and NLG, pages 7–12, Budapest, Hungary, 1996.
  31. T. van Gelder. Teaching critical thinking: Some lessons from cognitive science. College Teaching, 45(1):1–6, 2005.
  32. G. Vreeswijk. iacas: An interactive argumentation system. Technical Report CS 94-03, Department of Computer Science, University of Limburg, 1994.
  33. C. Wallace. Statistical and Inductive Inference by Minimum Message Length. Springer, Berlin, Germany, 2005.
  34. I. Zukerman. An integrated approach for generating arguments and rebuttals and understanding rejoinders. In UM01 – Proceedings of the 8th International Conference on User Modeling, pages 84–94, Sonthofen, Germany, 2001.
  35. I. Zukerman. Discourse interpretation as model selection – a probabilistic approach. In B. Bouchon-Meunier, C. Marsala, M. Rifqi, and R. Yager, editors, Uncertainty and Intelligent Information Systems, pages 61–73. World Scientific, 2008.
  36. I. Zukerman and S. George. A probabilistic approach for argument interpretation. User Modeling and User-Adapted Interaction, Special Issue on Language-Based Interaction, 15(1–2):5–53, 2005.
  37. I. Zukerman, S. George, and M. George. Incorporating a user model into an information theoretic framework for argument interpretation. In UM03 – Proceedings of the 9th International Conference on User Modeling, pages 106–116, Johnstown, Pennsylvania, 2003.
  38. I. Zukerman, R. McConachy, and K. B. Korb. Bayesian reasoning in an abductive mechanism for argument generation and analysis. In AAAI98 – Proceedings of the 15th National Conference on Artificial Intelligence, pages 833–838, Madison, Wisconsin, 1998.
  39. I. Zukerman, R. McConachy, and K. B. Korb. Using argumentation strategies in automated argument generation. In INLG’2000 – Proceedings of the 1st International Conference on Natural Language Generation, pages 55–62, Mitzpe Ramon, Israel, 2000.
  40. I. Zukerman, R. McConachy, K. B. Korb, and D. A. Pickett. Exploratory interaction with a Bayesian argumentation system. In IJCAI99 – Proceedings of the 16th International Joint Conference on Artificial Intelligence, pages 1294–1299, Stockholm, Sweden, 1999.

Copyright information

© Springer-Verlag US 2009

Authors and Affiliations

  1. Faculty of Information Technology, Monash University, Clayton, Australia