
Forming user models by understanding user feedback

Published in: User Modeling and User-Adapted Interaction

Abstract

An intelligent advisory system should be able to provide explanatory responses that correct mistaken user beliefs. This task requires the ability to form a model of the user's relevant beliefs and to understand and address feedback from users who are not satisfied with its advice. This paper presents a method by which a detailed model of the user's relevant domain-specific, plan-oriented beliefs can gradually be formed by trying to understand user feedback in an ongoing advisory dialog. In particular, we consider the problem of constructing an automated advisor capable of participating in a dialog about which UNIX command should be used to perform a particular task. We show how to construct a model of a UNIX user's beliefs about UNIX commands from several different classes of user feedback. Unlike other approaches to inferring user beliefs, ours focuses on inferring only the small set of beliefs likely to have contributed to the user's misconception. And unlike other approaches to providing advice, we focus on the task of understanding the user's descriptions of perceived problems with that advice.
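The approach described above can be illustrated with a toy sketch. The following Python fragment is not the paper's actual algorithm; the feedback classes, the (command, property, holds?) belief representation, and all names are illustrative assumptions. It only shows the general idea of attributing a small number of relevant beliefs to the user based on the kind of feedback given about a piece of advice.

```python
# Toy sketch (illustrative only): an advisor records the user beliefs
# implied by different classes of feedback about its advice.
from dataclasses import dataclass, field


@dataclass
class UserModel:
    # Beliefs are (command, property, holds?) triples attributed to the user.
    beliefs: set = field(default_factory=set)

    def attribute(self, command, prop, holds):
        self.beliefs.add((command, prop, holds))


def understand_feedback(model, advised_cmd, feedback):
    """Infer only the beliefs relevant to the user's stated problem."""
    kind, prop = feedback
    if kind == "denies-effect":
        # e.g. "But rm doesn't remove directories": the user is taken to
        # believe the advised command lacks the assumed property.
        model.attribute(advised_cmd, prop, False)
    elif kind == "reports-side-effect":
        # e.g. "rm also deleted my backup": the user is taken to believe
        # an additional, unwanted effect holds.
        model.attribute(advised_cmd, prop, True)


model = UserModel()
understand_feedback(model, "rm", ("denies-effect", "removes-directories"))
print(model.beliefs)
```

Note that only beliefs implicated by the feedback are added; the model is built up gradually across the dialog rather than assembled wholesale in advance.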




Cite this article

Quilici, A. Forming user models by understanding user feedback. User Model User-Adap Inter 3, 321–358 (1994). https://doi.org/10.1007/BF01099299
