Exploiting user feedback to compensate for the unreliability of user models


Abstract

Natural language is a powerful medium for interacting with users, and sophisticated computer systems using natural language are becoming more prevalent. Just as human speakers show an essential, inbuilt responsiveness to their hearers, computer systems must “tailor” their utterances to users. Recognizing this, researchers devised user models and strategies for exploiting them in order to enable systems to produce the “best” answer for a particular user.

Because these efforts were largely devoted to investigating how a user model could be exploited to produce better responses, systems employing them typically assumed that a detailed and correct model of the user was available a priori, and that the information needed to generate appropriate responses was included in that model. However, in practice, the completeness and accuracy of a user model cannot be guaranteed. Thus, unless systems can compensate for incorrect or incomplete user models, the impracticality of building user models will prevent much of the work on tailoring from being successfully applied in real systems. In this paper, we argue that one way for a system to compensate for an unreliable user model is to be able to react to feedback from users about the suitability of the texts it produces. We also discuss how such a capability can actually alleviate some of the burden now placed on user modeling. Finally, we present a text generation system that employs whatever information is available in its user model in an attempt to produce satisfactory texts, but is also capable of responding to the user's follow-up questions about the texts it produces.
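To make the idea concrete, the following is a minimal, hypothetical sketch (in Python; it is not the authors' implementation) of a generator that exploits whatever its user model happens to contain, falls back to cautious defaults where the model is silent, and reacts to a follow-up such as “Huh?” by revising the model and re-explaining. The names UserModel, describe, and handle_followup and the toy definitions table are illustrative assumptions, not part of the system described in this paper.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class UserModel:
    """Partial, possibly inaccurate beliefs about the user; anything may be missing."""
    known_concepts: Set[str] = field(default_factory=set)
    expertise: Optional[str] = None  # e.g. "novice" or "expert"; None = unknown

def describe(concept: str, definitions: Dict[str, Dict[str, str]], um: UserModel) -> str:
    """Exploit whatever the user model offers; default conservatively where it is silent."""
    level = um.expertise or "novice"              # assume novice when expertise is unknown
    text = definitions[concept][level]
    # Ground the description in a concept the model claims the user already knows, if any.
    familiar = [c for c in um.known_concepts if c in definitions and c != concept]
    if familiar:
        text += f" (It is similar to {familiar[0]}, which you already know.)"
    return text

def handle_followup(concept: str, definitions: Dict[str, Dict[str, str]], um: UserModel) -> str:
    """React to 'Huh?'-style feedback: the user's reaction overrides the model's prior guess."""
    um.expertise = "novice"                       # revise the evidently wrong user model
    return describe(concept, definitions, um)

# Toy usage: the a-priori model wrongly labels the user an expert; feedback corrects it.
definitions = {
    "modem": {
        "novice": "A modem lets two computers exchange data over an ordinary phone line.",
        "expert": "A modem modulates a digital bit stream onto an analog carrier signal.",
    }
}
model = UserModel(expertise="expert")
print(describe("modem", definitions, model))         # tailored to the unreliable model
print(handle_followup("modem", definitions, model))  # simpler re-explanation after "Huh?"
```

The sketch is meant only to show that reacting to feedback lets a system recover even when its initial user model is incomplete or wrong, which is the capability argued for above.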



Author information


Dr. Johanna D. Moore holds interdisciplinary appointments as an Assistant Professor of Computer Science and as a Research Scientist at the Learning Research and Development Center at the University of Pittsburgh. Her research interests include natural language generation, discourse, expert system explanation, human-computer interaction, user modeling, intelligent tutoring systems, and knowledge representation. She received her MS and PhD in Computer Science from the University of California at Los Angeles, where she also received her BS in Mathematics and Computer Science. She is a member of the Cognitive Science Society, ACL, AAAI, ACM, IEEE, and Phi Beta Kappa. Readers can reach Dr. Moore at the Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260.

Dr. Cecile Paris is the project leader of the Explainable Expert Systems project at USC's Information Sciences Institute. She received her PhD and MS in Computer Science from Columbia University (New York) and her bachelor's degree from the University of California at Berkeley. Her research interests include natural language generation and user modeling, discourse, expert system explanation, human-computer interaction, intelligent tutoring systems, machine learning, and knowledge acquisition. At Columbia University, she developed a natural language generation system capable of producing multi-sentential texts tailored to the user's level of expertise about the domain. At ISI, she has been involved in designing a flexible explanation facility that supports dialogue for an expert system shell. Dr. Paris is a member of the Association for Computational Linguistics (ACL), the American Association for Artificial Intelligence (AAAI), the Cognitive Science Society, ACM, IEEE, and Phi Kappa Phi. Readers can reach Dr. Paris at USC/ISI, 4676 Admiralty Way, Marina Del Rey, California 90292.


Cite this article

Moore, J.D., Paris, C.L. Exploiting user feedback to compensate for the unreliability of user models. User Model User-Adap Inter 2, 287–330 (1992). https://doi.org/10.1007/BF01101108
