Natural Language Communication Between Human and Artificial Agents

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4088)


The use of a complex form of language to communicate with others, to exchange ideas and thoughts, to express statements, wishes, goals, and plans, and to issue questions, commands, and instructions, is one of the most important and distinguishing characteristics of humankind. If artificial agents are to participate co-operatively in human-agent interaction, they must be able, to a certain degree, to understand and interpret natural language, and to translate the commands and questions of a human user into a suitable set of their own primitive actions. In this paper, we outline a general framework and architecture for the development of natural language interfaces for artificial agents. We focus on task-related communication, in a scenario where the artificial agent performs actions in a co-operative work setting with human partners, who serve as instructors or supervisors. The main aim of this research is to provide a consistent and coherent formal framework for the representation of actions, which the artificial agent can use for planning, reasoning, and action execution, and which at the same time can serve as a basis for analyzing and generating the verbal expressions, i.e. commands, instructions, and queries, issued by the human user. The suggested framework is derived from formal methods in knowledge representation, in particular description logics, and incorporates semantic and ontological descriptive elements from linguistics, computer science, and philosophy. Several prototypical agent systems have been developed based on this framework, including simulated agents, such as an interior design system and a household robot, and a speech-controlled toy car as an example of a physical agent.
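The kind of action representation the abstract describes, in which description-logic-style action concepts are organized in a taxonomy and a user's command is mapped onto a suitable action, can be illustrated with a toy sketch. All names here (`ActionConcept`, `Furniture`, the role and effect labels) are our own illustrative assumptions, not the paper's actual formalism:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a taxonomy of action concepts with typed role
# restrictions, loosely in the spirit of a description-logic action hierarchy.
# The concrete classes and names are assumptions, not the authors' formalism.

class Entity:
    def __init__(self, name: str):
        self.name = name

class Furniture(Entity): pass
class Location(Entity): pass

@dataclass
class ActionConcept:
    name: str
    parent: "ActionConcept | None" = None
    roles: dict = field(default_factory=dict)   # role name -> required filler type
    effects: frozenset = frozenset()            # symbolic effect labels

    def all_roles(self) -> dict:
        # inherit role restrictions from ancestors, specializing where redefined
        merged = dict(self.parent.all_roles()) if self.parent else {}
        merged.update(self.roles)
        return merged

    def depth(self) -> int:
        return 0 if self.parent is None else 1 + self.parent.depth()

    def matches(self, fillers: dict) -> bool:
        # a concept applies when every required role is filled by an
        # instance of the required type
        return all(role in fillers and isinstance(fillers[role], typ)
                   for role, typ in self.all_roles().items())

def most_specific(concepts, fillers):
    """Map parsed command fillers onto the deepest matching action concept."""
    candidates = [c for c in concepts if c.matches(fillers)]
    return max(candidates, key=ActionConcept.depth, default=None)

# a generic "move" action and a specialization for furniture
move = ActionConcept("move",
                     roles={"object": Entity, "destination": Location},
                     effects=frozenset({"at(object, destination)"}))
move_furniture = ActionConcept("move-furniture", parent=move,
                               roles={"object": Furniture})

# role fillers as they might come from parsing "put the couch near the window"
fillers = {"object": Furniture("couch"), "destination": Location("window")}
chosen = most_specific([move, move_furniture], fillers)
print(chosen.name)  # move-furniture
```

Planning and reasoning would then operate over the matched concept's preconditions and effects, while the same role structure guides the analysis of the user's verbal command.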


Keywords: Natural Language · Knowledge Representation · Agent System · Description Logic · Artificial Agent





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

Department of Computer Science, University of Manitoba, Winnipeg, Canada
