
What Sort of Architecture is Required for a Human-Like Agent?

  • Aaron Sloman
Part of the Applied Logic Series (APLS, volume 14)

Abstract

This paper is about how to give human-like powers to complete agents. For this, the most important design choice concerns the overall architecture. Questions about detailed mechanisms, forms of representation, inference capabilities, knowledge, etc. are best addressed in the context of a global architecture in which different design decisions are linked. Such a design assembles various kinds of functionality into a complete, coherent working system containing many concurrent processes that are partly independent, partly mutually supportive, and partly potentially incompatible, addressing a multitude of issues on different time scales, and including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on earth shows.
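The kind of architecture the abstract describes, with several asynchronous motive generators running concurrently and feeding a management process that must arbitrate among partly incompatible motives on different time scales, can be given a minimal sketch in code. This is a hypothetical illustration only, not Sloman's design and not his SIM_AGENT toolkit; every name here (`Motive`, `make_generator`, `deliberate`, the urgency scheme) is invented for the example.

```python
import queue
import threading
import time

class Motive:
    """A generated motive with a rough urgency level (illustrative only)."""
    def __init__(self, description, urgency):
        self.description = description
        self.urgency = urgency

def make_generator(motive_queue, description, urgency, period, stop_event):
    """Return a thread that asynchronously posts a motive every `period` seconds,
    independently of whatever the deliberative layer is doing."""
    def run():
        while not stop_event.is_set():
            motive_queue.put(Motive(description, urgency))
            stop_event.wait(period)
    return threading.Thread(target=run, daemon=True)

def deliberate(motive_queue, duration):
    """Crude stand-in for motive management: for `duration` seconds, collect
    whatever motives are pending and always act on the most urgent one first."""
    handled = []
    pending = []
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        try:
            # Block briefly for one motive, then drain any backlog.
            pending.append(motive_queue.get(timeout=0.05))
            while True:
                pending.append(motive_queue.get_nowait())
        except queue.Empty:
            pass
        if pending:
            pending.sort(key=lambda m: -m.urgency)
            handled.append(pending.pop(0).description)
    return handled

if __name__ == "__main__":
    q = queue.Queue()
    stop = threading.Event()
    generators = [
        make_generator(q, "recharge", urgency=5, period=0.1, stop_event=stop),
        make_generator(q, "explore", urgency=1, period=0.1, stop_event=stop),
    ]
    for g in generators:
        g.start()
    acted_on = deliberate(q, duration=0.3)
    stop.set()
    print(acted_on)
```

The point of the sketch is structural: the generators are concurrent and asynchronous with respect to deliberation, and the deliberative layer must cope with motives arriving at arbitrary times, exactly the kind of interaction the paper argues should be fixed by a global architecture rather than by isolated mechanism choices.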

Keywords

Design Space · Explore Design Space · Niche Space · Turing Test · Global Architecture
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. J. Bates, A. B. Loyall, and W. S. Reilly. Broad agents. Paper presented at AAAI Spring Symposium on Integrated Intelligent Architectures, 1991. (Available in SIGART Bulletin, 2(4), Aug. 1991, pp. 38–40.)
  2. L. P. Beaudoin. Goal processing in autonomous agents. PhD thesis, School of Computer Science, The University of Birmingham, 1994.
  3. J. Cohen and I. Stewart. The collapse of chaos. Penguin Books, New York, 1994.
  4. J. McCarthy. Making robots conscious of their mental states. In AAAI Spring Symposium on Representing Mental States and Mechanisms, 1995.
  5. H. A. Simon. Motivational and emotional controls of cognition, 1967. Reprinted in Models of Thought, Yale University Press, 29–38, 1979.
  6. A. Sloman. Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence. In Proc 2nd IJCAI, London, 1971. Reprinted in Artificial Intelligence, pp. 209–225, 1971, and in J. M. Nicholas, ed., Images, Perception, and Knowledge. Dordrecht-Holland: Reidel, 1977.
  7. A. Sloman. Motives, mechanisms and emotions. Cognition and Emotion, 1(3):217–234, 1987. Reprinted in M. A. Boden (ed), The Philosophy of Artificial Intelligence, `Oxford Readings in Philosophy' Series, Oxford University Press, 231–247, 1990.
  8. A. Sloman. On designing a visual system (towards a Gibsonian computational model of vision). Journal of Experimental and Theoretical AI, 1(4):289–337, 1989.
  9. A. Sloman. Prolegomena to a theory of communication and affect. In A. Ortony, J. Slack, and O. Stock, editors, Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues, pages 229–260. Springer, Heidelberg, Germany, 1992.
  10. A. Sloman. Prospects for AI as the general science of intelligence. In A. Sloman, D. Hogg, G. Humphreys, D. Partridge, and A. Ramsay, editors, Prospects for Artificial Intelligence, pages 1–10. IOS Press, Amsterdam, 1993.
  11. A. Sloman. Semantics in an intelligent control system. Philosophical Transactions of the Royal Society: Physical Sciences and Engineering, 349(1689):43–58, 1994.
  12. A. Sloman. Exploring design space and niche space. In Proceedings 5th Scandinavian Conference on AI, Trondheim, 1995. IOS Press, Amsterdam.
  13. A. Sloman. Musings on the roles of logical and non-logical representations in intelligence. In Janice Glasgow, Hari Narayanan, and B. Chandrasekaran, editors, Diagrammatic Reasoning: Computational and Cognitive Perspectives, pages 7–33. MIT Press, 1995.
  14. A. Sloman. What sort of control system is able to have a personality, 1995. Available at URL ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Aaron.Sloman.vienna.ps.Z. (Presented at Workshop on Designing Personalities for Synthetic Actors, Vienna, June 1995.)
  15. A. Sloman and M. Croucher. Why robots will have emotions. In Proc 7th Int. Joint Conf. on AI, Vancouver, 1981.
  16. A. Sloman and R. Poli. SIM_AGENT: A toolkit for exploring agent designs. In Mike Wooldridge, Joerg Mueller, and Milind Tambe, editors, Intelligent Agents Vol II (ATAL-95), pages 392–407. Springer-Verlag, 1996.
  17. I. P. Wright, A. Sloman, and L. P. Beaudoin. Towards a design-based analysis of emotional episodes. Philosophy, Psychiatry, and Psychology, 3(2):101–126, 1996.

Copyright information

© Springer Science+Business Media Dordrecht 1999

Authors and Affiliations

  • Aaron Sloman, University of Birmingham, UK
