
Automating Human Information Agents

  • S. Franklin
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 98)

Abstract

We describe a software agent technology capable of automating the entire functionality of human information agents such as insurance agents, travel agents, bank loan officers, and many others. This functionality includes negotiating in natural language, accessing databases, adhering to numerous policies, and producing products. The technology is based on a psychological theory of consciousness implemented with modules for perception, associative memory, action selection, deliberation, and so on. The case study herein describes an agent whose task is to assign new billets to sailors at the end of their current duty assignment.
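
The abstract describes an agent built from cooperating modules for perception, associative memory, action selection, and deliberation. The sketch below is only a minimal illustration of that modular decomposition, not the chapter's actual implementation; every class, method, and rule in it (Perception, AssociativeMemory, ActionSelection, the billet-related action name) is a hypothetical placeholder chosen for this example.

    # Illustrative sketch only: a toy modular agent loop in the spirit of the
    # architecture described in the abstract. All names are hypothetical.

    class Perception:
        def sense(self, message: str) -> dict:
            # Turn an incoming natural-language message into a simple percept.
            return {"text": message, "tokens": message.lower().split()}

    class AssociativeMemory:
        def __init__(self):
            self.traces: list[dict] = []

        def recall(self, percept: dict) -> list[dict]:
            # Crude cue-based recall: return stored percepts sharing any token.
            cues = set(percept["tokens"])
            return [t for t in self.traces if cues & set(t["tokens"])]

        def store(self, percept: dict) -> None:
            self.traces.append(percept)

    class ActionSelection:
        def choose(self, percept: dict, recalled: list[dict]) -> str:
            # Pick the next action from the current percept plus recalled context.
            if "billet" in percept["tokens"]:
                return "query_job_database"
            return "compose_reply"

    class Agent:
        def __init__(self):
            self.perception = Perception()
            self.memory = AssociativeMemory()
            self.actions = ActionSelection()

        def step(self, message: str) -> str:
            percept = self.perception.sense(message)
            recalled = self.memory.recall(percept)
            action = self.actions.choose(percept, recalled)
            self.memory.store(percept)
            return action

    if __name__ == "__main__":
        agent = Agent()
        # Example interaction: a sailor's request routed to a database query action.
        print(agent.step("Please find me a new billet near Norfolk"))

The point of the sketch is only the control flow: each incoming message passes through perception, cues associative memory, and feeds action selection, which is the division of labor the chapter's architecture makes explicit.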

Keywords

Voluntary Action, Autonomous Agent, Associative Memory, Action Selection, Software Agent

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • S. Franklin

