References

  • Albus, J.S. (1981). Brain, behavior, and robotics. Byte Book, Subsidiary of McGraw-Hill, Chapter 6, pp. 139–179.

  • Brooks, R.A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3–15.

  • CoopIS (1996). Proceedings of the First IFCIS International Conference on Cooperative Information Systems. IEEE-CS Press.

  • Bull, L., & Fogarty, T. (1996). Evolutionary computing in cooperative multiagent environments. In (Sen, 1996, pp. 22–27).

  • Carley, K.M., & Prietula, M.J. (Eds.) (1994). Computational organization theory. Lawrence Erlbaum Associates.

  • Drogoul, A., Ferber, J., Corbara, B., & Fresneau, D. (1991). A behavioral simulation model for the study of emergent social structures. In Proceedings of the First European Conference on Artificial Life.

  • Finin, T., McKay, D., Fritzson, R., & McEntire, R. (1993). KQML: An information and knowledge exchange protocol. In International Conference on Building and Sharing of Very Large-Scale Knowledge Bases.

  • Galliers, J.R. (1991). Modeling autonomous belief revision in dialogue. In Y. Demazeau & J.-P. Müller (Eds.), Decentralized Artificial Intelligence 2. Elsevier.

  • Gaspar, G. (1991). Communication and belief changes in a society of agents: Towards a formal model of an autonomous agent. In Y. Demazeau & J.-P. Müller (Eds.), Decentralized Artificial Intelligence 2. Elsevier.

  • Grefenstette, J., & Daley, R. (1996). Methods for competitive and cooperative co-evolution. In (Sen, 1996, pp. 45–50).

  • Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.

  • Haynes, T., Lau, K., & Sen, S. (1996). Learning cases to complement rules for conflict resolution in multiagent systems. In (Sen, 1996, pp. 51–56).

  • Haynes, T., & Sen, S. (1996). Evolving behavioral strategies in predators and prey. In (Weiß & Sen, 1996, pp. 113–126).

  • Ishida, T., & Korf, R.E. (1991). Moving target search. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (pp. 204–210).

  • Lin, L.-J. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8, 293–321.

  • Jennings, N.R., Malheiro, B., & Oliveira, E. (1994). Belief revision in multiagent systems. In Proceedings of the Eleventh European Conference on Artificial Intelligence (pp. 294–298).

  • Kitano, H., Asada, M., Kuniyoshi, Y., Noda, I., & Osawa, E. (1995). RoboCup: The robot world cup initiative. In Working Notes of the IJCAI-95 Workshop on Entertainment and AI/Alife (pp. 19–24).

  • Maes, P., & Kozierok, R. (1993). Learning interface agents. In Proceedings of the Eleventh National Conference on Artificial Intelligence (pp. 459–465).

  • Minsky, M. (1961). Steps toward artificial intelligence. Proceedings of the IRE, 49(1), 8–30. Reprinted in E.A. Feigenbaum & J. Feldman (Eds.) (1963), Computers and thought (pp. 406–450), McGraw-Hill.

  • Mitchell, T.M. (1978). Version spaces: An approach to concept learning. Ph.D. Thesis. Computer Science Department, Stanford University.

  • Nagendra Prasad, M.V., Lesser, V.R., & Lander, S. (1995). Learning organizational roles in a heterogeneous multi-agent system. In Proceedings of the Second International Conference on Multiagent Systems (pp. 291–298).

  • Nagendra Prasad, M.V., Lesser, V.R., & Lander, S. (1996). On reasoning and retrieval in distributed case bases. Journal of Visual Communication and Image Representation (Special Issue on Digital Libraries), 7(1), 74–87.

  • Newell, A., & Simon, H.A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113–126.

  • Pollack, M.E., & Ringuette, M. (1990). Introducing the Tileworld: Experimentally evaluating agent architectures. In Proceedings of the National Conference on Artificial Intelligence (pp. 183–189).

  • Rosen, R. (1985). Anticipatory systems — Philosophical, mathematical and methodological foundations. Pergamon Press.

  • Sen, S. (Ed.) (1996). Adaptation, coevolution and learning in multiagent systems. Papers from the 1996 AAAI Symposium. Technical Report SS-96-01. AAAI Press.

  • Smith, R.G. (1980). The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, C-29(12), 1104–1113.

  • Stone, P., & Veloso, M. (1996). Towards collaborative and adversarial learning: A case study in robotic soccer. In (Sen, 1996, pp. 88–92).

  • Sullivan, J.W., & Tyler, S.W. (Eds.) (1990). Intelligent user interfaces. ACM Press.

  • Tan, M. (1993). Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning (pp. 330–337).

  • Weiß, G. (1994). Some studies in distributed machine learning and organizational design. Technical Report FKI-189-94. Institut für Informatik, Technische Universität München.

  • Weiß, G. (1996). Adaptation and learning in multi-agent systems: Some remarks and a bibliography. In (Weiß & Sen, 1996, pp. 1–21).

  • Weiß, G., & Sen, S. (Eds.) (1996). Adaption and learning in multi-agent systems. Lecture Notes in Artificial Intelligence, Vol. 1042. Springer-Verlag.

  • Whitehead, S., et al. (1993). Learning multiple goal behavior via task decomposition and dynamic policy merging. In J. H. Connell et al. (Eds.), Robot learning. Academic Press.

Editor information

Gerhard Weiß

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Weiß, G. (1997). Reader's guide. In: Weiß, G. (Ed.), Distributed Artificial Intelligence Meets Machine Learning: Learning in Multi-Agent Environments (LDAIS 1996, LIOME 1996). Lecture Notes in Computer Science, vol. 1221. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62934-3_37

  • DOI: https://doi.org/10.1007/3-540-62934-3_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62934-4

  • Online ISBN: 978-3-540-69050-4
