
On Behavior Classification in Adversarial Environments

  • Patrick Riley
  • Manuela Veloso
Chapter

Abstract

For robotic systems to succeed in domains where other agents may interfere with the accomplishment of their goals, the agents must be able to adapt to the opponents' behavior. The more quickly the agents can respond to a new situation, the better they will perform. We present an approach to adaptation that relies on classifying the current adversary into one of a set of predefined adversary classes. For feature extraction, we present a windowing technique that abstracts features which are informative but not overly complicated. The feature extraction and classification steps are fully implemented in the domain of simulated robotic soccer, and experimental results are presented.
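
As a rough illustration of the idea described above, the sketch below extracts simple aggregate features over fixed-length windows of observations and feeds them to a generic decision-tree classifier (the chapter itself works in simulated robotic soccer with a C4.5 learner). The observation fields, window length, and helper names here are hypothetical placeholders, not the authors' implementation.

# Illustrative sketch only: windowed feature extraction over a stream of
# observations, followed by classification of the current adversary into one
# of several predefined classes. All field and variable names are hypothetical.
from dataclasses import dataclass
from typing import List, Sequence
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for a C4.5-style tree

@dataclass
class Observation:
    ball_x: float
    ball_y: float
    opponent_xs: List[float]
    opponent_ys: List[float]

def window_features(obs: Sequence[Observation], window_len: int) -> np.ndarray:
    """Summarize each fixed-length window with simple aggregate statistics."""
    feats = []
    for start in range(0, len(obs) - window_len + 1, window_len):
        win = obs[start:start + window_len]
        ball = np.array([[o.ball_x, o.ball_y] for o in win])
        opp = np.array([[np.mean(o.opponent_xs), np.mean(o.opponent_ys)] for o in win])
        # Means and standard deviations of ball and opponent positions per window.
        feats.append(np.concatenate([ball.mean(axis=0), ball.std(axis=0),
                                     opp.mean(axis=0), opp.std(axis=0)]))
    return np.asarray(feats)

# Training: windows from logged games, labeled with the adversary class that
# generated them (hypothetical data).
#   X_train = window_features(logged_obs, 50)
#   clf = DecisionTreeClassifier().fit(X_train, labels)
# Online use: classify the current opponent from its most recent window.
#   predicted_class = clf.predict(window_features(recent_obs, 50)[-1:])

Windowed aggregates keep the feature space small while still capturing the positional tendencies that distinguish adversary classes; shorter windows let the agents respond to a new opponent sooner, at the cost of noisier features.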

Keywords

Feature Extraction, Multiagent System, Window Length, Complex Domain, Target Configuration



Copyright information

© Springer-Verlag Tokyo 2000

Authors and Affiliations

  • Patrick Riley 1
  • Manuela Veloso 1
  1. Computer Science Department, Carnegie Mellon University, Pittsburgh, USA
