An Empirical Study of Coaching

  • Patrick Riley
  • Manuela Veloso
  • Gal Kaminka


Team coaching in adversarial domains consists of providing advice to distributed players so that the team responds effectively to an adversary. Our research on this problem has shown that creating an autonomous coach is a challenging and fascinating endeavor. This paper reports on our extensive empirical study of coaching in simulated robotic soccer. Our coach can be viewed as a special agent on our team; however, it is also capable of coaching teams other than our own, as it uses a recently developed universal coach language for simulated robotic soccer with a set of predefined primitives. We present three methods that extract models from past games and respond to an ongoing game: (i) formation learning, in which the coach captures a team's formation by analyzing logs of past play; (ii) set-play planning, in which the coach uses a model of the adversary to direct the players to execute a specific plan; and (iii) passing-rule learning, in which the coach learns clusters in space and conditions that define passing behaviors. We discuss these techniques in the context of experimental results with different teams. We show that the techniques can impact the performance of teams, and our results further illustrate the complexity of the coaching problem.
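The formation-learning method described above reduces logs of past play to a positional model of a team. The paper's actual algorithm is not detailed in this abstract; the following is a minimal sketch of the general idea under assumed names and an assumed log format (a sequence of per-cycle `(player_id, x, y)` observations), reducing each player's logged positions to a mean "home position":

```python
# Hedged sketch of formation learning from positional logs.
# Assumptions (not from the paper): the log is an iterable of
# (player_id, x, y) tuples, and a formation is summarized as each
# player's mean observed position on the field.

from collections import defaultdict

def learn_formation(log):
    """Return {player_id: (mean_x, mean_y)} from logged observations."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # pid -> [sum_x, sum_y, count]
    for pid, x, y in log:
        entry = sums[pid]
        entry[0] += x
        entry[1] += y
        entry[2] += 1
    return {pid: (sx / n, sy / n) for pid, (sx, sy, n) in sums.items()}

# Example: two players observed over two cycles each.
log = [(1, 0.0, 0.0), (1, 2.0, 2.0), (2, 10.0, -4.0), (2, 12.0, -6.0)]
print(learn_formation(log))  # {1: (1.0, 1.0), 2: (11.0, -5.0)}
```

A richer model along the lines the abstract suggests would replace the single mean per player with spatial clusters (e.g., conditioned on ball position), but the averaging step above conveys the log-to-model reduction.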







Copyright information

© Springer-Verlag Tokyo 2002

Authors and Affiliations

  • Patrick Riley (Carnegie Mellon University, Pittsburgh, USA)
  • Manuela Veloso (Carnegie Mellon University, Pittsburgh, USA)
  • Gal Kaminka (Carnegie Mellon University, Pittsburgh, USA)
