The Orchestration of Behaviours using Resources and Priority Levels

  • F. Lamarche
  • S. Donikian
Part of the Eurographics book series (EUROGRAPH)


Reproducing everyday behaviours requires the ability to schedule behaviours under resource constraints (body parts, for example) and priority constraints (intentions or physiological parameters). A simple approach is to declare behaviours that use the same resources mutually exclusive. This is not sufficient to achieve realism because, in real life, humans combine behaviours at a much finer grain. All day long, humans mix different behaviours, for example reading a newspaper while drinking a coffee and smoking a cigarette. If all behaviours sharing resources were mutually exclusive, an agent could not reproduce this example unless a dedicated combined behaviour were created; such a solution quickly becomes too complex, and this has motivated the work presented in this paper. It consists of an extension of HPTS, our behavioural model, with resources and priority levels. Contrary to some previous approaches, it is not necessary to specify exhaustively all behaviours that are mutually exclusive; this is done implicitly by attaching resources to nodes and a priority function to each state machine, and by using a scheduler.


Keywords: State Machine, Autonomous Agent, Priority Level, Mutual Exclusion, Priority Function
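As a rough illustration of the mechanism described in the abstract, the following Python sketch (hypothetical names, not the authors' HPTS implementation) shows behaviours declaring the resources they need and a priority, with a scheduler that lets behaviours with disjoint resource sets run concurrently and suspends conflicting lower-priority ones:

```python
# A minimal sketch, assuming hypothetical names (not the authors' HPTS code):
# each behaviour declares the body resources it needs and a priority; the
# scheduler grants resources in decreasing priority order, so behaviours with
# disjoint resources run concurrently instead of being mutually exclusive.
from dataclasses import dataclass

@dataclass
class Behaviour:
    name: str
    resources: frozenset   # body parts required by the behaviour's current node
    priority: float        # current value of the behaviour's priority function

def schedule(behaviours):
    """Select the behaviours allowed to run during this time step."""
    running, taken = [], set()
    for b in sorted(behaviours, key=lambda b: b.priority, reverse=True):
        if taken.isdisjoint(b.resources):   # no conflict with a higher-priority behaviour
            running.append(b)
            taken |= b.resources
        # otherwise the behaviour is suspended until its resources are released
    return running

if __name__ == "__main__":
    agenda = [
        Behaviour("read newspaper",  frozenset({"eyes", "left hand"}),   0.6),
        Behaviour("drink coffee",    frozenset({"right hand", "mouth"}), 0.8),
        Behaviour("smoke cigarette", frozenset({"right hand", "mouth"}), 0.4),
    ]
    for b in schedule(agenda):
        print(b.name)   # "drink coffee" then "read newspaper"; smoking must wait
```

In HPTS itself, resources are attached to the nodes of the state machines and the priority function of each state machine can vary over time, but the arbitration principle is the same: mutual exclusion is derived from declared resources rather than enumerated by hand.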





Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • F. Lamarche (1)
  • S. Donikian (1)
  1. IRISA, Campus de Beaulieu, Rennes, France
