Reasoning with the Outcomes of Plan Execution in Intentional Agents
Intentional agents must be aware of their successes and failures to accurately assess their progress towards their intended goals. However, our analysis of intentional agent systems indicates that existing architectures are inadequate in this regard. Specifically, existing systems provide few, if any, mechanisms for detecting the failure of behavior. This inability to detect failure means that agents retain an unrealistically optimistic view of the success of their behaviors and of the state of their environment. In this paper we extend the solution proposed in [1] in three ways. Firstly, we extend the formulation to handle cases in which an agent has conflicting evidence regarding the causation of the effects of a plan or action; we do this by identifying a number of policies that an agent may use to resolve these conflicts. Secondly, we provide mechanisms by which the agent can invoke its failure-handling routines to recover when failure is detected. Lastly, we lift the requirement that all effects be realized simultaneously and allow for progressive satisfaction of effects. Like the original solution, these extensions can be applied to existing BDI systems.
Keywords: Agents, AI architectures, Reasoning about Actions and Change
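The three extensions described in the abstract can be illustrated with a small sketch. The code below is not the paper's formalism: the `Plan` structure, the `optimistic`/`pessimistic` policy names, and the belief-base representation are all illustrative assumptions. It shows a plan executor that checks each declared effect against the agent's beliefs, allows effects to be satisfied progressively rather than all at once, and applies a simple policy when the evidence for an effect conflicts.

```python
# Illustrative sketch only (not the paper's formalism): a BDI-style plan
# executor that monitors declared plan effects, lets effects be satisfied
# progressively, and resolves conflicting evidence via a named policy.
from dataclasses import dataclass, field

@dataclass
class Plan:
    name: str
    effects: list                       # propositions the plan should bring about
    satisfied: set = field(default_factory=set)

def observe(beliefs, effect):
    """Return (positive, negative) evidence for an effect from the belief base."""
    return (effect in beliefs.get("holds", set()),
            effect in beliefs.get("not_holds", set()))

def effect_satisfied(beliefs, effect, policy="pessimistic"):
    pos, neg = observe(beliefs, effect)
    if pos and neg:                     # conflicting evidence about causation
        return policy == "optimistic"   # hypothetical conflict-resolution policy
    return pos

def step(plan, beliefs, policy="pessimistic"):
    """Progressively mark effects as satisfied; report the plan's status."""
    for e in plan.effects:
        if e not in plan.satisfied and effect_satisfied(beliefs, e, policy):
            plan.satisfied.add(e)
    return "succeeded" if set(plan.effects) <= plan.satisfied else "pending"

beliefs = {"holds": {"door_open"}, "not_holds": {"light_on"}}
p = Plan("enter_room", ["door_open", "light_on"])
print(step(p, beliefs))                        # → pending (only door_open so far)
beliefs["holds"].add("light_on")               # later, conflicting observation
print(step(p, beliefs))                        # → pending (pessimistic policy)
print(step(p, beliefs, policy="optimistic"))   # → succeeded
```

Under the pessimistic policy an effect with conflicting evidence is never counted as satisfied, which is the point at which a real agent would trigger its failure-handling routines to recover; the optimistic policy instead trusts the positive observation.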
- 1. Cleaver, T., Sattar, A., Wang, K.: Reasoning about success and failure in intentional agents. In: Proceedings of the 2005 Pacific Rim International Workshop on Multi-Agents (2005)
- 2. Bordini, R., Bazzan, A., Jannone, R., Basso, D., Vicari, R., Lesser, V.: AgentSpeak(XL): Efficient Intention Selection in BDI Agents via Decision-Theoretic Task Scheduling. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pp. 1294–1302 (2002)
- 4. Georgeff, M., Ingrand, F.: Decision-making in an Embedded Reasoning System. In: Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pp. 972–978 (1989)
- 5. Ingrand, F., Chatila, R., Alami, R., Robert, F.: PRS: A High Level Supervision and Control Language for Autonomous Mobile Robots. In: Proceedings of the IEEE International Conference on Robotics and Automation (1996)
- 6. Lee, J., Huber, M., Kenny, P., Durfee, E.: UM-PRS: An Implementation of the Procedural Reasoning System for Multi-robot Applications. In: Proceedings of the AIAA/NASA Conference on Intelligent Robots in Field, Factory, Service, and Space, pp. 842–849 (1994)
- 7. Huber, M.: JAM: A BDI-Theoretic Mobile Agent Architecture. In: Proceedings of the Third International Conference on Autonomous Agents, pp. 236–243 (1999)
- 8. d’Inverno, M., Kinny, D., Luck, M., Wooldridge, M.: A Formal Specification of dMARS. In: Agent Theories, Architectures and Languages, pp. 115–176 (1997)
- 10. Busetta, P., Rönnquist, R., Hodgson, A., Lucas, A.: Jack Intelligent Agents - Components for Intelligent Agents in Java. AgentLink News Letter (1999)
- 11. Howden, N., Rönnquist, R., Hodgson, A., Lucas, A.: Jack Intelligent Agents - Summary of an Agent Infrastructure. In: Proceedings of the Fifth International Conference on Autonomous Agents (2001)