Abstract
The goals and beliefs of an agent are not independent of each other. To be autonomous, an agent must control its beliefs as well as its tasks and goals. An agent’s beliefs about itself, other agents, and the environment are based on models it derives from perceived and communicated information. The degree of an agent’s belief autonomy is the degree to which it depends on others to build these belief models. We propose source trustworthiness, coverage, and cost as the factors an agent should use to decide which sources to rely on. Trustworthiness represents how reliable other agents are; coverage measures an information source’s contribution to the agent’s information needs; cost is defined by the timeliness of information delivery from the source. Since any given agent knows of only a limited number of information sources with which it can communicate, it must also rely on other agents to share their models of the sources they know about. This paper represents these factors, together with the degree to which an agent shares its knowledge about its neighbors, and proposes them as contributions to the agent’s decisions regarding its belief autonomy for a given goal.
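The three factors can be combined into a simple source-selection score. The sketch below is purely illustrative: the linear weighting, the threshold, and all names (`SourceModel`, `source_score`, `select_sources`) are our own assumptions, not the formulation used in the paper.

```python
from dataclasses import dataclass

@dataclass
class SourceModel:
    """An agent's model of one information source (all fields in [0, 1])."""
    trustworthiness: float  # how reliable the source has proven to be
    coverage: float         # fraction of the agent's information needs the source serves
    cost: float             # delay/overhead of obtaining information (higher = worse)

def source_score(s: SourceModel,
                 w_trust: float = 0.5,
                 w_cov: float = 0.3,
                 w_cost: float = 0.2) -> float:
    """Weighted utility of relying on a source; cost counts against it."""
    return w_trust * s.trustworthiness + w_cov * s.coverage - w_cost * s.cost

def select_sources(models: dict, threshold: float = 0.4) -> list:
    """Return the names of sources worth relying on, best first."""
    ranked = sorted(models.items(), key=lambda kv: source_score(kv[1]), reverse=True)
    return [name for name, m in ranked if source_score(m) >= threshold]

# Example: a trustworthy, broad, cheap source beats an unreliable, slow one.
models = {
    "neighbor_a": SourceModel(trustworthiness=0.9, coverage=0.8, cost=0.1),
    "neighbor_b": SourceModel(trustworthiness=0.2, coverage=0.5, cost=0.9),
}
print(select_sources(models))  # → ['neighbor_a']
```

In an open system the agent would additionally merge neighbor-provided models of unknown sources into `models`, discounted by the trustworthiness of the neighbor that reported them.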
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Barber, K.S., Park, J. (2004). Agent Belief Autonomy in Open Multi-agent Systems. In: Nickles, M., Rovatsos, M., Weiss, G. (eds) Agents and Computational Autonomy. AUTONOMY 2003. Lecture Notes in Computer Science(), vol 2969. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-25928-2_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22477-8
Online ISBN: 978-3-540-25928-2