A formal framework for reasoning about opportunistic propensity in multiagent systems
Abstract
Opportunism is an intentional behavior that takes advantage of knowledge asymmetry and results in promoting an agent’s own value while demoting others’ values. It is important to eliminate such selfish behavior in multiagent systems, as it has undesirable results for the participating agents. In order for monitoring and eliminating mechanisms to be put in the right place, we need to know in which contexts agents are likely to perform opportunistic behavior. In this paper, we develop a formal framework to reason about agents’ opportunistic propensity, that is, the potential for an agent to perform opportunistic behavior. Agents in the system are assumed to have their own value systems and knowledge. With value systems, we define agents’ preferences over states. Based on their value systems and incomplete knowledge about the state, they choose one of their rational alternatives to perform, which might be opportunistic behavior. We then characterize the situations in which agents are likely to perform opportunistic behavior and the contexts in which opportunism cannot occur, and prove the computational complexity of predicting opportunism.
Keywords
Opportunism · Propensity · Logic · Reasoning · Decision theory
1 Introduction
In the electronic market, buyers are wary of receiving products of bad quality, because only the sellers on the other side of the market know whether the products are good enough before the buyers receive them. Sellers can exploit this knowledge asymmetry between seller and buyers to achieve their own gain at the expense of the buyers. Such behavior, intentionally performed by the sellers, was named opportunistic behavior (or opportunism) by economist Williamson [1]: self-interest seeking with guile. It typically takes the form of cheating, lying, betrayal, etc. Free-riding and adverse selection are two classical and frequently cited cases of opportunistic behavior [2]. A large body of research in social science has investigated opportunistic behavior [3, 4, 5], providing a descriptive theoretical foundation for the study of opportunism. In this paper, by using the notion of values as abstract standards for agents’ preferences over states, we further interpret the original definition as a selfish behavior that takes advantage of relevant knowledge asymmetry and results in promoting one’s own value and demoting others’ value [6].
It is interesting and important to study opportunism in the context of multiagent systems. Social concepts are often used to construct artificial societies, and interacting agents are designed to behave in a human-like way characterized by self-interest: egoistic agents care about their own benefits more than other agents’. Moreover, knowledge is normally distributed among the participating agents in the system. This is a context that creates both the ability and the incentive for agents to behave opportunistically. We want to eliminate such selfish behavior, as it has undesirable results for other agents in the system. Before we design any mechanism for eliminating opportunism, it is important that we can estimate whether it will happen in the future. Evidently, not every agent is likely to be opportunistic in every context. In economics, ever since the theory of opportunism was proposed by Williamson, it has drawn considerable criticism for over-assuming that all economic players are opportunistic. Chen et al. [7] highlight the challenge of predicting opportunism ex ante and introduce a cultural perspective to better specify the assumptions of opportunism. In multiagent systems, we likewise need to investigate opportunistic propensity so that the appropriate amount of monitoring [8] and eliminating mechanisms [9] can be put in place.
According to decision theory, an agent’s decision on what to do depends on the agent’s ability and preferences. Applied to opportunistic behavior, this means an agent will behave opportunistically when the precondition is satisfied and it is in his interest to do so, i.e. when he can do it and he prefers doing it. These are the two issues that we consider in this paper, without discussing any normative issues. Based on this assumption, we develop a framework that is a transition system extended with value systems. Our framework can be used to predict whether an agent is likely to perform opportunistic behavior and to specify under what circumstances an agent will perform it. A monitoring mechanism for opportunism benefits from this result, as monitoring devices may be set up in the contexts where opportunism can potentially occur. We can also design mechanisms for eliminating opportunism based on the understanding of how agents decide to behave opportunistically.
In this paper, we introduce a logic-based formal framework to reason about agents’ opportunistic propensity, that is, the potential for an agent to perform opportunistic behavior. More precisely, agents in the system are assumed to have their own value systems and knowledge. We specify an agent’s value system as a strict total order over a set of values, which are encoded within our logical language. Using value systems, we define agents’ preferences over states. Moreover, agents have partial knowledge about the true state in which they reside. Based on their value systems and incomplete knowledge, they choose one of their rational alternatives, which might be opportunistic. We thus provide a natural bridge between logical reasoning and decision-making, which is used for reasoning about opportunistic propensity. We then characterize the situations in which agents are likely to perform opportunistic behavior and the contexts in which opportunism cannot occur, and prove the computational complexity of predicting opportunism. It is a basic logical framework for reasoning about opportunistic propensity in the sense that we consider one-time decision-making of agents without involving any social mechanisms such as trust and reputation. Moreover, even though as system designers we are not aware of agents’ value systems, we can be cautious about the occurrence of opportunism given the possible value systems of participating agents.
The paper is organized as follows. Section 2 introduces the logical framework, which is a transition system extended with agents’ epistemic relations. Section 3 introduces the basis of agents’ decision-making: their value systems and limited knowledge about the system. Section 4 defines opportunism using our framework. Section 5 characterizes the situations in which agents are likely to perform opportunistic behavior and the contexts in which opportunism cannot occur. We discuss our framework in Sect. 6 and conclude the paper in Sect. 8.
2 Framework
We use Kripke structures as our basic semantic models of multiagent systems. A Kripke structure is a directed graph whose nodes represent the possible states of the system and whose edges represent accessibility relations. Among those edges, the equivalence relation \({\mathcal {K}}(\cdot ) \subseteq S \times S\) represents agents’ epistemic relations, while the relation \({\mathcal {R}} \subseteq S \times Act \times S\) captures the possible transitions of the system that are caused by agents’ actions. We use \(s_0\) to denote the initial state of the system. It is important to note that, because in this paper we only consider opportunistic behavior as an action performed by a single agent, we do not model concurrent actions, so every possible transition of the system is caused by a single action rather than a joint action. We use a set \(\varPhi =\{p,q,\ldots \}\) of atomic propositional variables to express the properties of states S. A valuation function \(\pi \) maps each state to the set of properties that hold in that state. Formally,
Definition 1

(Transition system) A transition system is a tuple \({\mathcal {T}} = (Agt, S, Act, \pi , {\mathcal {K}}, {\mathcal {R}}, s_0)\), where:

\(Agt=\{1,\ldots ,n\}\) is a finite set of agents;

S is a finite set of states;

Act is a finite set of actions;

\(\pi : S \rightarrow {\mathcal {P}}(\varPhi )\) is a valuation function mapping a state to a set of propositions that are considered to hold in that state;

\({\mathcal {K}}: Agt \rightarrow 2^{S \times S}\) is a function mapping an agent in Agt to a reflexive, transitive and symmetric binary relation between states; that is, given an agent i, for all \(s \in S\) we have \(s {\mathcal {K}}(i) s\); for all \(s,t,u \in S\)\(s {\mathcal {K}}(i) t\) and \(t {\mathcal {K}}(i) u\) imply that \(s {\mathcal {K}}(i) u\); and for all \(s,t \in S\)\(s {\mathcal {K}}(i) t\) implies \(t {\mathcal {K}}(i) s\); \(s{\mathcal {K}}(i) s^\prime \) is interpreted as state \(s^\prime \) is epistemically accessible from state s for agent i. For convenience, we use \({\mathcal {K}}(i,s) = \{ s^\prime \mid s {\mathcal {K}}(i) s^\prime \}\) to denote the set of epistemically accessible states from state s;

\({\mathcal {R}} \subseteq S \times Act \times S\) is a relation between states with actions, which we refer to as the transition relation labeled with an action; we require that for all \(s \in S\) there exists an action \(a \in Act\) and one state \(s^\prime \in S\) such that \((s,a,s^\prime ) \in {\mathcal {R}}\), and we ensure this by including a stuttering action sta that does not change the state, that is, \((s,sta,s) \in {\mathcal {R}}\); we restrict actions to be deterministic, that is, if \((s,a,s^\prime ) \in {\mathcal {R}}\) and \((s,a,s^{\prime \prime }) \in {\mathcal {R}}\), then \(s^\prime =s^{\prime \prime }\); since actions are deterministic, sometimes we denote state \(s^\prime \) as \(s\langle a \rangle \) for which it holds that \((s,a,s\langle a \rangle ) \in {\mathcal {R}}\). For convenience, we use \(Ac(s) = \{ a \mid \exists s^\prime \in S : (s,a,s^\prime ) \in {\mathcal {R}} \}\) to denote the available actions in state s.

\(s_0 \in S\) denotes the initial state.
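The components of Definition 1 can be sketched as plain data. The following Python fragment is illustrative only: the state names, atoms and transitions are invented for this sketch, and the epistemic relation is stored as equivalence classes, which is possible because \({\mathcal {K}}(i)\) is reflexive, transitive and symmetric.

```python
# Illustrative encoding of Definition 1 (all names are invented).

STATES = {"s", "s1", "t", "t1"}          # S
VALUATION = {                            # pi: state -> atoms that hold there
    "s": {"u"}, "s1": {"u"},
    "t": {"w"}, "t1": {"w"},
}
EPISTEMIC = {                            # K(i) as equivalence classes
    "i": [{"s", "s1"}, {"t", "t1"}],
}
TRANS = {                                # R: (state, action) -> unique successor
    ("s", "a1"): "t", ("s1", "a1"): "t1",
    ("s", "sta"): "s", ("s1", "sta"): "s1",
    ("t", "sta"): "t", ("t1", "sta"): "t1",
}

def K(i, s):
    """Epistemically accessible states K(i, s)."""
    for cell in EPISTEMIC[i]:
        if s in cell:
            return cell
    return {s}                           # reflexivity fallback

def Ac(s):
    """Actions physically available in state s."""
    return {a for (st, a) in TRANS if st == s}
```

Determinism of actions is reflected by keying `TRANS` on a (state, action) pair, so each pair has at most one successor.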

Formulas of the language \({\mathcal {L}}_{{\text {KA}}}\) are evaluated at a state s of a transition system \({\mathcal {T}}\) as follows (the clauses for the remaining Boolean connectives are standard):

\({\mathcal {T}},s \models p\) iff \(p \in \pi (s)\);

\({\mathcal {T}},s \models \lnot \varphi \) iff \({\mathcal {T}},s \not \models \varphi \);

\({\mathcal {T}},s \models \varphi _1 \vee \varphi _2\) iff \({\mathcal {T}},s \models \varphi _1\) or \({\mathcal {T}},s \models \varphi _2 \);

\({\mathcal {T}},s \models K_i \varphi \) iff for all t such that \(s{\mathcal {K}}(i) t\), \({\mathcal {T}},t \models \varphi \);

\({\mathcal {T}},s \models \langle a \rangle \varphi \) iff there exists \(s^\prime \) such that \((s,a,s^\prime ) \in {\mathcal {R}}\) and \({\mathcal {T}},s^\prime \models \varphi \);
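The truth clauses above translate directly into a recursive evaluator. The sketch below is an assumption-laden illustration: formulas are encoded as nested tuples, and the tiny model (`PI`, `K_REL`, `R`) is invented for demonstration rather than taken from the paper.

```python
# A minimal model checker for the clauses above (illustrative model).

PI = {"s": {"u"}, "s1": {"u"}, "t": {"w"}, "t1": {"w"}}
K_REL = {"i": {"s": {"s", "s1"}, "s1": {"s", "s1"},
               "t": {"t", "t1"}, "t1": {"t", "t1"}}}
R = {("s", "a1"): "t", ("s1", "a1"): "t1"}

def holds(s, f):
    tag = f[0]
    if tag == "atom":        # T,s |= p  iff  p in pi(s)
        return f[1] in PI[s]
    if tag == "not":         # T,s |= ~phi  iff  not T,s |= phi
        return not holds(s, f[1])
    if tag == "or":          # T,s |= phi1 v phi2
        return holds(s, f[1]) or holds(s, f[2])
    if tag == "K":           # for all t with s K(i) t: T,t |= phi
        return all(holds(t, f[2]) for t in K_REL[f[1]][s])
    if tag == "dia":         # exists s' with (s, a, s') in R and T,s' |= phi
        nxt = R.get((s, f[1]))
        return nxt is not None and holds(nxt, f[2])
    raise ValueError(f"unknown formula tag: {tag}")
```

For instance, `holds("s", ("K", "i", ("atom", "u")))` checks whether agent i knows u in state s by inspecting every epistemic alternative.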
Example 1
Consider the following example: Fig. 1 shows a Kripke structure \({\mathcal {T}}\) for agent i. In state s, agent i considers states s and \(s'\) as his epistemic alternatives. Formulas u, \(\lnot v\) and \(\lnot w\) hold in both s and \(s^\prime \), meaning that agent i knows u, \(\lnot v\) and \(\lnot w\) in state s. Performing action \(a_1\) in s and \(s^\prime \) leads to states \(s \langle a_1 \rangle \) and \(s^\prime \langle a_1 \rangle \) respectively, where formulas \(\lnot u\), \(\lnot v\) and w hold.
3 Value systems and rational alternatives
Agents in the system are assumed to have their own value systems and knowledge. Based on their value systems and incomplete knowledge about the system, agents form their rational alternatives for the action they are going to perform.
3.1 Value systems
Given several (possibly opportunistic) actions available to an agent, it is the agent’s decision whether to perform opportunistic behavior. Basic decision theory applied to intelligent agents relies on three things: agents know what actions they can carry out, the effects of each action, and their preferences over those effects [14]. In this paper, the effects of each action are expressed in our logical language, and we specify agents’ abilities and preferences in this section. It is worth noting that we only study a single action being opportunistic, so we apply basic decision theory for one-shot (one-time) decision problems, which concern situations where a decision is experienced only once.
One important feature of opportunism is that it promotes the agent’s own value but demotes others’ value. Agents’ value systems work as the basis of practical reasoning. A value can be seen as an abstract standard according to which agents define their preferences over states. For instance, if we have a value denoting equality, we prefer the states where equal sharing or equal rewarding holds. Because of the abstract nature of a value, we interpret a value more concretely as a state property, represented as an \({\mathcal {L}}_{{\text {KA}}}\) formula. The most basic value we can construct is simply a proposition p, which represents the value of achieving p. More complex values can also be expressed, such as formulas of the form \(\langle a \rangle \varphi \wedge \langle a' \rangle \lnot \varphi \), which represents the value that there is an option in the future to achieve either \(\varphi \) or \(\lnot \varphi \). Such a value corresponds to freedom of choice. A value can also take the form \(K\varphi \), meaning that it is valuable to achieve knowledge. In this paper, we denote values with v, and it is important to remember that v is a formula from the language \({\mathcal {L}}_{{\text {KA}}}\). However, not every formula from \({\mathcal {L}}_{{\text {KA}}}\) can be intuitively classified as a value.
We argue that agents can always compare any two values. When an agent has two different values of the same importance, we can combine them into one value. For example, the two values that my husband is healthy (p) and that my kids are healthy (q) can be expressed as my family members are healthy (\(p \wedge q\)). In this way, every element in the set of values is comparable to every other, and no two of them are logically equivalent for a given agent. On this basis, we define a value system as a strict total order over a set of values, representing their degree of importance, inspired by the preference lists in [15] and the goal structure in [16]. This definition makes it easier to specify agents’ preferences over any two different states, and it is also consistent with the way state preferences are specified in [6].
Definition 2
(Value system) A value system \(V = ({\text {Val}},\prec )\) is a tuple consisting of a finite set \({\text {Val}} = \{v, \ldots , v'\} \subseteq {\mathcal {L}}_{{\text {KA}}}\) of values together with a strict total ordering \(\prec \) over \({\text {Val}}\). When \(v \prec v'\), we say that value \(v'\) is more important than value v.
We also use natural-number indexing notation to refer to the values in a value system: if V gives rise to the ordering \(v \prec v' \prec \dots \), then \(V[0]=v\), \(V[1]=v'\), and so on. Since a value is represented as an \({\mathcal {L}}_{{\text {KA}}}\) formula and can be promoted or demoted by an action, value promotion and demotion along a state transition can be defined as follows:
Definition 3
An agent’s value v gets promoted along the state transition \((s,a,s^\prime )\) if and only if v does not hold in state s and holds in state \(s^\prime \); an agent’s value v gets demoted along the state transition \((s,a,s^\prime )\) if and only if v holds in state s and does not hold in state \(s^\prime \). Note that in principle an agent is not always aware that his value gets demoted or promoted: it might be the case that objectively agent i’s value gets promoted, i.e. \(s \models {\text {promoted}}(v,a)\), while he is not aware of it, i.e. \(s \models \lnot K_i {\text {promoted}}(v,a)\).
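Definition 3 can be checked mechanically once a valuation is fixed. In the sketch below, values are simplified to atomic propositions and the valuation `VAL` is invented for illustration; in the paper a value may be any \({\mathcal {L}}_{{\text {KA}}}\) formula, so the membership test `v in VAL[s]` would be replaced by full model checking.

```python
# Definition 3 as a sketch: v is promoted along (s, a, s') iff it fails
# in s and holds in s'; demoted dually. Valuation invented for the demo.

VAL = {"s": {"u"}, "s_a": {"w"}}   # s: u holds; after a: only w holds

def promoted(v, s, s_next):
    return v not in VAL[s] and v in VAL[s_next]

def demoted(v, s, s_next):
    return v in VAL[s] and v not in VAL[s_next]
```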
We now define agents’ preferences over two states in terms of values, which will be used for modeling the effect of opportunism. A value system is modeled as a strict total order over a set of values, and the truth values of some formulas, which correspond to values in the value system, change in a state transition. In this paper, agents specify their state preferences by the value they most care about (namely the one with the highest index in the order) among all the values that change in a state transition. In order to model this, we first define a function \({\mathrm{highest(i,s,s^{\prime })}}\) that maps an agent and two different states to the most preferred value, from the perspective of agent i, that changes when going from state s to \(s^{\prime }\). In other words, it returns the value that the agent most cares about among all the values that change in the state transition.
Definition 4
Note that function \({\mathrm{highest(i,s,s^{\prime })}}\) can return a value whose truth value is the same in both s and \(s^{\prime }\): when no value changes at all, \({\mathrm{highest(i,s,s^{\prime })}} = V_i[0]\), i.e. the function returns the agent’s least preferred value. When this happens, agent i is indifferent between s and \(s^{\prime }\). Moreover, it is not hard to see that \({\mathrm{highest(i,s,s^{\prime })}} = {\mathrm{highest(i,s^{\prime },s)}}\), meaning that the function is symmetric in its two state arguments.
With this function we can easily define agents’ preference over two states. We use a binary relation “\(\precsim \)” over states to represent agents’ preferences.
Definition 5
As standard, we also define \(s \sim _i s^{\prime }\) to mean \(s \precsim _i s^{\prime }\) and \(s^{\prime } \precsim _i s\), and \(s \prec _i s^{\prime }\) to mean \(s \precsim _i s^{\prime }\) and \(s \not \sim _i s^{\prime }\). The intuitive meaning of \(s \precsim _i s^{\prime }\) is that agent i weakly prefers state \(s^{\prime }\) to s if and only if the agent’s most preferred changed value does not get demoted (it either stays the same or gets promoted). In other words, agent i weakly prefers state \(s^{\prime }\) to s: if \({\mathrm{highest(i,s,s^{\prime })}}\) holds in state s, then it must also hold in state \(s^{\prime }\), and if \({\mathrm{highest(i,s,s^{\prime })}}\) does not hold in state s, then it does not matter whether it holds in state \(s^{\prime }\) or not. Agent i is indifferent between state s and state \(s^{\prime }\) if the truth value of \({\mathrm{highest(i,s,s^{\prime })}}\) is the same in both states. Furthermore, \(s \sim _i s^{\prime }\) means that states s and \(s^{\prime }\) are subjectively equivalent to agent i, not necessarily that they objectively refer to the same state. Thus, given an agent’s state preference, a set of states can be classified into different groups with an ordering between them. Clearly there is a correspondence between state preferences and the promotion or demotion of values, which we make formal with the following proposition.
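The function \({\mathrm{highest}}\) and the weak preference \(\precsim _i\) can be sketched as follows, reconstructed from the descriptions of Definitions 4 and 5 rather than from their formal statements. The value system and valuation mirror Example 1 (with values simplified to atoms); the fallback to `VALUE_SYSTEM[0]` encodes the convention that the function returns \(V_i[0]\) when nothing changes.

```python
# Sketch of highest(i, s, s') and the weak preference s <=_i s'.
# Value system V_i = (u < v < w); valuation invented to match Example 1.

VALUE_SYSTEM = ["u", "v", "w"]          # index = importance; w most important
VAL = {"s": {"u"}, "s_a1": {"w"}, "s_a2": {"u", "v"}}

def highest(s, s2):
    """Most important value whose truth differs between s and s2."""
    changed = [v for v in VALUE_SYSTEM
               if (v in VAL[s]) != (v in VAL[s2])]
    return changed[-1] if changed else VALUE_SYSTEM[0]

def weakly_prefers(s, s2):
    """s <=_i s2: the most cared-about changed value is not demoted."""
    v = highest(s, s2)
    return not (v in VAL[s] and v not in VAL[s2])
```

On this toy model, going from `"s"` to `"s_a1"` changes u and w, `highest` picks w, and w is promoted, so `"s_a1"` is strictly preferred to `"s"`, matching the preferences derived in the running example.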
Proposition 1
Proof
Firstly we prove the third one. We define \(s \sim _i s\langle a \rangle \) to mean \(s \precsim _i s\langle a \rangle \) and \(s\langle a \rangle \precsim _i s\). \(s \precsim _i s\langle a \rangle \) means that value \(v^*\) does not get demoted when going from s to \(s\langle a \rangle \), and \(s\langle a \rangle \precsim _i s\) means that value \(v^*\) does not get demoted when going from \(s\langle a \rangle \) to s. Hence, value \(v^*\) gets neither promoted nor demoted (stays the same) by action a. Secondly we prove the first one. We define \(s \prec _i s\langle a \rangle \) to mean \(s \precsim _i s\langle a \rangle \) and \(s \not \sim _i s\langle a \rangle \). \(s \precsim _i s\langle a \rangle \) means that value \(v^*\) does not get demoted when going from s to \(s\langle a \rangle \), and \(s \not \sim _i s\langle a \rangle \) means that value \(v^*\) gets either promoted or demoted by action a. Hence, value \(v^*\) gets promoted by action a. The second one can be proved in a similar way. \(\square \)
Additionally, apart from the fact that \(s \prec _i s\langle a \rangle \) implies that the value that agent i most cares about gets promoted, we also have that no other value which is more preferred gets demoted or promoted. We have the result that the \(\precsim _i\) relation obeys the standard properties we expect from a preference relation.
Proposition 2

Reflexive: \(\forall s \in S: s \precsim _i s\);

Transitive: \(\forall s, s^{\prime }, s^{{\prime }{\prime }} \in S:\) if \(s \precsim _i s^{\prime }\) and \(s^{\prime } \precsim _i s^{\prime \prime }\), then \(s \precsim _i s^{\prime \prime }\);

Total: \(\forall s, s^{\prime } \in S\): \(s \precsim _i s^{\prime }\) or \(s^\prime \precsim _i s\).
Proof
The proof follows Definition 5 directly. In order to prove \(\precsim _i\) is reflexive, we have to prove that for any arbitrary state s we have \(s \precsim _i s\). From Definitions 4 and 5 we know \({\mathrm{highest(i,s,s^\prime )}} = V_i[0]\) when \(s=s'\), and for any arbitrary state s we always have \({\mathcal {M}},s \models V_i[0]\) implies \({\mathcal {M}},s \models V_i[0]\). Therefore, \(s \precsim _i s\) and we can conclude that \(\precsim _i\) is reflexive.

If \(u^* \prec _i w^*\), we know that \(w^*\) is the highest value that changes and gets promoted when going from \(s^\prime \) to \(s^{\prime \prime }\), but stays the same between s and \(s^\prime \). Hence, we can conclude that \({\mathcal {M}},s \models \lnot w^*\) and \({\mathcal {M}},s^{\prime \prime } \models w^*\), and that \(w^* = v^*\) (i.e., \(w^*\) is the highest value that changes between s and \(s^{\prime \prime }\)). Hence we have \({\mathcal {M}},s \models v^*\) implies \({\mathcal {M}},s^{\prime \prime } \models v^*\).

If \(u^* \succ _i w^*\), we know that \(u^*\) is the highest value that changes and gets promoted when going from s to \(s^\prime \), but stays the same between \(s^\prime \) and \(s^{\prime \prime }\). Hence, we can conclude that \({\mathcal {M}},s \models \lnot u^*\) and \({\mathcal {M}},s^{\prime \prime } \models u^*\), and that \(u^* = v^*\) (i.e. \(v^*\) is the highest value that changes between s and \(s^{\prime \prime }\)). Hence, we have \({\mathcal {M}},s \models v^*\) implies \({\mathcal {M}},s^{\prime \prime } \models v^*\).
Our approach makes explicit the value that agent i most cares about when comparing two different states, namely \({\mathrm{highest(i,s,s^\prime )}}\), the value with the highest index in the linear ordering whose truth value differs between s and \(s^\prime \). Certainly, there are other ways of deriving these preferences from a value system. Instead of only considering the most cared-about value change in the state transition, it is also possible to take all value changes into account. For example, we can define a function that tells whether and to what extent a state transition promotes or demotes an agent’s overall value by attaching weights to values, where the weights can be the indexes of the values in the value system; summing the weights over the state transition then tells whether and to what extent the transition promotes or demotes the agent’s overall value. With this approach, an agent considers all the values that are either promoted or demoted in the state transition, and the higher the index of a value, the more the agent cares about it. For opportunism, what we want to stress is that opportunistic agents ignore (rather than merely discount) other agents’ interests, which have a lower index in the agent’s value system. In order to align with this aspect, we use the most-preferred-value approach in this paper.
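The weighted-sum alternative described above can be sketched as follows; the value system and valuation are illustrative, and using the index as the weight is just one possible weighting scheme.

```python
# Weighted-sum comparison: each value carries its index as weight; a
# transition scores +weight for a promoted value, -weight for a demoted
# one. Value system and valuation are invented for the demo.

VALUE_SYSTEM = ["u", "v", "w"]               # index = weight
VAL = {"s": {"u"}, "s_a1": {"w"}}

def overall_change(s, s2):
    """Net promotion (+) or demotion (-) of overall value from s to s2."""
    score = 0
    for idx, v in enumerate(VALUE_SYSTEM):
        before, after = v in VAL[s], v in VAL[s2]
        if not before and after:     # promoted
            score += idx
        elif before and not after:   # demoted
            score -= idx
    return score
```

Going from `"s"` to `"s_a1"` demotes u (weight 0) and promotes w (weight 2), so the net score is positive, agreeing with the most-preferred-value verdict on the same transition.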
3.2 Rational alternatives
Since we have defined values and value systems as agents’ basis for decision-making, we can apply decision theory to reason about agents’ decision-making. Given a state in the system, there are several actions available to an agent, and he has to choose one in order to go to the next state. We view this as one-time decision-making. While agents might make decisions based on the long-term benefits of multiple actions instead of the short-term benefits of a single action, we only consider one-time decision-making here in order to simplify our predictive model. In decision theory, if agents only act for one step, a rational agent should choose an action with the highest (expected) utility without reference to the utility of other agents [14]. Within our framework, this means that a rational agent will always choose a rational alternative based on his value system. We introduce the notion of rational alternatives below.
Before choosing an action to perform, an agent must consider which actions are available to him. We have already seen that, for a given state s, the set of available actions is Ac(s). However, since an agent only has partial knowledge about the state, we argue that the actions an agent knows to be available are only part of the actions that are physically available to him in a state. For example, an agent can call a person if he knows the person’s phone number; without this knowledge, he is not able to do so, even though he is holding a phone. Recall that the set of states that agent i considers as possibly being the actual state in state s is \({\mathcal {K}}(i,s)\). Given an agent’s partial knowledge about a state as a precondition, the actions he knows he can perform in that state are given by the intersection of the sets of actions physically available in the states of his knowledge set.
Definition 6
Because a stuttering action sta is always included in Ac(s) for any state s, we have \(sta \in Ac(i,s)\) for any agent i. When only sta is in Ac(i, s), we say that the agent cannot do anything because of his limited knowledge. Obviously an agent’s subjectively available actions are always part of his physically available actions (\(Ac(i,s) \subseteq Ac(s)\)). By the rationality assumption, an agent will choose an action based on his partial knowledge of the current state and of the next state. Given a state s and an action a, an agent considers the next possible states to be the set \({\mathcal {K}}(i,s \langle a \rangle )\). For another action \(a^\prime \), the set of possible states is \({\mathcal {K}}(i,s \langle a^\prime \rangle )\). The question now becomes: how do we compare these two possible sets of states? Clearly, when we have \({\mathcal {K}}(i,s \langle a \rangle ) \prec _i {\mathcal {K}}(i,s \langle a^\prime \rangle )\), meaning that all alternatives of performing action \(a^\prime \) are more desirable than all alternatives of choosing action a, it is always better to choose action \(a^\prime \). However, in some cases some alternatives of action a may be better than some alternatives of action \(a^\prime \) and vice versa. In this case, an agent cannot decisively conclude which of the actions is optimal, which implies that the preferences over actions (namely, over sets of states) are not total. This leads us to the following definition:
Definition 7
The set \(a^*_i(s)\) consists of all the actions for agent i in state s which are available to him and are not dominated by another action that is available to him. In other words, it contains all the actions which are rational alternatives for agent i. Since Ac(i, s) is always non-empty because of the stuttering action sta, and since there is always at least one action that is not dominated by another action, we conclude that \(a^*_i(s)\) is non-empty. We can see that the actions that are available to an agent depend not only on the physical state, but also on his knowledge about the state and the next state. The more he knows, the better he can judge what his rational alternatives are. In other words, an agent tries to make the best choice based on his value system and incomplete knowledge. The following proposition shows how an agent removes an action with our approach.
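Definitions 6 and 7 can be sketched as follows. The model below is invented for illustration, and `prefers` abstracts the strict state preference of Definition 5 as a toy numeric rank; dominance here means that every epistemic outcome of one action is strictly preferred to every outcome of the other, as in the set comparison \({\mathcal {K}}(i,s\langle a \rangle ) \prec _i {\mathcal {K}}(i,s \langle a^\prime \rangle )\).

```python
# Sketch: subjectively available actions (Definition 6) and the
# non-dominated rational alternatives a*_i(s) (Definition 7).
# All structures are invented; RANK stands in for state preferences.

K_SET = {"s": {"s", "s1"}}                    # epistemic alternatives in s
AC = {"s": {"a1", "a2", "sta"}, "s1": {"a1", "sta"}}
NEXT = {("s", "a1"): "t", ("s1", "a1"): "t1",
        ("s", "sta"): "s", ("s1", "sta"): "s1"}
RANK = {"s": 0, "s1": 0, "t": 2, "t1": 2}     # toy utilities

def prefers(t, t2):
    """t strictly less preferred than t2 (stand-in for Definition 5)."""
    return RANK[t] < RANK[t2]

def subjective_actions(s):
    """Ac(i, s): actions available in every epistemic alternative."""
    return set.intersection(*[AC[t] for t in K_SET[s]])

def dominates(s, a2, a):
    """a2 dominates a: every outcome of a2 beats every outcome of a."""
    outs_a = {NEXT[(t, a)] for t in K_SET[s]}
    outs_a2 = {NEXT[(t, a2)] for t in K_SET[s]}
    return all(prefers(x, y) for x in outs_a for y in outs_a2)

def rational_alternatives(s):
    """a*_i(s): subjectively available actions not dominated by another."""
    acts = subjective_actions(s)
    return {a for a in acts
            if not any(dominates(s, a2, a) for a2 in acts if a2 != a)}
```

Note that `a2` drops out of the subjectively available actions because it is missing in the epistemic alternative `"s1"`, and `sta` is then removed as dominated, mirroring the running example.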
Proposition 3
Proof
Agents remove all the options (actions) that are always bad to do: there is no possibility of being better off by choosing a dominated action. \(\square \)

The following proposition connects Definition 7 with the stuttering action and state preferences.
Proposition 4
Proof
We prove it by contradiction. Statement \(\lnot (\forall a \in a^*(s): s \prec _i s\langle a \rangle )\) is equivalent to statement \(\exists a \in a^*(s): s \succsim _i s\langle a \rangle \). We consider the two cases \(\exists a \in a^*(s): s \succ _i s\langle a \rangle \) and \(\exists a \in a^*(s): s \sim _i s\langle a \rangle \). If there exists an action \(a \in a^*(s)\) such that agent i’s value gets demoted by performing it (\(\exists a \in a^*(s): s \succ _i s\langle a \rangle \)), it is dominated by the stuttering action sta. Since sta is not in \(a^*(s)\), action a is not in \(a^*(s)\) either. If there exists an action \(a \in a^*(s)\) that leaves agent i’s values unchanged (\(\exists a \in a^*(s): s \sim _i s\langle a \rangle \)), then sta would also be in \(a^*(s)\), because all the actions in agent i’s rational alternatives are equivalent to agent i and sta has the same effect as action a. Contradiction! \(\square \)
If the stuttering action sta is not in the set of rational alternatives for agent i, meaning that it is dominated by some action (not necessarily in the set of rational alternatives), agent i can always promote his value by performing any action in his rational alternatives. In real life, it is common to use this approach for practical reasoning given limited knowledge about the world: if an agent only knows there is a bag of money in toilet A or toilet B, the agent cannot decide which toilet he should go to for the money, so going to toilet A and going to toilet B are equivalent to him.
Our approach to comparing the two sets of states resulting from two different actions assumes that an agent knows what he knows and what he does not know, i.e. the positive and negative introspection properties of agents’ epistemic relations. Certainly, there are multiple ways of making this comparison. For instance, instead of removing all the options that are always bad to do, we can also compare actions merely through our limited knowledge of them. Given a state \(s^\prime \) from agent i’s knowledge set \({\mathcal {K}}(i,s)\), it results in \(s^\prime \langle a \rangle \) and \(s^\prime \langle a^\prime \rangle \) by action a and action \(a^\prime \) respectively. Action a is dominated by action \(a^\prime \) if and only if for all states \(s^\prime \) from \({\mathcal {K}}(i,s)\) we have \(s^\prime \langle a \rangle \prec _i s^\prime \langle a^\prime \rangle \). In this pairwise-comparison approach, agent i compares two states resulting from the same state, which means that he only takes into account what he knows and ignores what he does not know when removing dominated actions. In this paper, we instead remove the actions by which agents cannot possibly be better off, because this has natural ties to game theory in the context of (non-)dominated strategies [17]. We illustrate the above definitions and our approach through the following example.
Example 1
(continued) We extend Example 1 as follows: Fig. 2 shows a transition system \({\mathcal {M}}\) for agent i. States s and \(s'\) are agent i’s epistemic alternatives, that is, \({\mathcal {K}}(i,s)=\{s,s^\prime \}\). Now consider the actions that are physically and subjectively available to agent i: \(Ac(s)=\{a_1,a_2,a_3,sta\}\) and \(Ac(s')=\{a_1,a_2,sta\}\). Because \(Ac(i,s)=Ac(s) \cap Ac(s')\), agent i knows that only sta, \(a_1\) and \(a_2\) are available to him in state s.
Next we consider agent i’s rational alternatives in state s. Given agent i’s value system \(V_i=(u \prec v \prec w)\) and the following valuation: u, \(\lnot v\) and \(\lnot w\) hold in \({\mathcal {K}}(i,s)\); \(\lnot u\), \(\lnot v\) and w hold in \({\mathcal {K}}(i,s\langle a_1 \rangle )\); and u, v and \(\lnot w\) hold in \({\mathcal {K}}(i,s\langle a_2 \rangle )\), we have the following state preferences: \({\mathcal {K}}(i,s) \prec {\mathcal {K}}(i,s\langle a_1 \rangle )\), \({\mathcal {K}}(i,s) \prec {\mathcal {K}}(i,s\langle a_2 \rangle )\) and \({\mathcal {K}}(i,s\langle a_2 \rangle ) \prec {\mathcal {K}}(i,s\langle a_1 \rangle )\), meaning that action \(a_2\) and the stuttering action sta are dominated by action \(a_1\). Thus, we have \(a^*_i(s)=\{a_1\}\).
4 Defining opportunism
Before reasoning about opportunistic propensity, we must first formally define opportunism. Opportunism is a social behavior that takes advantage of relevant knowledge asymmetry and results in promoting one's own value and demoting others' value [6]. That is, it is performed under the precondition of relevant knowledge asymmetry and has the effect of promoting the agent's own value while demoting others' value. Firstly, knowledge asymmetry is defined as follows.
Definition 8
Knowledge asymmetry holds in a state where agent i knows \(\phi \), agent j does not know \(\phi \), and agent i knows this as well. The roles of the two agents can of course be reversed; we limit the definition to one case and omit the symmetric one for simplicity. Recall that value systems are common knowledge among all the agents in the system, whereas agents have asymmetric knowledge about the current state, which gives rise to the possibility of opportunistic behavior. We can now define opportunism as follows:
Definition 9
This definition specifies that if the precondition KnowAsym is satisfied in \({\mathcal {M}},s\), then the performance of action a is opportunistic behavior. The asymmetric knowledge that agent i has is that the transition by action a will promote value \(v^*\) but demote value \(w^*\), where \(v^*\) and \(w^*\) are the values that agent i and agent j most care about along the transition, respectively; agent j is partially or completely unaware of this. Compared to the definition of opportunism in [6], Definition 9 models the precondition of performing opportunistic behavior in an explicit way, while value opposition is derived as a property in Proposition 5. As is stressed in [6], opportunistic behavior is performed by intent rather than by accident. In this paper, instead of explicitly modeling intention with a modality as we did in [6], we derive intention from agents' rationality: agents always intentionally promote their own values. We acknowledge that this is just one logical way of defining opportunism; the reader can refer to [6] for alternatives concerning multiple actions and norms.
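The ingredients of Definition 9 can be sketched as follows. The encoding is an assumption made for illustration: a transition's effect is recorded as a change set mapping each flipped value to "promoted" or "demoted", value systems are ascending lists, and knowledge asymmetry about the effect is tested on hypothetical knowledge sets.

```python
def highest(value_system, changes):
    """value_system is ascending; return the most preferred value whose truth changes."""
    return next(v for v in reversed(value_system) if v in changes)

# Toy transition s --a--> s<a> over values u, v, w.
Vi, Vj = ["u", "v", "w"], ["w", "v", "u"]    # ascending value systems
changes = {"w": "promoted", "u": "demoted"}  # effect of action a

v_star, w_star = highest(Vi, changes), highest(Vj, changes)

# Knowledge asymmetry about the effect: all of i's alternatives validate it, some of j's don't.
effect_holds = {"s"}                # states where promoted(v*,a) ∧ demoted(w*,a) holds
K = {"i": {"s"}, "j": {"s", "t"}}   # knowledge sets at s

is_opportunistic = (changes.get(v_star) == "promoted"
                    and changes.get(w_star) == "demoted"
                    and K["i"] <= effect_holds          # i knows the effect
                    and not (K["j"] <= effect_holds))   # j does not
print(v_star, w_star, is_opportunistic)  # w u True
```

Here i's top value w is promoted while j's top value u is demoted, and only i knows it, so the check succeeds.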
Proposition 5
Proof
As mentioned before, in principle an agent is not always aware that his value gets promoted or demoted. We can objectively say that agent i's most preferred value gets promoted and agent j's most preferred value gets demoted by opportunistic behavior a, yet agent j is not aware of it even after opportunistic behavior a is performed, due to the no-learning restriction on agents' epistemic relations: agent j gains no extra knowledge about the effect of opportunistic behavior a after agent i performs it, so there is still knowledge asymmetry between agent i and agent j in state \(s \langle a \rangle \). That is, if \({\mathcal {M}},s \models K_j w^* \wedge \lnot K_j \langle a \rangle \lnot w^*\) for \({\mathcal {M}},s \models \lnot K_j {\text {demoted}}(w^*,a)\), then \({\mathcal {M}},s \models \langle a \rangle \lnot K_j \lnot w^*\).
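Definition 8 and the no-learning observation above can be sketched in an S5-style toy model. The Kripke structure below is hypothetical: knowledge is evaluated as truth in every epistemic alternative, and agent j's confusion with a state u persists before and after the action, so the asymmetry survives the transition.

```python
def knows(K, agent, s, truth_set):
    """K_agent phi holds at s iff phi is true in every epistemic alternative."""
    return K[agent][s] <= truth_set

def know_asym(K, i, j, s, truth_set):
    """i knows phi, j does not, and i knows that j does not (Definition 8)."""
    return (knows(K, i, s, truth_set)
            and not knows(K, j, s, truth_set)
            and all(not knows(K, j, t, truth_set) for t in K[i][s]))

# phi = "the effect of a"; j confuses the actual state with u, before and after a.
K = {"i": {"s": {"s"}, "s<a>": {"s<a>"}},
     "j": {"s": {"s", "u"}, "s<a>": {"s<a>", "u"}, "u": {"u"}}}
phi = {"s", "s<a>"}  # states where phi is true (it fails in u)

print(know_asym(K, "i", "j", "s", phi), know_asym(K, "i", "j", "s<a>", phi))  # True True
```

Because j's knowledge set after the action still contains u, where phi fails, j remains ignorant of the effect in \(s \langle a \rangle \), exactly as the no-learning restriction demands.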
Proposition 6
Proof
We prove this by contradiction. Denote \(v^*={\mathrm{highest(i,s,s\langle a \rangle )}}\) and \(w^*={\mathrm{highest(j,s,s\langle a \rangle )}}\), i.e., \(v^*\) and \(w^*\) are the property changes that agent i and agent j most care about in the state transition. If \(V_i = V_j\), then \(v^* = w^*\). However, because \({\mathcal {M}},s \models K_i({\text {promoted}}(v^*,a) \wedge {\text {demoted}}(w^*,a))\), and thus \({\mathcal {M}},s \models K_i(\lnot v^* \wedge w^*)\), and because knowledge is veridical, we have \({\mathcal {M}},s \models \lnot v^* \wedge w^*\). But since \(v^* = w^*\), we have \({\mathcal {M}},s \models \lnot v^* \wedge v^*\). Contradiction! \(\square \)
From this proposition we can see that agent i and agent j care about different things based on their value systems about the transition.
Proposition 7
Proof
We prove this by contradiction. A knowledge set is the set of states that an agent considers possible in a given actual state: for all \(t \in {\mathcal {K}}(i,s)\), agent i considers state t a possible state where he resides, and likewise for \({\mathcal {K}}(j,s)\) and agent j. Suppose \({\mathcal {K}}(j,s) \not \subseteq {\mathcal {K}}(i,s)\) is false; then \({\mathcal {K}}(j,s) \subseteq {\mathcal {K}}(i,s)\) holds, which means that agent j knows more than, or exactly as much as, agent i. However, Definition 9 states that agent i knows more about the transition by action a than agent j. So \({\mathcal {K}}(j,s) \subseteq {\mathcal {K}}(i,s)\) is false, meaning that \({\mathcal {K}}(j,s) \not \subseteq {\mathcal {K}}(i,s)\) holds. Further, because from \({\mathcal {M}},s \models {\text {Opportunism}}(i,j,a)\) we have \({\mathcal {M}},s \models K_i(\langle a \rangle v^* \wedge \langle a \rangle \lnot w^*)\), by the semantics of \(\langle a \rangle v^*\) and \(\langle a \rangle \lnot w^*\), for all \(t \in {\mathcal {K}}(i,s)\) there exists \((t,a,s^\prime ) \in R\). Thus, we have \(a \in Ac(i,s)\). \(\square \)
These three propositions are properties derived from Definition 9. The first shows that opportunistic behavior results in value opposition for the agents involved; the second tells us that the two agents evaluate the transition based on different value systems; the third indicates the asymmetric knowledge that agent i exploits when behaving opportunistically.
Example 2
5 Reasoning about opportunistic propensity
In this section, we characterize the situations where agents are likely to perform opportunistic behavior and the contexts where opportunism cannot occur, based on our decision-making framework and our definition of opportunism. As system designers, we usually have no access to agents' internals and are thus not aware of agents' value systems. However, we may still have a set of candidate value systems to consider. We are cautious that an agent will act opportunistically toward another agent if it has the ability and the desire to do so given two possible value systems for the agents. In other words, we assume the worst case when reasoning about opportunistic propensity.
5.1 Having opportunism
Agents will perform opportunistic behavior when they have both the ability and the desire to do so. The ability to perform opportunistic behavior can be interpreted through its precondition: it can be performed whenever its precondition is fulfilled. Agents have the desire to perform opportunistic behavior whenever it is a rational alternative.
There are also relations between agents' ability and desire to perform an action. As rational agents, they first consider what actions they can perform given the limited knowledge they have about the state, and then choose the action that may maximize their utilities based on that partial knowledge. This practical reasoning from decision theory can also be applied to reasoning about opportunistic propensity. Given the asymmetric knowledge an agent has, several (possibly opportunistic) actions are available to him, and he may choose to perform any action that is a rational alternative, regardless of the result for the other agents. Based on this understanding, we have the following theorem, which characterizes agents' opportunistic propensity:
Theorem 1
 1.
\(\forall t \in {\mathcal {K}}(i,s): {\mathcal {M}},t \models {\text {promoted}}(v^*,a) \wedge {\text {demoted}}(w^*,a)\), \(\exists t \in {\mathcal {K}}(j,s): {\mathcal {M}},t \models \lnot ({\text {promoted}}(v^*,a) \wedge {\text {demoted}}(w^*,a))\), where \(v^*={\mathrm{highest(i,s,s\langle a \rangle )}}\) and \(w^*={\mathrm{highest(j,s,s\langle a \rangle )}}\);
 2.
\(s \prec _i s\langle a \rangle \) and \(s \succ _j s\langle a \rangle \).
 3.
\(\lnot \exists a^\prime \in Ac(i,s): a \not = a^\prime \) and \(a^\prime \) dominates a.
Proof
Forwards: if action a is opportunistic behavior, statement 1 follows immediately from the definition of knowledge set. Because action a is among agent i's rational alternatives in state s (\(a \in a^*_i(s)\)), by Definition 7 action a is not dominated by any action in Ac(i, s). Also, because action a is opportunistic, by Proposition 5 it results in promoting agent i's value but demoting agent j's value (\(s \prec _i s\langle a \rangle \) and \(s \succ _j s\langle a \rangle \)). Backwards: statement 1 means that there is knowledge asymmetry between agent i and agent j about the formula \({\text {promoted}}(v^*,a) \wedge {\text {demoted}}(w^*,a)\), so the precondition of action a is satisfied and agent i can perform it. Moreover, by statement 2, action a promotes agent i's value but demotes agent j's value, so action a is opportunistic behavior. By statement 3, action a is not dominated by any action in Ac(i, s), so performing it is a rational alternative for agent i in state s. \(\square \)
Given an opportunistic behavior a, in order to predict its performance, we first check the asymmetric knowledge that agent i has for enabling its performance. Based on agent i's and agent j's value systems, we also check that it is not dominated by any action in Ac(i, s) and that its performance promotes agent i's value but demotes agent j's value. It is important to stress that Theorem 1 does not state that an agent will certainly perform opportunistic behavior if the three statements are satisfied. Rather, it states that opportunism is likely to happen because it is one of the agent's rational alternatives: the agent will perform one action, which might be opportunistic behavior, from his rational alternatives.
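The three conditions of Theorem 1 can be combined into a single propensity check. The encoding below is a hypothetical sketch: knowledge sets are plain sets of states, preferences are integer ranks, and dominance is again collapsed into ranks over actions.

```python
def may_be_opportunistic(a, Ki, Kj, effect_holds, pref_i, pref_j, rank_i, actions):
    # 1. knowledge asymmetry about the effect of a
    cond1 = Ki <= effect_holds and any(t not in effect_holds for t in Kj)
    # 2. opposite value changes: s ≺_i s<a> and s ≻_j s<a>
    cond2 = pref_i["s"] < pref_i[f"s<{a}>"] and pref_j["s"] > pref_j[f"s<{a}>"]
    # 3. a is not dominated among i's subjectively available actions
    cond3 = not any(b != a and rank_i[b] > rank_i[a] for b in actions)
    return cond1 and cond2 and cond3

propensity = may_be_opportunistic(
    "a", Ki={"s"}, Kj={"s", "t"}, effect_holds={"s"},
    pref_i={"s": 0, "s<a>": 1}, pref_j={"s": 1, "s<a>": 0},
    rank_i={"a": 1, "sta": 0}, actions={"a", "sta"})
print(propensity)  # True
```

A `True` result only signals that a is one of agent i's rational alternatives whose performance would be opportunistic, in line with the caveat above.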
5.2 Not having opportunism
As Theorem 1 shows, we need considerable information about the system to predict opportunism, and it might be difficult to obtain all of it. Fortunately, in some cases it is already sufficient to know that opportunism cannot occur. Consider monitoring as an example: if we already know in which contexts agents cannot perform opportunistic behavior, there is no need to set up any monitoring mechanisms for opportunism in those contexts. The following propositions characterize the contexts where there is no opportunism:
Proposition 8
Proof
When \({\mathcal {K}}(i,s) = {\mathcal {K}}(j,s)\) holds, agent i and agent j have the same knowledge. In this context, statement 1 of Theorem 1 is not satisfied, so action a is not opportunistic behavior. \(\square \)
Proposition 9
Proof
When \(V_i = V_j\) holds, agent i and agent j have the same value system. In this case, the agents' values do not change in opposite directions, so statement 2 of Theorem 1 is not satisfied and action a is not opportunistic behavior. \(\square \)
The above two propositions show that opportunism cannot occur when there is no knowledge asymmetry between agents or when they share the same value system. Proposition 6 established that the two agents involved in opportunism have different value systems. Together with Propositions 8 and 9, it may look as if, once two agents have different value systems and knowledge asymmetry about the value changes, one agent will perform opportunistic behavior toward the other. Now let us go back to the example of selling a broken cup: the buyer's value gets demoted along the state transition, because he wants a good cup for use, which he ultimately does not get. Suppose instead that the buyer only cares about appearance in the deal: as we show in Fig. 4, the buyer knows it is a pretty cup before he buys it, denoted as pc, and he gets a pretty cup (possibly not usable) after the seller sells it. In this case, the behavior performed by the seller will not be seen as opportunistic. From this variation, we notice that sometimes an action might not be seen as opportunistic even though the agents involved have different value systems, because the two value systems are compatible rather than conflicting. This brings us to the notion of compatibility. Intuitively, compatibility describes a state in which two or more things are able to exist or work together without problems or conflict. We now propose the notion of compatibility of value systems with respect to a state transition.
Definition 10
(Compatibility of value systems) Given a multiagent system \({\mathcal {M}}\), a state transition \((s,a,s^\prime )\) and two value systems \(V_i\) and \(V_j\) (\(V_i \not = V_j\)), the two value systems are compatible with respect to transition \((s,a,s^\prime )\) if and only if \({\mathcal {M}},s \models \lnot ({\text {promoted}}(v^*,a) \wedge {\text {demoted}}(w^*,a))\), where \(v^*={\mathrm{highest(i,s,s^\prime )}}\) and \(w^*={\mathrm{highest(j,s,s^\prime )}}\).
From this definition, \(s \prec _i s^\prime \) and \(s \succ _j s^\prime \) do not hold at the same time, which means that the values of the two agents do not change in opposite directions (one promoted and the other demoted) along a transition if their value systems are compatible with respect to it. We can now relate the notion of compatibility of value systems to predicting opportunism. The following proposition characterizes another context where opportunistic behavior will not occur:
Proposition 10
Proof
This proposition holds because two value systems that are compatible with respect to transition \((s,a,s^\prime )\) cannot lead to one agent's value getting promoted while the other agent's value gets demoted (\(s \prec _i s^\prime \) and \(s \succ _j s^\prime \)). By Theorem 1, it follows that action a will not be opportunistic behavior. \(\square \)
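Definition 10 can be sketched with the broken-cup variant above. The value names (q for quality, pc for appearance, profit) and the change-set encoding are hypothetical illustrations.

```python
def highest(value_system, changes):
    """Most preferred value (value_system is ascending) whose truth changes."""
    return next((v for v in reversed(value_system) if v in changes), None)

def compatible(Vi, Vj, changes):
    """Definition 10: not (promoted(v*, a) and demoted(w*, a)) for the transition."""
    v_star, w_star = highest(Vi, changes), highest(Vj, changes)
    return not (changes.get(v_star) == "promoted" and changes.get(w_star) == "demoted")

# Broken-cup variant: the buyer's top value is appearance (pc), which is promoted.
V_seller, V_buyer = ["q", "pc", "profit"], ["q", "profit", "pc"]
changes = {"profit": "promoted", "pc": "promoted"}  # seller gains, cup looks pretty
print(compatible(V_seller, V_buyer, changes))  # True: no value opposition
```

With a buyer who ranks usable quality highest instead, the same transition would produce value opposition and the check would fail, matching Proposition 10.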
5.3 Computational complexity
PREDICTING OPPORTUNISM

Given: a multiagent system \({\mathcal {M}}\).
Question: does there exist opportunistic behavior between agents in \({\mathcal {M}}\)?
Theorem 2
Given a multiagent system \({\mathcal {M}}\), the problem of whether there exists opportunistic behavior between agents in \({\mathcal {M}}\) can be verified in \(O(nmk^2)\) time, where n is the number of transitions, m is the maximum number of available actions in any given state, and k is the maximum size of an S5 equivalence class.
Proof
In order to prove this, we need an algorithm that solves the decision problem in polynomial time. We design Algorithm 1 for verifying opportunistic behavior in a multiagent system \({\mathcal {M}}\) based on Theorem 1. The algorithm loops through all the possible transitions in the system, which has complexity O(n), where \(n=|{\mathcal {R}}|\). Notice that transitions are executed by hypothetical agents, meaning that the value systems we consider for a transition are assumed to be known once the transition is given. For each transition, the algorithm verifies the statements listed in Theorem 1 one by one. Lines 21–24 verify that there is no action \(a^\prime \) that dominates action a. Based on the definition of dominance between actions, the algorithm has to compare \({\mathcal {K}}(i,s\langle a \rangle )\) with \({\mathcal {K}}(i,s \langle a^\prime \rangle )\) for all \(a^\prime \) in Ac(i, s). If for all \(s^\prime \in {\mathcal {K}}(i,s\langle a \rangle )\) and for all \(s^{\prime \prime } \in {\mathcal {K}}(i,s\langle a^\prime \rangle )\) we have \(s^\prime \prec s^{\prime \prime }\), then action a is dominated by action \(a^\prime \). Hence, the complexity of executing lines 21–24 is \(O(mk^2)\), where \(m=|Ac(i,s)|\) and \(k=|{\mathcal {K}}(i,s)|\). The computational complexity of the whole algorithm is therefore \(O(nmk^2)\), which implies that Algorithm 1 can check whether there exists opportunistic behavior between agents in a given multiagent system in polynomial time. \(\square \)
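The structure of this argument can be sketched as follows. This is not the paper's Algorithm 1 verbatim, but a hypothetical skeleton with the same loop structure: an outer loop over transitions (O(n)) and an inner dominance test that compares knowledge sets pairwise (O(m·k²)); conditions 1 and 2 are passed in as callbacks here for brevity.

```python
def dominated(a, actions, K_after, prefers):
    """a is dominated by some a': every state i considers possible after a is
    strictly worse than every state considered possible after a' (k² comparisons)."""
    return any(all(prefers(x, y) for x in K_after[a] for y in K_after[b])
               for b in actions if b != a)

def exists_opportunism(transitions, know_asym, opposition, actions, K_after, prefers):
    for (i, j, s, a) in transitions:                      # O(n) transitions
        if (know_asym(i, j, s, a)                         # condition 1
                and opposition(i, j, s, a)                # condition 2
                and not dominated(a, actions[(i, s)], K_after, prefers)):  # O(m·k²)
            return True
    return False

rank = {"s0": 0, "s1": 1}
found = exists_opportunism(
    transitions=[("i", "j", "s", "a")],
    know_asym=lambda *args: True,       # assume condition 1 holds in the toy model
    opposition=lambda *args: True,      # assume condition 2 holds in the toy model
    actions={("i", "s"): {"a", "sta"}},
    K_after={"a": {"s1"}, "sta": {"s0"}},
    prefers=lambda x, y: rank[x] < rank[y])
print(found)  # True
```

Multiplying the two loop bounds gives the \(O(nmk^2)\) total claimed by Theorem 2.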
In this section, we specified the situations where agents are likely to perform opportunistic behavior and characterized the contexts where opportunism cannot happen. This information is essential not only for system designers to identify opportunistic propensity, but also for an agent deciding whether to participate in the system given his knowledge about the system and his value system, as his behavior might be regarded as opportunistic. Finally, we proved the computational complexity of predicting opportunism given a multiagent system.
6 Discussion
From the definition of the function \({\text {highest}}\), we know that agent i only cares about the value change that he most prefers and ignores other value changes when defining his state preference. Hence, if we interpret value promotion as happiness and value demotion as sadness, this approach can be seen as weighing the agent's happiness against his sadness between the states: he prefers state \(s^\prime \) over state s because his most preferred value gets promoted, so the happiness he gains exceeds the sadness of being in state \(s^\prime \) instead of state s. When talking about actions, \(s \prec _i s \langle a \rangle \) for instance, because agent i's most preferred value gets promoted when going from state s to state \(s\langle a \rangle \), we can say that he feels more happiness than sadness from performing action a (evidently \(a \not = sta\)) instead of doing nothing. This interpretation matters greatly for the design of mechanisms for eliminating opportunism: if we want to make it non-optimal for an agent to be opportunistic, the sadness he gets from it must outweigh the happiness, which implies that the value change the agent most cares about must be a demotion.
7 Related work
In order to investigate the interaction between different types of agents, agents are designed to be egoistic or altruistic depending on whether their internal decision processes are non-cooperative or cooperative. For example, [18] experiments on the iterated prisoner's dilemma with a society of agents that are egoistic, reciprocating or altruistic. Golle et al. [19] design incentive mechanisms for sharing with a population consisting of altruistic and egoistic agents, and [20] identifies egoistic or altruistic parties in terms of trust in open systems. In this paper, we presented agents' decision-making based on their value systems, which might rank their own value and others' value differently. Since opportunistic agents try to promote their own value as much as possible while ignoring other agents' values, they can be categorized as egoistic agents.
The technical framework we used in this paper is a transition system extended with value systems. As standards for specifying preferences, people usually use goals rather than values in logic-based formalizations (e.g. [21, 22]) and utilities in decision theory and game theory (e.g. [23, 24]). Only some work in the area of argumentation reasons about agents' preferences and decision making by values (e.g. [25, 26, 27]). Goals are concrete and must be specified with time, place and objects, while values are relatively stable and not limited to a specific situation. Since state transitions are caused by the performance of actions, we can evaluate actions by whether our values are promoted or demoted in the state transition. For representing agents' evaluation of states, Keeney and Raiffa proposed Multi-Attribute Utility Theory (MAUT), in which states are described in terms of a set of attributes and the utility of a state is the sum of the scores on each attribute based on the agent's value system [28]. Clearly, not everything can be evaluated with numbers, which is one of the reasons to consider value systems as an alternative. Bench-Capon et al. [25] already pointed out that utility-based decision mechanisms in game theory cannot faithfully represent agents' decision making. A value system is like a box that allows us to define its content as needed. In this paper, we use values and value systems as the basis for agents' choices: a value is modeled as a formula in our language and a value system is a total order over a set of values. Instead of calculating the utility of states, agents specify their preferences over states by evaluating the truth value of the state property that they most care about.
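The contrast drawn here can be made concrete with a small sketch. The attributes, weights and value order below are hypothetical: MAUT aggregates weighted scores into a number, while the value-system approach only consults the truth of the most preferred value whose truth changes.

```python
def maut_utility(state, weights):
    """MAUT-style additive utility: sum of weights of the attributes that hold."""
    return sum(weights[attr] for attr, holds in state.items() if holds)

def value_system_prefers(s, s_prime, value_system):
    """Prefer s' over s iff the most preferred value that changes becomes true in s'."""
    for v in reversed(value_system):          # most preferred first
        if s[v] != s_prime[v]:
            return s_prime[v]
    return False

s       = {"u": True,  "v": False, "w": False}
s_prime = {"u": False, "v": False, "w": True}

print(maut_utility(s_prime, {"u": 1, "v": 2, "w": 5}))    # 5
print(value_system_prefers(s, s_prime, ["u", "v", "w"]))  # True: w is promoted
```

The value-system comparison needs no numeric scores at all, which is exactly the point made against purely utility-based mechanisms.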
Our decision theory is extended with knowledge and value systems, which correspond to concepts from game theory [29]. In game theory, agents can be situated in a game that is not fully observable; hence, it is natural to study agents' decision-making by combining game theory and epistemic logic. The notion of an information set is introduced to represent the states that the agent cannot distinguish [17]. In this paper, we use a similar concept, the knowledge set, to represent the set of states that the agent considers possible. On top of this representation of uncertainty, we use the notion of dominance to compare two actions: a dominated action is one that is always bad to perform regardless of the uncertainty about the system, an approach that bridges to (non-)dominated strategies in game theory. In particular, the concept of rational alternatives is closely related to the concept of weak rationality as defined in epistemic game theory. As specified in [30, 31, 32], players may not know that their action is best, but they can know that there is no alternative action which they know to be better, given their limited knowledge about the current state. Both concepts represent a set of actions that are not dominated by other actions. These conceptual similarities suggest that we can apply techniques from game theory to enrich the existing decision theory and enhance the reasoning capabilities on agents' opportunistic propensity.
8 Conclusion and future work
The investigation of opportunism is still new in the area of multiagent systems. We ultimately aim at designing mechanisms to eliminate such selfish behavior from the system. In order to avoid over-estimating the occurrence of opportunism, so that monitoring and eliminating mechanisms can be put in the right place, we need to know in which contexts agents are likely to perform opportunistic behavior. In this paper, we argue that agents will behave opportunistically when they have both the ability and the desire to do so. With this idea, we developed a framework of multiagent systems to reason about agents' opportunistic propensity without considering normative issues. Agents in the system were assumed to have their own value systems; based on their value systems and incomplete knowledge about the state, agents choose one of their rational alternatives, which might be opportunistic behavior. With our framework and our definition of opportunism, we characterized the situations where agents are (not) likely to perform opportunistic behavior and proved the computational complexity of predicting opportunism. The framework assumes that system designers are aware of agents' possible value systems: designers have no access to agents' internals, but they can model possible internals by value systems, which allows them to reason about the possibility of opportunism in the system. It is also important to stress that what we address in this paper is not what opportunism is, but whether we can specify under which circumstances such a phenomenon can occur. In other words, we lay the foundation for predicting whether a given system is desirable in the sense of not having opportunism. Certainly there are multiple ways to extend our work.
One interesting direction is to enrich our formalization of value systems over different sets of values; this enrichment might lead to a different notion of the compatibility of value systems and different results about opportunistic propensity. The assumption that value systems are common knowledge among the agents can also be relaxed. We have no doubt that there exist alternative solutions to deal with opportunism, such as monitoring and norm enforcement. However, we need a predictive model to reason about opportunistic propensity in order to better allocate those mechanisms, and because of this predictive nature it is a natural choice to use a logical framework to reason about hypothetical situations. We presented a basic logical framework to reason about opportunistic propensity without considering any social mechanisms. Future work can consider issues such as norms, reputation, warranties and contracts in combination with the ability and the desire of being opportunistic. Most importantly, this paper sets up a basic framework for designing mechanisms to eliminate opportunism.
Acknowledgements
The research is supported by China Scholarship Council. We would like to thank Allan van Hulst and anonymous reviewers for their helpful comments.
References
 1. Williamson, O. (1983). Markets and hierarchies: Analysis and antitrust implications. A study in the economics of internal organization. New York: Free Press.
 2. Bachmann, R., & Akbar, Z. (Eds.). (2006). Handbook of trust research. Cheltenham: Edward Elgar Publishing.
 3. Conner, K. R., & Prahalad, C. K. (1996). A resource-based theory of the firm: Knowledge versus opportunism. Organization Science, 7(5), 477–501.
 4. Jiraporn, P., et al. (2008). Is earnings management opportunistic or beneficial? An agency theory perspective. International Review of Financial Analysis, 17(3), 622–634.
 5. Cabon-Dhersin, M.-L., & Ramani, S. V. (2007). Opportunism, trust and cooperation: A game theoretic approach with heterogeneous agents. Rationality and Society, 19(2), 203–228.
 6. Luo, J., & Meyer, J. J. (2016). A formal account of opportunism based on the situation calculus. AI & Society, 4, 1–16.
 7. Chen, C. C., Peng, M. W., & Saparito, P. A. (2002). Individualism, collectivism, and opportunism: A cultural perspective on transaction cost economics. Journal of Management, 28(4), 567–583.
 8. Luo, J., Meyer, J. J., & Knobbout, M. (2016). Monitoring opportunism in multiagent systems. In Coordination, organizations, institutions, and norms in agent systems XII (pp. 119–138). Cham: Springer.
 9. Luo, J., Knobbout, M., & Meyer, J.-J. (2018). Eliminating opportunism using an epistemic mechanism. In Proceedings of the 17th international conference on autonomous agents and multiagent systems (pp. 1450–1458). International Foundation for Autonomous Agents and Multiagent Systems.
 10. Moore, R. C. (1980). Reasoning about knowledge and action. Menlo Park: SRI International.
 11. Moore, R. C. (1984). A formal theory of knowledge and action. Technical report, DTIC Document.
 12. Scherl, R. B., & Levesque, H. J. (2003). Knowledge, action, and the frame problem. Artificial Intelligence, 144(1–2), 1–39.
 13. Shapiro, S., et al. (2000). Iterated belief change in the situation calculus. In KR.
 14. Poole, D. L., & Mackworth, A. K. (2010). Artificial intelligence: Foundations of computational agents. Cambridge: Cambridge University Press.
 15. Bulling, N., & Dastani, M. (2016). Norm-based mechanism design. Artificial Intelligence, 239, 97–142.
 16. Ågotnes, T., van der Hoek, W., & Wooldridge, M. (2007). Normative system games. In Proceedings of the 6th international joint conference on autonomous agents and multiagent systems (p. 129). ACM.
 17. Dixit, A. K., & Nalebuff, B. (2008). The art of strategy: A game theorist's guide to success in business & life. New York: W. W. Norton & Company.
 18. Bazzan, A. L. C., Bordini, R. H., & Campbell, J. A. (2002). Evolution of agents with moral sentiments in an iterated Prisoner's Dilemma exercise. In Game theory and decision theory in agent-based systems (pp. 43–64). Boston: Springer.
 19. Golle, P., et al. (2001). Incentives for sharing in peer-to-peer networks. In Electronic commerce (pp. 75–87). Berlin: Springer.
 20. Schillo, M., Funk, P., & Rovatsos, M. (2000). Using trust for detecting deceitful agents in artificial societies. Applied Artificial Intelligence, 14(8), 825–848.
 21. Cohen, P. R., & Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42(2–3), 213–261.
 22. Rao, A. S., & Georgeff, M. P. (1991). Modeling rational agents within a BDI-architecture. In KR 91 (pp. 473–484).
 23. Steele, K., & Stefánsson, H. O. (2016). Decision theory. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2016 edition). https://plato.stanford.edu/archives/win2016/entries/decisiontheory/.
 24. Von Neumann, J., & Morgenstern, O. (2007). Theory of games and economic behavior. Princeton: Princeton University Press.
 25. Bench-Capon, T., Atkinson, K., & McBurney, P. (2012). Using argumentation to model agent decision making in economic experiments. Autonomous Agents and Multi-Agent Systems, 25(1), 183–208.
 26. Van der Weide, T. (2011). Arguing to motivate decisions. Ph.D. thesis, Utrecht University.
 27. Pitt, J., & Artikis, A. (2015). The open agent society: Retrospective and prospective views. Artificial Intelligence and Law, 23(3), 241–270.
 28. Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value tradeoffs. Cambridge: Cambridge University Press.
 29. Myerson, R. (1991). Game theory: Analysis of conflict. Cambridge: Harvard University Press.
 30. Van Benthem, J. (2007). Rational dynamics and epistemic logic in games (Erratum). International Game Theory Review, 9(2), 377–409.
 31. Lorini, E., & Schwarzentruber, F. (2010). A modal logic of epistemic games. Games, 1(4), 478–526.
 32. Bonanno, G. (2008). A syntactic approach to rationality in games with ordinal payoffs. In Proceedings of LOFT (pp. 59–86).
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.