# A whitelist and blacklist-based co-evolutionary strategy for defending against multifarious trust attacks


## Abstract

With electronic commerce becoming increasingly popular, trust has become one of the main challenges in its development. Although various mechanisms have been adopted to guarantee trust between customers and sellers (or platforms), trust and reputation systems are still frequently attacked by deceptive, collusive, or strategic agents, and it is therefore difficult to keep these systems robust. It has been suggested that a combined usage of both trust and distrust propagation can lead to better results, yet little work has realized this insight successfully. Besides, existing studies either use a social network with trust/distrust information or use a single advisor list for evaluating all sellers, which leads to a lack of pertinence and inaccurate evaluations. This paper proposes a defense strategy called *WBCEA*, in which each buyer agent is modeled with two attributes (i.e., the trustworthy facet and the untrustworthy facet) and two lists (i.e., the whitelist and the blacklist). Based on the social network that is constructed and maintained according to its whitelist and blacklist, an honest buyer agent can find trustable buyers and evaluate the candidate sellers according to its own experience and the ratings of trustable buyers. Experiments are designed and implemented to verify the accuracy and robustness of this strategy. Results show that our strategy outperforms existing ones, especially when the majority of buyers in the electronic market are dishonest.

## Keywords

Trust and reputation systems · Attack defense · Whitelist and blacklist · Co-evolutionary algorithm

## 1 Introduction

As electronic commerce becomes increasingly popular, more and more people are involved, and trust has become one of the main challenges in its development. In multi-agent based electronic commerce, self-interested agents may be deceptive, collusive, or strategic. Unfair rating attacks (such as collusive unfair ratings, *Sybil*, *Camouflage*, *Whitewashing*, and discrimination attacks [7, 23]) from dishonest reviewers render reputation systems vulnerable and mislead buyers into transacting with dishonest sellers. Dishonest reviewers may also employ sophisticated attacking strategies (such as a combination of various unfair rating strategies) to avoid being detected. Many trust models have been proposed to cope with unfair/false ratings. However, these models are not completely robust against various strategic attacks, i.e., they have limitations in defending against certain kinds of strategic attacks. To address such problems, we design a new robust algorithm called *WBCEA* for improving intelligent agents' capabilities in accurately estimating the trustworthiness of sellers under various types of attacks, thereby further reducing purchase risk.

The main steps of *WBCEA* are as follows. First, based on historical ratings, the defending buyer agent evaluates the trustworthiness of the reviewers who rated the recommended sellers according to its own experience or the experiences of trustable buyers (the trustable buyers are evolved based on the buyer agent's whitelist and blacklist). Secondly, the defending buyer generates a list of the most trustworthy reviewers as advisors (based on their trustworthiness) for each recommended seller agent. Thirdly, it evaluates each seller's trustworthiness according to its own experience and the advisors' ratings, and selects the most trustworthy seller as its trading partner. Finally, the defending buyer agent updates its own whitelist and blacklist.

In contrast to existing strategies, the novel features of this strategy are as follows. First, considering that each buyer agent has both a trustworthy facet and an untrustworthy facet, *WBCEA* models both facets of each reviewer. Moreover, each buyer agent maintains two lists (i.e., a whitelist and a blacklist) to keep track of the most trustworthy and most untrustworthy advisors (i.e., reviewers), which are evolved according to its own experience. Secondly, based on the whitelist and blacklist, a customized optimal advisor list is generated for evaluating each recommended seller, which is similar to *PEALGA* but different from *MET* (which adopts one advisor list for evaluating all sellers). However, the agents in each optimal advisor list of *PEALGA* are selected from all the buyers in the system, while the agents in each optimal advisor list of *WBCEA* are selected from those buyers who have traded with the recommended seller; the latter advisor lists are therefore more targeted. Thirdly, *WBCEA* considers both trust and distrust information in the social network of buyers, which differs from *PEALGA*, which only considers trust information.

The rest of this paper is developed as follows. Section 2 reviews related literature. Section 3 gives a framework for multi-agent based electronic commerce platform. Section 4 illustrates the *WBCEA* strategy in detail. Section 5 verifies the performance of our approach using experiments. Section 6 concludes this paper with future work directions.

## 2 Literature review

The aim of this section is to review the models which are designed for detecting and defending against malicious agents. Since our work is based on a local (buyer’s) view, not on the global (electronic commerce platform) view, we only review models designed from buyers’ viewpoints. In general, the defending models can be divided into two categories, i.e., trust-based approaches, and the trust and distrust-based approaches. The following paragraphs illustrate these two kinds of models in detail.

It is widely agreed that trust means the confidence that one or many entities behave as expected [16]. Based on trust information, many defending models such as *BRS* [19], *iCLUB* [11, 12], *TRAVOS* [18], *ReferralChain* [20, 21], *Personalized* [22], *MET* [6], and *PEALGA* [5] have been designed. These models can effectively defend against some kinds of attacks. However, *BRS* becomes inefficient and *iCLUB* becomes unstable when the majority of buyers are dishonest, because both of them employ the "majority rule". When dishonest advisors adopt shifty attacks, *TRAVOS* does not work well because it assumes that each advisor's rating behavior is consistent. *ReferralChain* sets the initial trust of each new buyer (advisor) to 1, which gives dishonest advisors a chance to abuse the initial trust (i.e., *Whitewashing*). The *Personalized* model is vulnerable when buyers have insufficient experience with advisors and the majority of advisors are dishonest (i.e., a combination of *Sybil* and *Whitewashing*). *MET* evolves one advisor list, which is not necessarily suitable for estimating the trustworthiness of all sellers. *PEALGA* pre-evolves a customized advisor list for evaluating each candidate seller. However, this algorithm still only considers trust information between buyers.

Distrust is recognized to play an equally important role as trust [3]. Though some attack defending models such as *GBR* [13] and *Multi-faceted* [4] have been designed, the investigation of utilizing distrust is still in its infancy [14, 15]. *GBR* was proposed to combat web spam employing the "majority rule". Therefore, it has a strong bias towards seed pages if a small seed set is used; however, it is time-consuming to construct a large seed set manually, so researchers must trade off between the number of seed pages and time complexity. *Multi-faceted* considers both interpersonal and impersonal aspects, which may bring in redundant and even noisy information. Moreover, it is not particularly effective against most kinds of attacks, especially the *Sybil* and *Whitewashing* attacks. Both *GBR* and *Multi-faceted* generate only one advisor list for evaluating all sellers, which leads to problems such as a lack of pertinence or inaccurate evaluation.

As the links among users in a social network consist of trust as well as distrust connections, both types of information can be used in the design of defending strategies. However, existing models have shortcomings. For example, the performance of *GBR* depends on training data (i.e., the number of seed pages). Like *GBR*, our strategy *WBCEA* also relies on data, as its performance only stabilizes once enough transaction experience has been accumulated. However, we can use rolling, incrementally updated transaction and rating data as the input of our strategy, which is easier to acquire than the seed pages of *GBR*.

## 3 A framework for the multi-agent-based electronic commerce platform with a whitelist and blacklist mechanism

Symbols used in the framework and their meanings

| Symbol | Meaning |
|---|---|
| \(B\) | The set of buyers |
| \(S\) | The set of sellers |
| \(B_{s_{j}}^{H} \) | The set of buyers who have interacted with \(s_j\) |
| \(S_{b_{i}}^{H} \) | The set of sellers who have interacted with \(b_i\) |
| \(S^{candidate}\) | The set of sellers recommended by the search agent |
| \(TN_{b_{i}} \) | The social network of \(b_i\) |
| \(WL_{b_{i}} \) | The whitelist of \(b_i\) |
| \(BL_{b_{i}} \) | The blacklist of \(b_i\) |
| \(r_{b_{i} ,s_{j}} \) | The rating of \(b_i\) to \(s_j\) |
| \(B_{b_{i}}^{T} \) | The set of buyers who are trusted by \(b_i\) |
| \(B_{b_{i}}^{D} \) | The set of buyers who are distrusted by \(b_i\) |
| \(R_{b_{i} ,T} (b_{j} )\) | The trust value of \(b_j\) from \(b_i\)'s viewpoint |
| \(R_{b_{i} ,D} (b_{j} )\) | The distrust value of \(b_j\) from \(b_i\)'s viewpoint |
| \(STD_{b_{i}} (b_{j} )\) | The synthetic trustworthiness of \(b_j\) from \(b_i\)'s viewpoint |
| \(A_{b_{i}}^{s_{j}} \) | The best advisor list of \(b_i\) for seller \(s_j\) |

To explicitly define the research scope and background of this paper, we assume that the defense agents in the electronic commerce platform with the whitelist and blacklist mechanism follow the assumptions below.

**Assumption 1**

Buyer agents pay more attention to ratings given by advisors who hold opinions similar to their own. Inexperienced buyer agents often need to consider other buyer agents' advice. Similar to human experience, buyer agents are more inclined to refer to advisors with similar opinions, and believe that agents who often give positive/negative ratings belong to the same category.

**Assumption 2**

Buyer agents' acceptance of other reviewers' ratings decreases over time. The more recent the rating, the more accurately it reflects the current trustworthiness of the seller and the more significant it is for predicting the seller's reputation.

**Assumption 3**

We assume that the buyer agents are not competitive in general and are willing to share their whitelist and blacklist with others. This is quite common in major e-marketplaces and travel agent portals.

**Assumption 4**

We rule out the influence of price variations in the selection of the trading seller in this study, as we concentrate on the effect of trustworthiness computation in environments where the prices of the provided products or services are similar.

## 4 The whitelist and blacklist co-evolutionary strategy

Once the search agent returns the recommended sellers \(S^{candidate}\) and each seller's recent reviewers \(B_{s_{j}^{candidate}}^{H} \) to the honest buyer \(b_i\), the buyer will adopt the whitelist and blacklist co-evolutionary defending strategy (abbr. *WBCEA*) and implement the following steps.

- (1) Based on its whitelist and blacklist, the honest buyer \(b_i\) constructs its social network \(TN_{b_{i}} \). Then, through the propagation of trust and distrust in the social network \(TN_{b_{i}} \), buyer \(b_i\) tries to find the reviewers (i.e., buyers) \(B_{b_{i}}^{T} \) that it can trust (see Algorithm 1).
- (2) For each reviewer who rates the candidate seller \(s_{j}^{candidate} \) in \(B_{s_{j}^{candidate}}^{H} \): if the reviewer has traded with some sellers who also traded with buyer \(b_i\), then \(b_i\) can evaluate the reviewer's trustworthiness directly according to its own experience; otherwise, \(b_i\) will seek trustworthy buyers in \(B_{b_{i}}^{T} \) (obtained from Algorithm 1) for reference. Based on the synthesized trustworthiness of each reviewer, the defending buyer \(b_i\) further generates an optimal advisor list \(A_{b_{i}}^{s_{j}^{candidate}} \) for evaluating each recommended seller \(s_{j}^{candidate} \in S^{candidate}\) (see Algorithm 2).
- (3) The honest buyer \(b_i\) evaluates each candidate seller \(s_{j}^{candidate} \)'s trustworthiness according to its own experience and the optimal advisors' advice, based on the seller's reputation calculation algorithm SRCA (Algorithm 3).
- (4) The honest buyer \(b_i\) rates the selected seller after the transaction and updates its whitelist and blacklist according to this trading experience (see Algorithm 4).
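To make the control flow concrete, the following is a minimal, hypothetical Python skeleton of one decision round. The three helpers are deliberately trivial stand-ins for Algorithms 1, 2 and SRCA described in the sub-sections below, every name here is ours rather than the paper's, and step (4) (list updating) is omitted:

```python
# Hypothetical skeleton of one WBCEA decision round; all helper logic is a
# placeholder sketch, not the paper's actual algorithms.

def trusted_from_whitelist(buyer):
    # Stand-in for Algorithm 1: simply trust the whitelist members.
    return set(buyer["whitelist"])

def advisor_list_for(reviewers, trusted):
    # Stand-in for Algorithm 2: keep only trusted reviewers of this seller.
    return [r for r in reviewers if r in trusted]

def seller_score(seller, advisors, ratings):
    # Stand-in for SRCA: average the advisors' ratings of the seller.
    scores = [ratings[(a, seller)] for a in advisors if (a, seller) in ratings]
    return sum(scores) / len(scores) if scores else 0.5

def wbcea_round(buyer, candidate_sellers, reviewers_of, ratings):
    trusted = trusted_from_whitelist(buyer)                      # step (1)
    advisors = {s: advisor_list_for(reviewers_of[s], trusted)
                for s in candidate_sellers}                      # step (2)
    return max(candidate_sellers,                                # step (3)
               key=lambda s: seller_score(s, advisors[s], ratings))
```

The point of the skeleton is only the ordering: the trust network gates which reviewers become advisors, and advisors in turn drive seller selection.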

The following sub-sections illustrate these steps in detail.

### 4.1 The trust network construction algorithm

An agent’s trust network is constructed based on its social network. In this subsection, we first explain the concepts of social network, distance, and layer. And then, we illustrate the principle of finding trustable buyers through an example intuitively. Finally, the trust network construction algorithm is given.

**Definition 1**

The social network of a buyer is a network constructed based on its whitelist and blacklist. It is composed of vertexes and directed edges: each vertex represents a buyer agent, and each directed edge represents a trust or distrust relationship between the two connected agents.

For a buyer agent (e.g., \(b_i\)), all the members of its whitelist and blacklist are added to its social network by solid and dotted arrows respectively. We assume in this paper that buyer agents are willing to share their whitelists and blacklists with other buyer agents (Assumption 3). Note that the members of \(b_i\)'s whitelist and blacklist also have their own whitelists and blacklists. Hence, we can say that an agent trusts another agent if the latter is in the whitelist of the former, and that an agent distrusts another agent if the latter is in the blacklist of the former. Meanwhile, the six degrees of separation theory asserts that "any two persons in the world can be connected by at most six persons". Based on this theory, we construct a social network for an agent \(b_i\) by including all other agents that have at most 6 trust or distrust relations with \(b_i\). From an honest buyer's view, the trustworthiness or untrustworthiness of a buyer in its social network is determined by the distance from the honest buyer to that buyer along the trust/distrust chains. Example 1 explains the concepts of distance and layer.

*Example 1*

\(b_i\) is an honest buyer. \(b_i\)'s whitelist \(WL_{b_{i}} \) is {\(b_a\), \(b_b\), \(b_c\)}, \(b_a\)'s whitelist \(WL_{b_{a}} \) is {\(b_d\), \(b_e\)}, \(b_f\) is in \(b_b\)'s whitelist and in \(b_e\)'s blacklist simultaneously, \(b_g\) is in \(b_d\)'s whitelist and in \(b_e\)'s blacklist simultaneously, and \(b_h\) is in \(b_f\)'s whitelist and in \(b_c\)'s blacklist simultaneously. The whitelist or blacklist of each buyer not listed above is empty. Obviously, we can find a chain <\(b_i\), \(b_a\), \(b_e\), \(b_g\)> in Fig. 3a. According to the basics of graph theory, the distance from \(b_i\) to itself is zero, and the distance from \(b_i\) to \(b_a\) is 1. Therefore, the distance from \(b_i\) to \(b_g\) in the chain <\(b_i\), \(b_a\), \(b_e\), \(b_g\)> is 3. In the social network of \(b_i\), the agents that have equal distance from \(b_i\) are located in the same layer. In Fig. 3a, if \(b_i\) is located in the first layer, then \(b_a\), \(b_b\) and \(b_c\) are located in the second layer. Moreover, an agent may belong to multiple chains with the same or different distances (or layers). For example, \(b_f\) is located in the third layer of the chain <\(b_i\), \(b_b\), \(b_f\)> and in the fourth layer of the chain <\(b_i\), \(b_a\), \(b_e\), \(b_f\)> simultaneously. Similarly, \(b_g\) is located in the fourth layer of the chains <\(b_i\), \(b_a\), \(b_d\), \(b_g\)> and <\(b_i\), \(b_a\), \(b_e\), \(b_g\)> respectively.

Previous research has concluded that "*trust will be weakening in the chain of propagation*" and that "*in the case where an agent receives conflicting recommended trust, e.g. both trust and distrust, it needs some methods for combining these conflicting recommendations*". Based on these conclusions, we define the following rules to determine the trustworthiness or untrustworthiness of an agent in a social network. Because trust weakens along the propagation chain, the shorter the chain, the smaller the weakening effect; therefore, the "layer" (i.e., the distance to the honest agent) of agents is considered in these rules. Example 2 explains these two rules intuitively.

- Rule 1:
If an agent is trusted and distrusted by (i.e., in the whitelist and the blacklist of) different buyers who are located in the same layer simultaneously, this agent’s trustworthiness is considered to be uncertain.

- Rule 2:
If an agent is trusted and distrusted by (i.e., in the whitelist and the blacklist of) different agents located in different layers, the agent's trustworthiness is judged by the trust/distrust of its upper-layer agents. If the upper-layer agent whose layer is smallest trusts this agent, the agent is considered trustworthy; otherwise, it is considered untrustworthy.

*Example 2*

In Fig. 3a, \(b_g\) is trusted by \(b_d\) and distrusted by \(b_e\) simultaneously, and \(b_d\) and \(b_e\) are located in the same layer (i.e., the third layer). Therefore, \(b_g\) is located in the same layer of the chains <\(b_i\), \(b_a\), \(b_d\), \(b_g\)> and <\(b_i\), \(b_a\), \(b_e\), \(b_g\)>. Hence, according to Rule 1, \(b_i\) cannot judge with certainty whether \(b_g\) is trustworthy or not. For another example, in Fig. 3a, \(b_f\) is located in the third layer and the fourth layer of the chains <\(b_i\), \(b_b\), \(b_f\)> and <\(b_i\), \(b_a\), \(b_e\), \(b_f\)> respectively. Moreover, \(b_b\) trusts \(b_f\), while \(b_e\) distrusts \(b_f\). As \(b_f\)'s upper-layer agents, \(b_b\) and \(b_e\) are in layers 2 and 3 respectively. Since \(b_b\)'s layer is the smallest among \(b_f\)'s upper-layer agents, according to Rule 2, \(b_f\) is considered trustworthy based on \(b_b\)'s trust.

Algorithm 1 gives the trust network construction algorithm. We define two queues, \(Q_t\) and \(Q_d\): \(Q_t\) is used to temporarily store agents who are in whitelists, and \(Q_d\) is used to temporarily store agents who are in blacklists. Moreover, we define a variable named *depthLimit* to represent the upper bound of the chain length (i.e., 6) in the resulting trust network. The main steps of this algorithm are as follows. First, the queues \(Q_t\) and \(Q_d\) are initialized (see step (1) in Algorithm 1). Secondly, we find the trustworthy buyers \(B_{b_{i}}^{T} \) for \(b_i\) (see steps (3)-(5) in Algorithm 1). Thirdly, we find the buyers that are not trustworthy (denoted as \(B_{b_{i}}^{D} \)) for buyer \(b_i\) (see steps (6)-(8) in Algorithm 1). Finally, the queues \(Q_t\) and \(Q_d\) are updated (see steps (9)-(11) in Algorithm 1).

If we execute Algorithm 1 over agent \(b_i\)'s social network shown in Fig. 3a, we obtain the trust network shown in Fig. 3b. From this figure, we can see that \(b_a\), \(b_b\), \(b_c\), \(b_d\), \(b_e\), and \(b_f\) are the agents trusted by \(b_i\). In comparison, according to the trust network construction methods given in previous work [5, 6, 20, 21], which do not consider "distrust" labels, all the agents \(b_a\), \(b_b\), \(b_c\), \(b_d\), \(b_e\), \(b_f\), \(b_g\) and \(b_h\) would be added to the trust network. Therefore, Algorithm 1 further purifies the trust network using "distrust" information.
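A minimal sketch of this layered construction can be written as a breadth-first traversal over the shared whitelists and blacklists, applying Rules 1 and 2 above. The paper's Algorithm 1 uses two queues; for brevity we fold both into one BFS frontier and, as a simplification, also expand agents reached via blacklists. The function name and data layout are ours:

```python
from collections import deque

def build_trust_network(root, whitelist, blacklist, depth_limit=6):
    """Return (trusted, distrusted) sets for `root`, applying Rules 1-2:
    an agent's status is decided by its smallest-layer referrer; a tie
    between trust and distrust at the same layer leaves it undecided."""
    # Record, for each reachable agent, the smallest layer at which it is
    # trusted and at which it is distrusted.  BFS visits agents in order of
    # nondecreasing depth, so setdefault() captures the minimum layer.
    trust_layer, distrust_layer = {}, {}
    frontier = deque([(root, 0)])
    seen = {root}
    while frontier:
        agent, depth = frontier.popleft()
        if depth >= depth_limit:          # six-degrees-of-separation bound
            continue
        for nxt in whitelist.get(agent, []):
            trust_layer.setdefault(nxt, depth + 1)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
        for nxt in blacklist.get(agent, []):
            distrust_layer.setdefault(nxt, depth + 1)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    trusted, distrusted = set(), set()
    for agent in seen - {root}:
        t = trust_layer.get(agent, float("inf"))
        d = distrust_layer.get(agent, float("inf"))
        if t < d:
            trusted.add(agent)        # Rule 2: nearest referrer trusts it
        elif d < t:
            distrusted.add(agent)     # Rule 2: nearest referrer distrusts it
        # t == d: Rule 1 -- conflicting same-layer referrers, undecided
    return trusted, distrusted
```

Run on the network of Example 1, this sketch reproduces the trust set of Fig. 3b ({\(b_a\), \(b_b\), \(b_c\), \(b_d\), \(b_e\), \(b_f\)}), marks \(b_h\) distrusted, and leaves \(b_g\) undecided per Rule 1.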

### 4.2 The optimal advisor lists generation algorithm

Similar to the idea of pre-evolving a customized list (i.e., an *optimal advisor list*) for evaluating a recommended seller that we proposed in literature [5], this paper also tries to find an *optimal advisor list* for each seller. Algorithm 2 shows the main idea of the optimal advisor lists generation algorithm based on the whitelist and blacklist, which is composed of four steps. First, buyer \(b_i\) calculates pair-wise similarities between itself and any buyer that rated \(S^{candidate}\) (e.g., \(b_k\)) according to equation (1) or (2) (see steps (3)-(6) in Algorithm 2). Secondly, psychology research [10] showed that people have both a trustworthy aspect and an untrustworthy aspect; based on this result, we randomly set two initial values (denoted as \(R_{b_{i} ,T} (b_{k} )\in [0,1]\) and \(R_{b_{i} ,D} (b_{k} )\in [0,1]\) respectively) for each buyer agent to denote these two aspects, and buyer \(b_i\) updates \(b_k\)'s trustworthy aspect and untrustworthy aspect according to equation (3) (see step (7) in Algorithm 2). Thirdly, considering the trustworthy aspect and the untrustworthy aspect simultaneously, the synthesized trustworthiness of \(b_k\) is calculated according to equation (4) (see step (8) in Algorithm 2). Finally, buyer \(b_i\) updates and generates the optimal advisor lists for evaluating the recommended sellers (see step (10) in Algorithm 2).

The following definitions illustrate the formulas used in this algorithm. It should be noted that we do not consider \(b_k\)'s layer (or distance) in the social network in Definitions 2-4, for two reasons. First, for any buyer \(b_k\) in \(B_{s_{j}^{candidate}}^{H} \) but not in \(B_{b_{i}}^{T} \), there is no relationship between \(b_k\) and \(b_i\), let alone layers; that is, layers do not exist in all similarity calculation cases. Second, the similarity of a trustor's and a trustee's viewpoints affects their trust, but the reverse does not necessarily hold. For example, the degree to which a trustor trusts a trustee is affected by factors such as their familiarity, their interaction frequency, the consistency/similarity of their viewpoints, the number of common friends, and so on. However, the similarity of two persons' behaviors or viewpoints is not necessarily affected by their trust; in real life, a person's behavior or viewpoints may be similar to those of distant strangers or even his/her competitors and enemies.

**Definition 2**

If agent \(b_i\) and a reviewer agent \(b_k\) once traded with the same sellers, the similarity between them is determined by the ratings they gave these sellers and the average values of their ratings. Equation (1) defines the calculation method.

Here, \(S_{b_{i} ,b_{k}} \) denotes the set of sellers that traded with \(b_i\) as well as \(b_k\), \(r_{b_{i} ,s_{j}} \) is the rating that \(b_i\) gave \(s_j\) (\(s_{j} \in S_{b_{i} ,b_{k}} \)), \(\overline {r_{b_{i}}} \) represents the average of the ratings that \(b_i\) gave its trading partners, \(r_{b_{k} ,s_{j}} \) represents the rating that agent \(b_k\) gave \(s_j\) (\(s_{j} \in S_{b_{i} ,b_{k}} \)), and \(\overline {r_{b_{k}}} \) represents the average of the ratings that \(b_k\) gave its trading partners.
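Equation (1) itself is not reproduced in this text (it appears as an image in the source). Given the variables above (ratings on commonly traded sellers, centered by each buyer's overall average rating), a Pearson-correlation-style similarity is a natural reading; the following sketch rests on that assumption:

```python
from math import sqrt

def sim1(ratings_i, ratings_k):
    """Assumed Pearson-style reading of equation (1): correlation of the
    two buyers' ratings over the sellers they both rated, centered by each
    buyer's average rating over ALL of its trading partners.
    `ratings_i`, `ratings_k`: dicts mapping seller id -> rating."""
    common = set(ratings_i) & set(ratings_k)
    if not common:
        return 0.0
    mean_i = sum(ratings_i.values()) / len(ratings_i)
    mean_k = sum(ratings_k.values()) / len(ratings_k)
    num = sum((ratings_i[s] - mean_i) * (ratings_k[s] - mean_k) for s in common)
    den = (sqrt(sum((ratings_i[s] - mean_i) ** 2 for s in common)) *
           sqrt(sum((ratings_k[s] - mean_k) ** 2 for s in common)))
    return num / den if den else 0.0
```

Two buyers who consistently rate shared sellers above or below their own averages in the same direction come out maximally similar under this reading.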

**Definition 3**

If agent \(b_i\) and a reviewer agent \(b_k\) did not trade with any common seller in their histories, the similarity between them is determined by the characteristics of \(b_k\)'s ratings and the ratings given by the buyers trusted by \(b_i\). Equation (2) defines the calculation method.

Here, \(S_{b_{k}}^{H} \) denotes the set of sellers that traded with \(b_k\), \(r_{b_{k} ,s_{j}} \) is the rating that \(b_k\) gave \(s_{j} (s_{j} \in S_{b_{k}}^{H} )\), \(\overline {r_{b_{k}}} \) is the average of the ratings that agent \(b_k\) gave its trading partners, \(\overline {r_{s_{j}}} \) is the average of the ratings that \(B_{b_{i}}^{T} \) (i.e., the set of agents trusted by agent \(b_i\)) gave \(s_j\), and \(\overline r \) is the average of the ratings that all the members of \(B_{b_{i}}^{T} \) gave the sellers they traded with.

**Definition 4**

From agent \(b_i\)'s viewpoint, the trustworthy aspect and the untrustworthy aspect of agent \(b_k\) can be updated based on their similarity. Equations (3) and (4) define the updating methods respectively.

Here, \(R_{b_{i} ,T} (b_{k} )\in [0,1]\) represents the trustworthy aspect of \(b_k\) from \(b_i\)'s viewpoint, and \(R_{b_{i} ,D} (b_{k} )\in [0,1]\) represents the untrustworthy aspect of \(b_k\) from \(b_i\)'s viewpoint; \(sim(b_{i} ,b_{k} )\in [0,1]\) is calculated according to equation (4) (i.e., the similarity between \(b_i\) and \(b_k\) is the normalized value of \(sim_{1} (b_{i} ,b_{k} )\) if \(b_i\) and \(b_k\) have traded with the same sellers; otherwise, it is the normalized value of \(sim_{2} (b_{i} ,b_{k} )\)); \(\omega \in (0,1)\) is a classification factor that divides the growth of the trustworthy and untrustworthy aspects into positive, negative, and zero; and \(\beta_{1}\) and \(\beta_{2}\) are factors that control the increment speed of trust and distrust respectively.

In the experiments, \(\omega\) is set to 0.5. This setting ensures that the trustworthy and untrustworthy aspects of \(b_k\) remain unchanged if the similarity between \(b_i\) and \(b_k\) is 0.5 (i.e., their similarity is neither obviously large nor obviously small). If the similarity between \(b_i\) and \(b_k\) is larger than 0.5 (i.e., their similarity is obviously large), the trustworthy/untrustworthy aspect of \(b_k\) increases/decreases by a certain amount; otherwise, the trustworthy/untrustworthy aspect of \(b_k\) decreases/increases by a certain amount. It is also important to note that the factors \(\beta_{1}\), \(\beta_{2}\) should satisfy the condition 0 < \(\beta_{2}\) < \(\beta_{1}\) < 1. The constraint \(\beta_{2}\) < \(\beta_{1}\) ensures that the speed of increase in the trustworthy aspect is less than that of the untrustworthy aspect, and that the speed of decrease in the trustworthy aspect is greater than that of the untrustworthy aspect. This constraint is consistent with the research result that "*people devote more attention to negative information than to positive information*" [17].
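Equation (3) is likewise not reproduced in this text, but the behavior described above (growth of the trustworthy aspect when \(sim > \omega\), decay when \(sim < \omega\), step sizes scaled by \(\beta_{1}\) and \(\beta_{2}\)) can be sketched as follows. The exact functional form, and in particular the asymmetry between \(\beta_{1}\) and \(\beta_{2}\), are our simplification, not the paper's equation:

```python
def update_aspects(r_t, r_d, sim, omega=0.5, beta1=0.9, beta2=0.3):
    """One plausible reading of equation (3): the trustworthy aspect grows
    (and the untrustworthy aspect shrinks) when similarity exceeds omega,
    and vice versa.  beta1/beta2 scale the two step sizes; both aspects
    are clamped to [0, 1]."""
    delta = sim - omega                      # positive, negative, or zero
    r_t = min(1.0, max(0.0, r_t + beta1 * delta))
    r_d = min(1.0, max(0.0, r_d - beta2 * delta))
    return r_t, r_d
```

With \(sim = \omega\) both aspects are left untouched, matching the fixed-point behavior described above for \(\omega = 0.5\).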

In formula (5), two thresholds \(\theta_{1}\) (0 < \(\theta_{1}\) < 1) and \(\theta_{2}\) (0 < \(\theta_{2}\) < 1) are introduced based on the human perception that a person is trustable when his/her trustworthy facet is greatly larger than his/her untrustworthy facet, and is not trustable when his/her trustworthy facet is even slightly smaller than his/her untrustworthy facet [10]. Therefore, \(\theta_{1}\) should be larger than \(\theta_{2}\). In the experiments, \(\theta_{1}\) and \(\theta_{2}\) are assigned 0.8 and 0.2 respectively. That is, if a person's trustworthy aspect is larger than his/her untrustworthy aspect by more than 0.8 (i.e., \(R_{b_{i} ,T} (b_{k} )-R_{b_{i} ,D} (b_{k} )>\theta _{1} \), \(\theta_{1} = 0.8\)), his/her synthesized trustworthiness is taken to be 1. If a person's trustworthy aspect is smaller than his/her untrustworthy aspect by more than 0.2 (i.e., \(R_{b_{i} ,T} (b_{k} )-R_{b_{i} ,D} (b_{k} )<-\theta _{2} \), \(\theta_{2} = 0.2\)), his/her synthesized trustworthiness is taken to be 0. Otherwise, the synthesized trustworthiness is calculated according to the formula \(\frac {1}{\theta _{1} +\theta _{2}} (R_{b_{i} ,T} (b_{k} )-R_{b_{i} ,D} (b_{k} )+\theta _{2} )\) (see the third case of formula (5)). This formula is the line connecting the two points \((\theta_{1}, 1)\) and \((-\theta_{2}, 0)\). Figure 4 illustrates the construction principle: its horizontal axis is \(R_{b_{i} ,T} (b_{k} )-R_{b_{i} ,D} (b_{k} )\), and its vertical axis is the synthesized trustworthiness.
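The piecewise-linear synthesis just described translates directly into code; this mirrors the three cases of formula (5) exactly as stated in the text (the function name is ours):

```python
def synthesized_trustworthiness(r_t, r_d, theta1=0.8, theta2=0.2):
    """Formula (5): piecewise-linear synthesis of the trustworthy aspect
    r_t and untrustworthy aspect r_d; the middle case is the line through
    the points (theta1, 1) and (-theta2, 0)."""
    diff = r_t - r_d
    if diff > theta1:         # clearly trustworthy
        return 1.0
    if diff < -theta2:        # clearly untrustworthy
        return 0.0
    return (diff + theta2) / (theta1 + theta2)
```

Note the built-in asymmetry: a surplus of 0.8 on the trustworthy side is needed to reach full trust, while a deficit of only 0.2 already yields zero trust.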

**Definition 5**

The trustworthy aspect and the untrustworthy aspect of \(b_k\) can be synthesized into one value by \(b_i\). Equation (5) defines the synthetization method.

Here, \(R_{b_{i} ,T} (b_{k} )\) represents \(b_k\)'s trustworthy aspect from \(b_i\)'s viewpoint, \(R_{b_{i} ,D} (b_{k} )\) represents \(b_k\)'s untrustworthy aspect from \(b_i\)'s viewpoint, and \(\theta_{1}\) (0 < \(\theta_{1}\) < 1) and \(\theta_{2}\) (0 < \(\theta_{2}\) < 1) are two thresholds.

### 4.3 Seller’s reputation evaluation algorithm

To decrease purchase risk, honest buyers often evaluate sellers' reputations first. Therefore, buyer \(b_i\) must be endowed with the ability to evaluate sellers. To accurately evaluate each seller's reputation, an honest buyer comprehensively considers its private trust in the seller (gained from its own experience) and the public reputation of the seller (calculated from its optimal advisors' comments). Algorithm 3 illustrates the new seller's reputation calculation algorithm (SRCA) [5]. The main idea of this algorithm is as follows: (1) buyer \(b_i\) first calculates each seller's private trustworthiness according to its own trading experience with this seller (see steps (2)-(5) in Algorithm 3); (2) buyer \(b_i\) calculates each seller's public reputation according to the optimal advisor lists obtained from Algorithm 2 (see step (6) in Algorithm 3); (3) the private trustworthiness and the public reputation are combined to obtain the perceived reputation of each seller (see step (7) in Algorithm 3). The formulas used in this algorithm are defined in the following definitions, which are similar to those given by Zhang and Cohen [22] and Ji et al. [5].

**Definition 6**

Buyer \(b_i\)'s private trust in seller \(s_{j}^{candidate} \) is calculated according to \(b_i\)'s ratings of this seller. Formula (6) defines the calculation method [22].

Here, \(N_{b_{i} ,pos}^{s_{j}^{candidate}} \) represents the number of positive ratings that \(b_i\) gave \(s_{j}^{candidate} \), \(N_{b_{i} ,neg}^{s_{j}^{candidate}} \) represents the number of negative ratings that \(b_i\) gave \(s_{j}^{candidate} \), \(\lambda\) is a discount factor, and \(t\) (\(t = 1,2,\ldots,n\)) indexes the time windows of the ratings.

**Definition 7**

Buyer \(b_i\)'s public reputation estimate of seller \(s_{j}^{candidate} \) is calculated according to the ratings of \(A_{b_{i}}^{s_{j}^{candidate}} \). Formula (7) defines the calculation method.

Here, the ratings are those that each advisor \(a_k\) gave \(s_{j}^{candidate} \), \(\lambda\) is a discount factor, and \(t\) (\(t = 1,2,\ldots,n\)) indexes the time windows of the ratings. The calculation principles of \(P_{a_{k} ,pos}^{s_{j}^{candidate}} \) and \(P_{a_{k} ,neg}^{s_{j}^{candidate}} \) are presented in formula (8), which is adapted from Zhang and Cohen [22] and from the formulas given by Jøsang and Ismail [9] and Yu and Singh [20] based on Dempster-Shafer theory.

Here, \(STD_{b_{i}} (a_{k} )\) represents \(a_k\)'s synthetic trustworthiness estimated by buyer \(b_i\), \(N_{b_{i} ,pos}^{a_{k}} \) represents the number of positive ratings that \(b_i\) gave \(a_k\), and \(N_{b_{i} ,neg}^{a_{k}} \) represents the number of negative ratings that \(b_i\) gave \(a_k\).

**Definition 8**

The trustworthiness estimated by buyer *b* _{ i } for a given seller \(s_{j}^{candidate} \) is the weighted combination of *b* _{ i }’s private trustworthiness and the public reputation given by \(A_{b_{i}}^{s_{j}^{candidate}} \), as defined by formula (9) [22]. The weight *w* is calculated according to formulas (10) and (11), where \(N_{all}^{B_{r}} \) is the number of ratings given by *b* _{ i } for the seller \(s_{j}^{candidate} \), and \(N_{\min } \) is a threshold calculated according to formula (11), similar to that defined in the literature [22]. If \(N_{all}^{B_{r}} \ge N_{\min } \), buyer *b* _{ i } is confident about the private trustworthiness estimated from its own ratings, and the weight of the private trustworthiness is therefore simply assigned as 1. Otherwise, *b* _{ i } also considers the public reputation estimated from the advisors’ ratings. In formula (11), *ε* represents the maximal level of error that can be accepted, and *η* represents the confidence level.
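A minimal sketch of formulas (9)-(11) as described above, assuming the Chernoff-bound form of \(N_{\min }\) used in Zhang and Cohen's framework [22]; the function names and default parameter values are illustrative:

```python
import math

def min_rating_count(eps, eta):
    """Chernoff-bound threshold N_min (assumed form from Zhang and Cohen):
    the minimum number of first-hand ratings needed for the private
    estimate to be within error eps with confidence eta."""
    return -1.0 / (2.0 * eps ** 2) * math.log((1.0 - eta) / 2.0)

def combined_trustworthiness(private, public, n_own, eps=0.25, eta=0.8):
    """Weighted combination of private trustworthiness and public
    reputation: w = 1 if b_i has enough own ratings, else n_own / N_min."""
    n_min = min_rating_count(eps, eta)
    w = 1.0 if n_own >= n_min else n_own / n_min
    return w * private + (1.0 - w) * public
```
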

### 4.4 The whitelist and blacklist updating algorithm

After the transaction, the defending buyer *b* _{ i } can rate the selected seller and then update its own whitelist and blacklist according to its experience. This updating keeps the whitelist and the blacklist a timely record of the buyers that the honest buyer trusts and distrusts. The main idea of this updating process (see Algorithm 4) is as follows: (1) consider whether each member of \(B_{s_{_{j}}^{candidate}}^{H} \) (denoted as *b* _{ k }) should be swapped into *b* _{ i }’s whitelist or blacklist (see step (1) in Algorithm 4); (2) if *b* _{ k }’s synthetic trust is larger than that of the most untrustworthy buyer in \(WL_{b_{i}} \) (denoted as *b* _{ m u }), *b* _{ k } replaces *b* _{ m u } (see steps (2-4) in Algorithm 4); (3) if *b* _{ k }’s synthetic trust is smaller than that of the most trustworthy buyer in \(BL_{b_{i}} \) (denoted as *b* _{ m t }), *b* _{ k } replaces *b* _{ m t } (see steps (5-7) in Algorithm 4).
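The updating process described above can be sketched as follows. This is an illustrative reconstruction of Algorithm 4, not the paper's pseudocode; names such as `update_lists` and `synthetic_trust` are ours:

```python
def update_lists(whitelist, blacklist, candidates, synthetic_trust):
    """Sketch of the whitelist/blacklist updating step (Algorithm 4).

    whitelist, blacklist: non-empty lists of buyer ids (fixed lengths m, n).
    candidates: buyers who rated the candidate seller (B^H_{s_j}).
    synthetic_trust: callable mapping a buyer id to its synthetic trust.
    """
    for b_k in candidates:
        if b_k in whitelist or b_k in blacklist:
            continue
        # Steps (2-4): if b_k is more trustworthy than the least
        # trustworthy whitelist member b_mu, b_k replaces b_mu.
        b_mu = min(whitelist, key=synthetic_trust)
        if synthetic_trust(b_k) > synthetic_trust(b_mu):
            whitelist.remove(b_mu)
            whitelist.append(b_k)
            continue
        # Steps (5-7): if b_k is less trustworthy than the most
        # trustworthy blacklist member b_mt, b_k replaces b_mt.
        b_mt = max(blacklist, key=synthetic_trust)
        if synthetic_trust(b_k) < synthetic_trust(b_mt):
            blacklist.remove(b_mt)
            blacklist.append(b_k)
    return whitelist, blacklist
```
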

## 5 Experimental results

To verify the performance of *WBCEA* given in Section 4, we design a set of experiments. Similar to the experiments designed in previous studies [5, 6], 6 typical kinds of attacks, including *AlwaysUnfair*, *Camouflage*, *Sybil*, *Whitewashing*, *Sybil-Camouflage*, and *Sybil-Whitewashing*, are selected to attack the reputation system. *AlwaysUnfair* attackers always give high reputation ratings to dishonest sellers while rating honest sellers low. *Camouflage* attackers intermittently tell the truth or call white black (i.e., give unfairly high scores to dishonest sellers and unfairly low scores to honest sellers). In the experiments of this paper, each *Camouflage* attacker rates honestly in the first 20 days, and then gives unfair ratings to both the dishonest and the honest duopoly sellers in the following days. Similar to *AlwaysUnfair* attackers, buyers who adopt the *Sybil* attack always call white black; the difference is that the number of dishonest buyers in a *Sybil* attack is much larger than in an *AlwaysUnfair* attack. Buyers who use the *Whitewashing* attack strategy whiten their low reputation by recreating a new account. In the experiments, each *Whitewashing* attacker provides an unfair rating each day and then recreates a new account the next day to whitewash its sham action.

The naive strategy and the oracle strategy are designed as baselines. The naive strategy means that buyers who adopt it believe all raters are good and their ratings are true. The oracle strategy assumes that buyers are omniscient and therefore always know each seller’s real reputation. Moreover, the strategies *iClub*, *Personalized*, *MET*, *PEALGA*, *GBR* and *Multi-faceted* are selected for comparison with our strategy (*WBCEA*). *iClub* [11, 12], *Personalized* [22], and *MET* [6] are typical strategies that consider only the trust factor when evaluating sellers and buyers; they belong to the filtering, discounting, and evolutionary categories respectively. *PEALGA* [5] is another evolutionary strategy that considers only trust, but it constructs a different customized optimal advisor list for each candidate seller. *GBR* [13] and *Multi-faceted* [4] are compared because they consider both trust and distrust information in the evaluation of sellers and buyers. In addition, this paper constructs an algorithm named *WBCEA_S*. The only difference between *WBCEA* and *WBCEA_S* is that the latter evolves only one advisor list to evaluate all candidate sellers. This algorithm is constructed to verify whether a customized optimal advisor list can outperform a single advisor list in evaluating candidate sellers when both trust and distrust are considered and a whitelist and blacklist are maintained.

### 5.1 Experimental settings

Under attacks without *Sybil*, there are 12 dishonest buyers and 28 honest buyers in the market. Under attacks involving *Sybil*, the numbers of dishonest and honest buyers are 28 and 12 respectively. Besides, 100 days of transactions are simulated in total. It should be noted that the initial trustworthy aspect (i.e., \(R_{b_{i} ,T} (b_{k} ))\) and untrustworthy aspect (i.e., \(R_{b_{i} ,D} (b_{k} ))\) of each reviewer is randomly assigned a value ranging from 0 to 1. Each day, every buyer makes one transaction with a partner. The ratings buyers give sellers range from 0 to 1. It should also be noted that, since recommendation algorithm design is not the topic of this paper, the recommendation list of sellers is randomly generated in the experiments. The settings of the parameters in *WBCEA* are listed in Table 3.

Parameters in simulation

Key parameters | Values |
---|---|
Number of dishonest duopoly sellers | 1 |
Number of honest duopoly sellers | 1 |
Number of dishonest common sellers | 99 |
Number of honest common sellers | 99 |
Number of dishonest buyers | 12/28* |
Number of honest buyers | 28/12* |
Simulation days (Days) | 100 |
Dominance ratio (Ratio) | 0.5 |

Setting and meaning of variables or parameters used in *WBCEA*

Parameters | Meanings | Value |
---|---|---|
m | Length of whitelist | 4 |
n | Length of blacklist | 4 |
r | Number of recent reviewers who rate each seller | 20 |
Depth | Depth of the trust network | 6 |
 | Discount factor for trust value | 0.4 |
 | Discount factor for distrust value | 0.3 |
 | Threshold | 0.8 |
 | Threshold | 0.2 |
Ratio | Ratio for selecting duopoly sellers | 0.5 |
*λ* | Discount factor for historical ratings | 0.9 |
*ε* | Maximal level of error that can be accepted | 0.25 |

### 5.2 Evaluative criteria

To compare the experimental results, we choose criteria similar to those given in Jiang et al. [6] to evaluate the performance of each strategy. One criterion is robustness, which is used to evaluate the feasibility (i.e., the anti-attack ability) of each defense strategy at a macroscopic scale. Formula (12) defines the robustness function; according to it, the value of robustness ranges from -1 to 1. The more transactions a defending agent trades with the honest duopoly seller, the higher its correct selection rate, the larger the value of robustness, and therefore the better the defending ability.

**Definition 9**

The robustness of a defense strategy is defined by formula (12), where *Trans*(*s* ^{ H }) is the transaction volume of the honest duopoly seller, *Trans*(*s* ^{ D }) is the transaction volume of the dishonest duopoly seller, *B* ^{ H } is the number of honest buyers, *Days* is the total number of transaction days, and *Ratio* is the selection probability of duopoly sellers.
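Formula (12) itself is not reproduced here. One plausible form, consistent with Definition 9 and the stated range of [-1, 1], is sketched below; this normalization is our assumption, not the paper's exact formula:

```python
def robustness(trans_honest, trans_dishonest, n_honest_buyers, days, ratio):
    """Assumed form of the robustness criterion: the net flow of duopoly
    transactions toward the honest seller, normalized by the expected
    number of duopoly transactions made by honest buyers."""
    expected = n_honest_buyers * days * ratio
    return (trans_honest - trans_dishonest) / expected
```

With the paper's settings (28 honest buyers, 100 days, Ratio = 0.5), a defender that always picks the honest duopoly seller scores 1, and one that always picks the dishonest duopoly seller scores -1.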

The mean absolute error (abbr., MAE) of a seller’s reputation is used to measure the accuracy of trust models in modeling the seller’s reputation. Formula (13) defines the calculation of the MAE. The smaller the MAE, the more accurate the defense strategy’s prediction is, and therefore the better the defense strategy is.

**Definition 10**

The MAE of seller *s* _{ j }’s reputation is defined according to its real reputation and its estimated reputation when buyers adopt a given defense strategy. Formula (13) [6] defines the calculation, where *B* ^{ H } is the number of honest buyers, *Days* is the total number of transaction days, *R* ^{ t }(*s* _{ j }) is the actual reputation of seller *s* _{ j } on day *t* (*t* ∈ [0, *Days*]), and \(\tilde {{R}}_{b_{i}}^{t} (s_{j} )\) is the estimated reputation of seller *s* _{ j } on day *t*, calculated according to the ratings of the advisors of *b* _{ i } ∈ *B* ^{ H }.
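Definition 10 can be sketched as follows (an illustrative implementation; the data layout and function name are our assumptions): the absolute error between the real and estimated reputation is averaged over all honest buyers and all days.

```python
def reputation_mae(actual, estimated):
    """MAE of a seller's reputation over honest buyers and days.

    actual: list where actual[t] is the real reputation of s_j on day t.
    estimated: dict mapping each honest buyer id to a list of that
    buyer's day-by-day reputation estimates for s_j.
    """
    total = 0.0
    count = 0
    for per_day in estimated.values():
        for t, est in enumerate(per_day):
            total += abs(actual[t] - est)
            count += 1
    return total / count
```
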

### 5.3 Results and analysis

#### 5.3.1 Robustness analysis

Robustness of compared strategies

Classifications | Strategies | alwaysUnfair | Camouflage | Whitewashing | Sybil | Sybil&Camouflage | Sybil&Whitewashing |
---|---|---|---|---|---|---|---|
Baselines | Naive | 0.89 ± 0.03 | 0.93 ± 0.02 | 0.88 ± 0.19 | -0.98 ± 0.07 | -0.50 ± 0.07 | -1.00 ± 0.07 |
 | Oracle | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 |
Trust | iClub | | 0.98 ± 0.02 | 0.76 ± 0.15 | 0.23 ± 0.32 | 0.92 ± 0.10 | 0.26 ± 0.30 |
 | Personalized | 0.98 ± 0.02 | | 0.98 ± 0.03 | 0.78 ± 0.30 | 0.94 ± 0.09 | -0.97 ± 0.21 |
 | MET | 0.99 ± 0.03 | | | 0.79 ± 0.26 | 0.92 ± 0.07 | 0.87 ± 0.15 |
 | PEALGA | 0.98 ± 0.02 | 0.98 ± 0.02 | 0.98 ± 0.03 | | | |
Trust & distrust | GBR | 0.96 ± 0.03 | 0.96 ± 0.03 | 0.97 ± 0.03 | 0.84 ± 0.43 | 0.91 ± 0.12 | 0.75 ± 0.08 |
 | Multi-faceted | 0.98 ± 0.03 | 0.93 ± 0.04 | 0.76 ± 0.09 | 0.91 ± 0.10 | 0.80 ± 0.10 | 0.40 ± 0.21 |
 | WBCEA_S | 0.98 ± 0.02 | 0.98 ± 0.02 | 0.98 ± 0.03 | 0.88 ± 0.09 | 0.97 ± 0.06 | 0.74 ± 0.45 |
 | WBCEA | 0.98 ± 0.03 | 0.98 ± 0.03 | 0.98 ± 0.03 | 0.96 ± 0.08 | | 0.97 ± 0.08 |

The naïve strategy assumes that all raters are good and their ratings are true. If the majority of reviewers are attackers, following these reviewers’ advice may make the defending agent misjudge the trustworthiness of sellers and wrongly choose a dishonest seller to trade with. The *oracle* strategy assumes that agents always know the real trustworthiness of each reviewer and can always choose the honest seller according to honest reviewers’ advice; therefore, *oracle* always reaches the highest robustness. The *oracle* and *naïve* values can be regarded as baselines: the nearer a strategy’s robustness is to that of *oracle*, the more robust the strategy is.

Comparing all the strategies, we find that *PEALGA* and *WBCEA* achieve the best or nearly the best results in defending against the various pure and combined attacks. In particular, *PEALGA* and *WBCEA* achieve the best performance when defending against *Sybil* and *Sybil&Whitewashing* attacks. The other compared strategies score poorly in one or more scenarios. For example, although the *GBR* and *Multi-faceted* strategies consider trust and distrust information simultaneously, they cannot defend well against attacks involving *Sybil*. *WBCEA_S* is greatly inferior to *PEALGA* and *WBCEA* when defending against *Sybil* (0.88 ± 0.09) and *Sybil*&*Whitewashing* (0.74 ± 0.45) attacks. The high performance of *PEALGA* (which considers only trust in the pre-evolution of the optimal customized advisor list) and *WBCEA* (which considers both trust and distrust factors in the co-evolution of the whitelist and blacklist) stems from the fact that both emphasize evolving an optimal customized advisor list to evaluate each candidate trading seller, which enables defenders to accurately predict the duopoly sellers’ reputations and to choose the honest duopoly seller as transaction partner. Therefore, we can conclude that generating an optimal customized advisor list is more influential for accurately evaluating sellers’ trustworthiness than simultaneously considering trust and distrust and maintaining a whitelist and blacklist.

#### 5.3.2 Accuracy analysis

As the *oracle* strategy assumes that the agent knows each seller’s real reputation, oracle agents can always accurately predict sellers’ reputations (i.e., the MAE equals zero). Therefore, the *oracle* strategy can be taken as the lowest (best) baseline: the closer a strategy’s MAE is to that of the *oracle* strategy, the more accurate it is. In contrast, as the *naïve* strategy assumes that agents trust all buyers’ ratings to be true, the difference between the real reputation and the reputation predicted by the *naïve* strategy is large (i.e., its MAE is very large); in predicting the reputation of the dishonest duopoly seller, the MAE is as large as 0.76 in some cases. Therefore, the *naïve* strategy can be taken as the highest (worst) baseline: the closer a strategy’s MAE is to that of the *naïve* strategy, the more inaccurate the strategy is. From Tables 5 and 6, we can see that the MAEs of *PEALGA* and *WBCEA* approach those of the *oracle* strategy very well under almost all attacks. *WBCEA_S* also performs well when predicting the duopoly sellers’ reputations under all attacks except *Sybil&Whitewashing*. The *GBR* and *Multi-faceted* strategies can quite accurately predict the honest duopoly seller’s reputation, especially under attacks without *Sybil*; however, their prediction of the dishonest duopoly seller’s reputation is not accurate (i.e., the MAE is as large as 0.52 ± 0.10, 0.46 ± 0.06, and 0.32 ± 0.10).

MAE and variance of dishonest duopoly seller’s reputation

Classifications | Strategies | alwaysUnfair | Camouflage | Whitewashing | Sybil | Sybil&Camouflage | Sybil&Whitewashing |
---|---|---|---|---|---|---|---|
Baselines | Naive | 0.76 ± 0.01 | 0.65 ± 0.02 | 0.76 ± 0.05 | 0.54 ± 0.01 | 0.49 ± 0.02 | 0.55 ± 0.02 |
 | Oracle | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
Trust | iClub | 0.85 ± 0.20 | 0.73 ± 0.11 | 0.80 ± 0.08 | 0.06 ± 0.01 | 0.70 ± 0.11 | 0.06 ± 0.01 |
 | Personalized | 0.45 ± 0.06 | 0.46 ± 0.06 | 0.82 ± 0.04 | 0.23 ± 0.07 | 0.59 ± 0.08 | 0.25 ± 0.04 |
 | MET | 0.02 ± 0.01 | 0.02 ± 0.01 | 0.02 ± 0.01 | 0.10 ± 0.06 | 0.12 ± 0.03 | 0.36 ± 0.15 |
 | PEALGA | | | | 0.02 ± 0.01 | | 0.04 ± 0.02 |
Trust & distrust | GBR | 0.13 ± 0.07 | 0.10 ± 0.05 | 0.06 ± 0.02 | 0.52 ± 0.10 | 0.46 ± 0.06 | 0.19 ± 0.04 |
 | Multi-faceted | 0.06 ± 0.11 | 0.07 ± 0.05 | 0.14 ± 0.05 | 0.04 ± 0.03 | 0.15 ± 0.04 | 0.32 ± 0.10 |
 | WBCEA_S | | | 0.08 ± 0.04 | 0.03 ± 0.00 | 0.05 ± 0.01 | 0.27 ± 0.06 |
 | WBCEA | 0.01 ± 0.01 | 0.02 ± 0.00 | 0.02 ± 0.01 | | 0.06 ± 0.02 | |

MAE and variances of honest duopoly seller’s reputation

Classifications | Strategies | alwaysUnfair | Camouflage | Whitewashing | Sybil | Sybil&Camouflage | Sybil&Whitewashing |
---|---|---|---|---|---|---|---|
Baselines | Naive | 0.19 ± 0.01 | 0.10 ± 0.01 | 0.20 ± 0.08 | 0.98 ± 0.01 | 0.47 ± 0.02 | 0.97 ± 0.01 |
 | Oracle | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
Trust | iClub | | | 0.12 ± 0.07 | 0.35 ± 0.16 | | 0.34 ± 0.15 |
 | Personalized | 0.02 ± 0.00 | | 0.06 ± 0.00 | 0.18 ± 0.13 | 0.05 ± 0.00 | 0.96 ± 0.07 |
 | MET | | | 0.02 ± 0.01 | 0.13 ± 0.13 | 0.10 ± 0.03 | 0.14 ± 0.10 |
 | PEALGA | | | | | 0.04 ± 0.01 | |
Trust & distrust | GBR | 0.07 ± 0.01 | 0.05 ± 0.01 | 0.04 ± 0.01 | 0.33 ± 0.15 | 0.23 ± 0.01 | 0.19 ± 0.03 |
 | Multi-faceted | 0.02 ± 0.00 | 0.04 ± 0.02 | 0.12 ± 0.04 | 0.05 ± 0.03 | 0.12 ± 0.04 | 0.30 ± 0.09 |
 | WBCEA_S | | | 0.07 ± 0.03 | 0.03 ± 0.00 | 0.05 ± 0.01 | 0.28 ± 0.16 |
 | WBCEA | | | | 0.02 ± 0.01 | 0.05 ± 0.01 | 0.02 ± 0.02 |

As the *oracle* strategy can always accurately estimate the honest and dishonest sellers’ reputations (the real reputation of the dishonest seller is 0, and the real reputation of the honest seller is 1), the closer the curve of a given strategy is to the *oracle* curve, the more accurate that strategy’s estimation is. Conversely, as the *naïve* strategy assumes that the defender trusts all other buyers’ ratings, it can be regarded as the worst baseline. The following paragraph analyzes and compares these strategies’ abilities to defend against the various attacks in detail.

In general, *PEALGA* and *WBCEA* outperform all the trust-only strategies and all the trust-and-distrust strategies under almost all attacks, especially under *Sybil*, *Sybil&Whitewashing*, and *Sybil&Camouflage*. Moreover, *PEALGA* and *WBCEA* perform similarly and achieve the best performance under all attacks, as there is very little difference between their reputation MAEs (see the 7 ^{th} row and the 11 ^{th} row in Tables 5 and 6). This can also be seen in Figs. 5 and 6, in which *PEALGA* and *WBCEA* always approach the *oracle* baseline most closely under all attacks. What is more, *WBCEA* outperforms the other strategies, including *PEALGA*, in defending against *Sybil*, as it is the quickest strategy to reach the sellers’ real reputations (see Figs. 5d and 6d). These results arise because the simultaneous consideration of trust and distrust improves a defending strategy’s prediction accuracy slightly, while adopting a customized optimal advisor list for evaluating each candidate seller improves its prediction accuracy greatly.

## 6 Conclusions

In this paper, based on the psychology research of Lewicki et al. [10], we assume that each buyer agent has both a trustworthy aspect and an untrustworthy aspect, and we propose assigning two scores to each buyer agent to denote its trustworthy aspect and untrustworthy aspect respectively. The synthesis of these two aspects serves as the criterion for evaluating a buyer’s synthetic trustworthiness. Besides, each buyer maintains a whitelist and a blacklist, which are evolved by the new algorithm called *WBCEA* for defending against multifarious attacks. The *WBCEA* algorithm is composed of several sub-algorithms: the trust network construction algorithm, the optimal advisor list generation algorithm, the seller reputation calculation algorithm, and the whitelist and blacklist updating algorithm. According to its whitelist and blacklist, a buyer can construct its own trust network (see Algorithm 1). By doing so, the buyer can select trustworthy advisors for evaluating each candidate seller and choose the most trustworthy seller as its trading partner.

A set of experiments is designed and implemented to compare the performance of our strategy with recent typical defending strategies and with the baseline ones. Experimental results show that the *WBCEA* strategy and the *PEALGA* strategy have similar performance in defending against all attacks. Moreover, they outperform the existing trust-only strategies and all the trust-and-distrust strategies in robustness and in the MAE of sellers’ reputations when defending against various attacks. In particular, *WBCEA* slightly outperforms *PEALGA* when defending against the *Sybil* attack.

The strategies compared in this paper are suitable for B2B electronic markets, where sellers’ behaviors do not change frequently. Whether the strategy proposed in this paper is suitable for C2C markets, in which sellers’ behaviors often change, is a problem that needs to be explored. Moreover, it is worthwhile to study the stability of this strategy when the configuration (the number of sellers, the ratio of dishonest buyers) changes. Besides, using real data to verify the accuracy, robustness and stability of this strategy is part of our future work.

## Notes

### Acknowledgments

This paper is supported in part by the Natural Science Foundation of China (Nos. 71403151, 61572035, 61402011, 61433012, 61502281), the Natural Science Foundation of Shandong Province (Nos. ZR2013FM023, ZR2013FQ030, ZR2014FP011, ZR2011FL002), China’s Post-doctoral Science Fund (No. 2014M561948), the Postdoctoral Innovation Project Special Funds of Shandong Province (No. 201403007), the Applied Research Project for Qingdao Postdoctoral Researchers, the Qingdao Science and Technology Development Project (KJZD-13-29-JCH), the Project of Shandong Province Higher Educational Science and Technology Program (J14LN33), the Leading Talent Development Program of Shandong University of Science and Technology, and the Special Project Fund of Taishan Scholars of Shandong Province.

## References

- 1. Abdul-Rahman A, Hailes S (2000) Supporting trust in virtual communities. In: Hawaii international conference on system sciences. IEEE Computer Society, vol 6, p 6007
- 2. McKnight DH, Chervany NL (2001) Trust and distrust definitions: one bite at a time. Lect Notes Comput Sci 107(12):27–54
- 3. McKnight DH, Choudhury V (2006) Distrust and trust in B2C e-commerce: do they differ? In: International conference on electronic commerce: the new e-commerce, Fredericton, New Brunswick, Canada, pp 482–491
- 4. Fang H, Guo G, Zhang J (2015) Multi-faceted trust and distrust prediction for recommender systems. Decis Support Syst 71(C):37–47
- 5. Ji S, Ma H, Zhang S, Leung HF, Chiu D, Zhang CJ, et al. (2016) A pre-evolutionary advisor list generation strategy for robust defensing reputation attacks. Knowl-Based Syst 103(C):1–18
- 6. Jiang S, Zhang J, Ong Y (2013) An evolutionary model for constructing robust trust networks. In: Proceedings of the 12th international conference on autonomous agents and multiagent systems (AAMAS), pp 813–820
- 7. Jøsang A (2012) Robustness of trust and reputation systems: does it matter? In: IFIP advances in information and communication technology, pp 253–262
- 8. Jøsang A, Gray E, Kinateder M (2003) Analysing topologies of transitive trust. In: Proceedings of the workshop of formal aspects of security and trust
- 9. Jøsang A, Ismail R (2002) The beta reputation system. In: Bled conference on electronic commerce, pp 324–337
- 10. Lewicki RJ, Mcallister DJ, Bies RJ (1998) Trust and distrust: new relationships and realities. Acad Manag Rev 23(3):438–458
- 11. Liu S, Zhang J, Miao C, Theng Y, Kot AC (2011) iCLUB: an integrated clustering-based approach to improve the robustness of reputation systems. Inter Conf Auton Agents Multiagent Syst 3:1151–1152
- 12. Liu S, Zhang J, Miao C, Theng Y, Kot AC (2014) An integrated clustering-based approach to filtering unfair multi-nominal testimonies. Comput Intell 30(2):316–341
- 13. Liu X, Wang Y, Zhu S, Lin H (2013) Combating Web spam through trust–distrust propagation with confidence. Pattern Recogn Lett 34(13):1462–1469
- 14. Victor P, Cornelis C, Cock MD, et al. (2011) Trust- and distrust-based recommendations for controversial reviews. IEEE Intell Syst 26(1):48–55
- 15. Victor P, Verbiest N, Cornelis C, Cock MD (2013) Enhancing the trust-based recommendation process with explicit distrust. ACM Trans Web (TWEB) 7(2):42–59
- 16. Singh S, Bawa S (2007) Privacy, trust and policy based authorization framework for services in distributed environment. Int J Comput Sci 2(2):85–92
- 17. Smith NK, Larsen JT, Chartrand TL, Cacioppo JT, Katafiasz HA, Moran KE (2006) Being bad isn’t always good: affective context moderates the attention bias toward negative information. J Pers Soc Psychol 90(2):210–220
- 18. Teacy WTL, Patel J (2006) TRAVOS: trust and reputation in the context of inaccurate information sources. Auton Agent Multi-Agent Syst 12(2):183–198
- 19. Whitby A, Jøsang A, Indulska J (2004) Filtering out unfair ratings in Bayesian reputation systems. Int Joint Conf Auton Agent Syst 37(3):106–117
- 20. Yu B, Singh MP (2002) Detecting deception in reputation management. In: Second international joint conference on autonomous agents and multiagent systems, vol 2, pp 73–80
- 21. Yu B, Singh MP (2002) An evidential model of distributed reputation management. In: Proceedings of the first international joint conference on autonomous agents and multiagent systems. ACM, pp 294–301
- 22. Zhang J, Cohen R (2011) A framework for trust modeling in multiagent electronic marketplaces with buying advisors to consider varying seller behavior and the limiting of seller bids. ACM Trans Intell Syst Technol 4(2):55–73
- 23. Zhang L, Jiang S, Zhang J, Ng WK (2012) Robustness of trust models and combinations for handling unfair ratings. In: IFIP advances in information and communication technology, pp 36–51

## Copyright information

**Open Access**
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons
license and indicate if changes were made.