Encyclopedia of Social Network Analysis and Mining

Living Edition
| Editors: Reda Alhajj, Jon Rokne


  • Abigail Paradise
  • Rami Puzis
  • Asaf Shabtai
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-7163-9_110212-1


Synonyms

Fake profiles; Infiltration; Social network security; Socialbots


Glossary

Advanced persistent threat (APT)

A class of sophisticated cyber-attacks that target organizations


Infiltration

A means of compromising the social network graph by connecting with a large number of users; socialbots can be executed to infiltrate social networks

Influence bots

A bot that tries to influence conversation on a specific topic


Socialbot

An artificial, machine-operated profile in a social network that mimics human users, looks genuine, and behaves in a sophisticated manner


Spambot

A computer program designed to help send spam

Sybil attack

A type of attack in which a malicious user creates multiple fake identities (Sybils) in order to unfairly increase power and influence within a target community


Definition

In recent years, online social networks (OSNs) have become an essential part of our lives. However, OSNs have also been abused by cybercriminals who exploit these platforms for malicious purposes, including spam and malware distribution, harvesting of personal information, infiltration of organizations, and spreading rumors.

Socialbots are artificial, machine-operated profiles controlled by malicious users that use deceptive techniques to act and look like human accounts. This allows socialbots to avoid detection and cause prolonged harm.

It is important to understand the security risks posed by socialbots in order to design effective mechanisms to identify and detect them.


Introduction

Online social networks (OSNs) are a popular, important, and powerful tool. OSNs play a central role in modern social life, as a helpful means of communication, sharing opinions and information, finding and disseminating information (Kwak et al. 2010), expanding connections (Gilbert and Karahalios 2009), and promoting businesses.

OSNs have also become a target of exploitation and abuse, attracting unethical and illegal activities. Spreading spam, rumors, and malicious content (Burghouwt et al. 2011), invasion of user privacy (Jurgens 2013), political astroturfing (Ratkiewicz et al. 2011), and attaining an influential position in order to spread misinformation or propaganda (Ferrara et al. 2014) are examples of today’s misuse of OSNs. Between 8% and 10% of all social media profiles are malicious in nature (Ahmad 2015). This enormous number emphasizes the acute problem we face and demonstrates the need for new solutions (Abokhodair et al. 2015), particularly since OSN providers have repeatedly failed to mitigate such abuse and the threats it poses.

Attackers utilize artificial, machine-operated OSN profiles called socialbots in order to execute their attacks. Unlike a regular bot (Boshmaf et al. 2012; Ferrara et al. 2014; Ji et al. 2016), a socialbot mimics human users by simulating the actions of a real OSN user. Socialbots have the ability to conduct social activities online, such as posting a message or sending a friendship request (Boshmaf et al. 2012).

These days, socialbots have become very sophisticated, and therefore their detection has become more difficult. Socialbots can be executed to infiltrate an OSN (Boshmaf et al. 2013). In addition, socialbots can be used to infiltrate communities and organizations in order to obtain sensitive information and gain a foothold in the organization by utilizing connections with an organization’s employees through an OSN.

Several studies have focused on designing socialbots and attack methods for different purposes: infiltrating an organization by maintaining friendships with profiles (Elyashar et al. 2013), targeting specific users in an organization (Elyashar et al. 2014), gathering personal information (Bokobza et al. 2015), or simply gaining influence (Aiello et al. 2012; Messias et al. 2013; Ferrara et al. 2014).

Given the evidence of increased use of OSNs by malicious users, detecting socialbots has become one of the greatest challenges. Protecting OSNs from socialbots is important to both users and OSN providers. Recently, the academic community has become interested in the detection and identification of socialbots and in the development of advanced automatic solutions.

The solutions for detecting socialbots controlled by malicious users are primarily based on feature extraction and machine learning to distinguish between socialbots and real users, crowdsourcing-based detection, honeypot and monitoring-based detection, and graph-based detection.

Key Points

In this work we focus on the creation, design, and development of socialbots in social networks. We present the socialbots’ goals and the strategies and actions they perform to achieve these goals. In addition, we discuss detection methods used to identify and detect socialbots.

Historical Background

A bot is a piece of software that runs automated tasks. Bots are designed to maintain communication structures and to distribute commands and data through a command and control (C&C) channel (Puri 2003).

The existence of bots has been known for some time; for example, bot algorithms designed to hold a conversation with a human were reported over 50 years ago (Turing 1950).

Socialbots are different from regular bots, since socialbots are designed to be more sophisticated and stealthy (Boshmaf et al. 2011). In recent years socialbots have become increasingly sophisticated and difficult to detect.

The first social botnet, “Koobface,” was revealed in 2008; this botnet targeted OSNs using clever social engineering attacks that exploited the link-opening behavior of social media users (Wuest 2010).

In 2009, another social botnet, called the Naz bot, was discovered on Twitter (Nazario 2009), and more recently there have been several additional socialbot incidents. In 2013, the “Pony” botnet was discovered on Facebook, Twitter, and Yahoo; this botnet stole two million passwords (Finkle 2014). In the following sections, we mention additional socialbot incidents according to their goals.

Goals of Socialbots

Socialbots can be used to obtain an influential position, mislead users, harvest useful and sensitive information from infiltrated users, infiltrate communities and organizations, and distribute malicious content, rumors, spam, and misinformation (Ferrara et al. 2014; Ferrara 2015).

Gaining Influence

Socialbots can be used to gain influence in the OSN and subsequently achieve influence outside the network (Aiello et al. 2012).

Dickerson et al. (2014) named this type of socialbot “Influence Bots” – bots that try to influence conversations on a specific topic in the OSN. Additionally, socialbots are used in political campaigns for propaganda and recruitment, employing different manipulation strategies depending on the targets of their campaigns. Socialbots can affect public opinion by distributing misinformation through political astroturfing (Berger and Morgan 2015).

As a case in point, during the 2010 US midterm elections, socialbots injected thousands of tweets directing users to websites with fake news reports supporting specific candidates (Ratkiewicz et al. 2011). Another example is an attack with 25,860 socialbots that spread 440,793 tweets in order to disrupt conversations about the Russian election (Thomas et al. 2012). Moreover, in the 2009 Massachusetts election, nine socialbots attempted to cause a specific URL to rise in influence via Twitter. These socialbots produced 929 tweets in 138 minutes, all of which included a link to the website of a candidate. The tweets were used to expose the politician to a large audience (Mustafaraj and Metaxas 2010).

In addition, socialbots can be used to destabilize markets; in 2013, for example, a group of Syrian hackers claimed responsibility for a false tweet about explosions at the White House that had injured President Obama, which caused the Dow Jones stock exchange to fall by 1% and erased $200 billion from the entire market (Ferrara et al. 2014).

Malicious Content and Spam Distribution

Socialbots can distribute malicious content through OSNs by sending spam, spreading phishing messages, spreading malware, propagating malicious URLs, and launching distributed denial of service (DDoS) attacks (Hwang et al. 2012; Ji et al. 2016).

Recent research (Osterman Research Consultants 2016) confirmed that one out of five businesses has been infected by malware through OSNs, and an even larger proportion simply was not aware of how the malware entered.

Incidents of malware distribution are frequently reported; for example, a Trojan attack infected an estimated 110,000 Facebook users’ machines over a two-day period (Trend Micro 2015), and the W32.Koobface worm spread through OSNs (Wuest 2010).

Information Gathering

User profiles provide large amounts of private information, including photos, locations, postings, opinions, comments, beliefs, political views, attitudes, and connections. Moreover, further examination of a profile makes it easy to determine family relationships, circles of friends, main interests, and hobbies (Sulick 2016).

Attackers can make use of socialbots in order to harvest this useful and sensitive information from infiltrated users in many ways. Socialbots can extract personally identifiable information such as email addresses (Boshmaf et al. 2012). Polakis et al. (2010) demonstrated how just the names from the profiles within OSNs can be used to harvest email addresses as a first step for personalized phishing campaigns.

Karlinsky (2014) mentioned that attackers use the profile information to obtain answers to security questions used to verify the user’s identity when attempting to log in to such services. Additionally, Karlinsky (2014) described that today’s fraudsters can easily find suppliers that offer personal identifiable information harvesting and sell complete user profiles.


Infiltration

Socialbots can be used to infiltrate communities and organizations in the OSN to pursue a variety of goals, including harvesting information about an organization and performing industrial espionage (Sulick 2016), harvesting private employee information before launching an advanced attack (Molok et al. 2011; Elyashar et al. 2013; Paradise et al. 2014), and selecting employees who can be exploited as an entry point into the organization using social engineering methods (e.g., an email message with a malicious URL or payload).

Reconnaissance is the first phase and an essential component of a successful advanced persistent threat (APT). This phase involves collecting information, an important preparatory step required before the subsequent, more aggressive steps of an APT attack (Kim et al. 2014; Ask et al. 2013). In this phase, attackers identify and study the targeted organization and collect information about the technical environment and key personnel in the organization using open-source intelligence (OSINT) tools and social engineering techniques (Wuest 2010). OSINT is a form of intelligence collection from publicly available sources, and nowadays it typically refers to aggregating information about a subject via free sources on the Internet (Chen et al. 2014).

Information extracted from OSNs may include positions and roles within the organization, contact information, etc. Attackers can use the collected information to reconstruct the organization’s structure and identify leaders, locations, and specialized branch offices (Fire and Puzis 2012). In addition, the attacker can select organization members who can be exploited to penetrate the organization and serve as potential entry points (Boshmaf et al. 2011; Elyashar et al. 2013). Once an attacker is a friend of an employee, he or she may trick the employee into opening an infected email with a malicious attachment or URL, providing access to important assets in the organization.

Attackers can also perform an attack through news, status messages, or job postings that lead the user to a subverted Internet resource (Section 9 lab 2014).

Recently, there have been an increasing number of incidents reported in the media regarding cyber-attacks using OSNs. In 2015, Russian intelligence used socialbots to penetrate the Pentagon: a Russian intelligence officer identified targets in the OSN and fabricated a profile that would be appealing (based on common interests) to these targets. He established a connection with the targets directly and/or developed a relationship with a friend or follower of a target who shared the same interests. Once the relationship with a target matured, the officer sent the target a phishing message with a link or attachment, thereby gaining access to the target’s computer holdings (Sulick 2016).

In 2011–2014 an attack originating in Iran took place, primarily targeting senior US military personnel. This attack used artificial profiles on social networking sites to build relationships and trust that were later exploited to gain access to sensitive information and deliver malware (Ahmad 2015). Another recent example is a case involving friend requests that were sent through Viadeo (a professional social media network based in France) to the French offices of Trend Micro. The requests targeted several specific employees, and the profile that sent the requests pretended to be an IT manager from the Trend Micro Australia office who had been with the company for 18 years. Checking the company directory confirmed that there was no employee with that name (Pernet 2015).

Methods Used by Socialbots

Socialbots adopt different methods to achieve their goals. In this section we present the methods, strategies, and actions performed by socialbots according to their goals.

Methods for Gaining Influence

Several studies have presented strategies for connecting profiles using socialbots and gaining influence on the Twitter OSN (Aiello et al. 2012; Messias et al. 2013). Researchers have reached several interesting conclusions. Freitas et al. (2014) found that higher Twitter activity is the most important factor for successful infiltration. Aiello et al. (2012) and Messias et al. (2013) demonstrated that socialbots can become influential, like celebrities, in Twitter.

Additional research on the Twitter OSN took the form of a competition associated with “The Web Ecology Project” (2011), whose goal was to explore different ways in which a socialbot could influence a target network of 500 Twitter users. The results of this competition showed that socialbots were able to influence user behavior. The socialbots’ strategy for persuading targets to interact with them was based on replying to targets’ tweets, mentioning targets in their tweets, retweeting tweets shared by the targets, and following the targets in Twitter.

Mitter and Strohmaier (2013) analyzed data from “The Web Ecology Project” competition to explore the manner in which socialbot attacks can influence the links created within OSNs between targeted real OSN users. They found that socialbots may have the ability to shape and influence the social graph in OSNs.

The Robin Sage Experiment (Ryan and Mauch 2010) is another study that emphasized the influence that can be achieved using socialbots. The researchers created a socialbot aimed at influencing users, where influence was reflected in the ability to gain the trust of other users. The experiment showed that socialbots can attract, interact with, and influence victims. During the experiment, the profile interacted with and elicited information from senior-level US government and industry personnel in sensitive roles. Robin Sage was offered free conference tickets, was asked to speak at a security conference, and received multiple job offers and gifts.

The following observation can be made based on the studies presented: to gain influence, Twitter socialbots primarily apply simple strategies, such as only following users who followed the socialbots and posting tweets about popular, focused topics (Messias et al. 2013). The socialbots were simple, with predictable behavior and without sophisticated strategies, yet they were able to successfully infiltrate the OSN and become popular and influential.

The Sybil attack is also a central attack method used to gain influence in OSNs. This attack refers to the situation in which an attacker creates multiple fake identities (Sybils) in order to unfairly increase power and influence within a target community; the attacker controls the set of the identities and joins a targeted system multiple times with these Sybil identities (Douceur 2002). The attacker can mount many follow-up attacks in order to disrupt the targeted system using the Sybil identities.
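The sparse connectivity between a Sybil region and the honest region is precisely what graph-based defenses (discussed later in this entry) exploit. As a minimal sketch, on a toy graph of our own construction (not taken from any cited study), the following computes the cut size and conductance between an honest community and a Sybil cluster joined by a single attack edge:

```python
# Toy undirected graph (illustrative): a Sybil region attaches to the
# honest region through a single "attack edge", producing the sparse cut
# that graph-based Sybil defenses exploit.

def cut_size(edges, region_a, region_b):
    """Count edges crossing between two disjoint node sets."""
    return sum(1 for u, v in edges
               if (u in region_a and v in region_b) or (u in region_b and v in region_a))

def conductance(edges, region):
    """Crossing edges divided by the total degree inside `region`."""
    degree = sum((u in region) + (v in region) for u, v in edges)
    crossing = sum(1 for u, v in edges if (u in region) != (v in region))
    return crossing / degree if degree else 0.0

honest = {"a", "b", "c", "d"}          # well-connected honest community
sybils = {"s1", "s2", "s3"}            # densely connected Sybil identities
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"),
         ("s1", "s2"), ("s1", "s3"), ("s2", "s3"),
         ("d", "s1")]                  # the single attack edge

print(cut_size(edges, honest, sybils))        # 1
print(round(conductance(edges, sybils), 3))   # 0.143
```

The low conductance of the Sybil set is the structural signature that detection schemes such as SybilGuard assume.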

Methods for Malicious Content and Spam Distribution

Mitter et al. (2014) classified the different attack methods adopted by socialbots according to the OSN functionality they abuse. The attacks include: (1) Abusive usage of topics – changing the initial meaning of a topic to a specific new topic, (2) Unsolicited communication – sending messages and communicating in an unsolicited way, (3) Clickjacking attacks – tricking users into clicking on links embedded in unobtrusive context, (4) Affiliation attacks – trying to make a user buy something on a specific website, and (5) Spoofing – impersonating a specific user to perform an attack.

Ji et al. (2014) identified the following main phases in the socialbot attack lifecycle. The first phase is infection, in which socialbots use infection mechanisms, such as the use of malicious URLs in an email, unwanted malware downloading, and installation of cracked software. After the infection phase, socialbots perform predefined host behaviors, such as modifying the bootstrap list of a system and checking Internet cookies. After that, socialbots work to build a C&C (Command and Control) connection in order to receive commands from the attacker (botmaster). Finally, in the last phase the bots execute the commands received from the botmaster.

Methods for Information Gathering

Several researchers have designed socialbots that attempt to connect with OSN users in order to obtain their personal information (Sophos Press Release 2007; Boshmaf et al. 2011; Magdon-Ismail and Orecchio 2012); in each of these studies, the researchers presented socialbots that infiltrate random user profiles with the mission of achieving as many connections and as much information as possible. The socialbots were able to gather sensitive and personal details such as email addresses, dates of birth, phone numbers, and photos. In the Sophos experiment (Sophos Press Release 2007), a fake profile on Facebook sent random friend requests to 200 users and obtained a 41% acceptance rate. Boshmaf et al. (2011) also created socialbots on Facebook that infiltrated random user profiles; they concluded that OSNs are vulnerable to large-scale infiltration and that most OSN users are not careful enough when accepting friend requests from strangers, especially when they have mutual connections. Additionally, Magdon-Ismail and Orecchio (2012) developed a model for the infiltration of users based on two assumptions: users would like to have as many connections with others as possible, and users are more likely to connect to trusted nodes. Their results showed that random friend requests are much less successful than even simple greedy strategies that select a profile from the user's neighborhood (a second-level connection).

A number of studies have demonstrated attacks using existing features in the OSN, including the mutual friends feature (Jin et al. 2013) and the people you may know feature (Krombholz et al. 2012).

Jin et al. (2013) defined three types of attacks that use the mutual friends feature when the user’s privacy setting does not authorize the attacker to see the user’s friend list: (1) Friend exposure attack – an attacker tries to identify many of a target’s friends, (2) Distant neighbor exposure attack – an attacker’s goal is to identify many of the target’s distant neighbors, (3) Hybrid attack – an attacker’s goal is to identify both the target’s friends and distant neighbors. Their results showed that attackers are able to identify more than 60% of a targeted user’s friends and subsequently can harvest their information. Krombholz et al. (2012) simulated a data harvesting attack based solely on the use of the people you may know Facebook feature.

Bilge et al. (2009) suggested a two-stage profile cloning attack. The first stage was based on identifying a victim and creating a new, identical profile. In the second stage, a cross-site profile cloning attack was launched, which included the automatic creation of profiles in networks where the victim was not registered; these profiles connected with the victim’s friends who had profiles on both networks. The authors demonstrated the feasibility and effectiveness of this type of socialbot attack.

Methods for Infiltration

Recent research has also focused on methods to infiltrate an organization using socialbots to connect to employees through an OSN.

Elyashar et al. (2013) showed that socialbots can be used to infiltrate an organization by maintaining friendships with profiles in the OSNs. Their method aimed at establishing a foothold in an organization by gaining friends among the organization’s employees: first sending friend requests to the most connected members of the organization and then reaching out to members who have the highest number of friends in common with the socialbot. The researchers focused on two organizations and tested their method on Facebook. They were able to disclose up to 13.55% more employees and up to 18.29% more informal links compared to crawling with a public profile that has no friends. These results demonstrate how easily attackers can infiltrate users’ OSN profiles and obtain access to valuable information.

Paradise et al. (2014) expanded the socialbot strategy presented by Elyashar et al. (2013) and proposed a method for acquiring friends in an OSN who are employees of a certain organization by sending friend requests to the employees with the highest probability of accepting them. The socialbot can estimate the probability that an OSN user will accept its friend request given the total number of friends the user has and the number of mutual friends the user shares with the socialbot. The authors noted that the probability that a target will accept a friend request from a socialbot can be as high as 80% when they share more than 11 mutual friends (Boshmaf et al. 2011).
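The acceptance-probability idea can be illustrated with a simple logistic curve over the number of mutual friends. The intercept and slope below are assumed values chosen for demonstration; they are not the estimator fitted by Paradise et al. (2014):

```python
import math

# Illustrative only: a logistic model of friend request acceptance as a
# function of mutual friends. Intercept and slope are assumptions for
# demonstration, NOT the fitted estimator from the cited study.
def acceptance_probability(mutual_friends, intercept=-1.5, slope=0.25):
    return 1.0 / (1.0 + math.exp(-(intercept + slope * mutual_friends)))

# Probability rises monotonically with the number of mutual friends.
for m in (0, 5, 11):
    print(m, round(acceptance_probability(m), 2))
```

With these assumed parameters, the curve passes roughly through the ~80% acceptance reported for targets sharing more than 11 mutual friends with the socialbot.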

Other research conducted by Elyashar et al. (2014) showed that socialbots can be used to infiltrate specific users in an organization. This method is based on first sending friend requests to the friends of a specific user and then sending a friend request directly to that user. Their results on two organizations within Facebook showed that they were able to infiltrate 50% and 70% of the targeted users.

Bokobza et al. (2015) investigated wiring strategies an attacker may employ in order to connect with employees’ profiles and harvest leaked information using socialbots. The evaluation was performed using real information (diffusion data) from Twitter and Flickr. Their results emphasize the need to raise employees’ awareness of the threats of accepting friend requests from strangers and of exposing information on OSNs; they also demonstrate that the most effective socialbot wiring strategy for harvesting information was PageRank.
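Since PageRank scores drive the most effective wiring strategy, defenders can compute the same score to anticipate which well-connected profiles an attacker would prioritize. A minimal pure-Python power iteration is sketched below; the toy follower graph and damping factor are illustrative:

```python
# Minimal power-iteration PageRank over a dict {node: [outgoing links]}.
# Toy graph and parameters are illustrative, not from the cited study.

def pagerank(links, damping=0.85, iterations=50):
    nodes = set(links) | {v for outs in links.values() for v in outs}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            outs = links.get(n, [])
            if not outs:                        # dangling node: spread uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

# Edge u -> v means profile u follows (links to) profile v.
graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["bob"], "dave": ["bob", "carol"]}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # "bob" attracts the most link mass
```

Profiles ranking highest under this measure would be the first candidates for the monitoring defenses described later in this entry.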

Detection of Socialbots

In this section, we discuss various socialbot detection methods. Studies have suggested solutions based on the use of machine-learning techniques, monitoring and honeypots, crowdsourcing, graph-based detection, and other techniques.

Graph-Based Detection

Several studies have presented techniques to detect Sybil attacks using graph-based detection (Yu et al. 2006, 2008; Danezis and Mittal 2009; Cao et al. 2012; Wei et al. 2012; Xie et al. 2012; Xue et al. 2013; Pham et al. 2015). Graph-based detection examines the structure of the social network graph. A number of these studies base Sybil detection on the observation that a short random walk starting in the non-Sybil region is likely to remain there (Yu et al. 2006, 2008; Danezis and Mittal 2009; Cao et al. 2012; Wei et al. 2012). SybilGuard was among the first Sybil detection approaches (Yu et al. 2006), and SybilLimit improved on SybilGuard by using multiple walks. SybilInfer (Danezis and Mittal 2009), in contrast, makes no assumption about the number of Sybil identities accepted per attack edge. SybilDefender (Wei et al. 2012) also utilizes a community detection approach.

Sybil detection relies on social graph structures for detection, as well as the assumption that socialbots cannot establish many connections with benign users, and that there is therefore a sparse cut between the Sybil and non-Sybil regions. Sybil detection also assumes that the honest region is fast mixing and that socialbots connect to only a few tightly knit communities; this assumption was found not to hold in real online social networks, where Sybil profiles do not form tight-knit communities. Instead, they slowly gain access and trust within a close-knit social network and integrate into the honest region as legitimate users (Mohaisen et al. 2010; Yang et al. 2011).
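The short-random-walk intuition behind this family of defenses can be sketched as a few steps of trust propagation from known-honest seed profiles: after only a few steps, little trust crosses the sparse cut into the Sybil region. The graph, seed choice, and step count below are illustrative assumptions, not the parameters of any one cited system:

```python
# Sketch of trust propagation in the spirit of random-walk Sybil defenses:
# early-terminated power iteration from trusted seeds. Graph, seeds, and
# step count are illustrative assumptions.

def propagate_trust(adj, seeds, steps=3):
    trust = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
    for _ in range(steps):
        nxt = {n: 0.0 for n in adj}
        for n, t in trust.items():
            for m in adj[n]:
                nxt[m] += t / len(adj[n])
        trust = nxt
    # Degree-normalize so high-degree nodes are not unfairly favored.
    return {n: trust[n] / len(adj[n]) for n in adj}

adj = {
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "d"],
    "d": ["b", "c", "s1"],                     # d holds the only attack edge
    "s1": ["d", "s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"],
}
scores = propagate_trust(adj, seeds=["a"])
print(scores["b"] > scores["s2"])  # True: honest node outranks a Sybil
```

Because the walk is cut short, trust that would eventually diffuse into the Sybil cluster in a long walk never arrives, which is exactly the property the fast-mixing assumption is meant to guarantee for the honest region.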

Xie et al. (2012) presented a system that recognizes legitimate users based on their connections and interactions. This study relied on the assumption that legitimate users refuse to interact with unknown profiles, an assumption that was proven to be inaccurate when dealing with advanced attackers (Boshmaf et al. 2011; Elyashar et al. 2013).

Xue et al. (2013) made two observations: Sybils receive few incoming requests from real users, and Sybils are more likely than real users to have their requests rejected. They proposed new techniques to classify Sybils based on global vote aggregation and local community expansion; a profile is considered a Sybil if its global acceptance rate is below a certain threshold. They deployed the resulting VoteTrust system at Renren and showed that it can accurately detect real, large-scale Sybil collusion.
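The global acceptance-rate signal reduces to a simple threshold test. The sketch below keeps only that thresholding step and omits VoteTrust's graph-based vote aggregation; the threshold value is an assumption:

```python
# Simplified sketch of the acceptance-rate signal: a profile whose outgoing
# friend requests are mostly rejected is flagged. The deployed VoteTrust
# system aggregates weighted votes over the graph; this keeps only the final
# thresholding step, and the threshold value is an assumption.

def acceptance_rate(accepted, rejected):
    total = accepted + rejected
    return accepted / total if total else 0.0

def is_suspicious(accepted, rejected, threshold=0.3):
    return acceptance_rate(accepted, rejected) < threshold

print(is_suspicious(4, 36))   # True: 10% acceptance
print(is_suspicious(25, 5))   # False: ~83% acceptance
```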

A recent study by Pham et al. (2015) presented a solution to detect infiltration of a specific user in the organizational OSN. They built a target function (based on distances in the OSN), and when any user sends a friend request to users belonging to the organization, the target function is calculated; scores below a certain threshold indicate that the profile is suspicious. This solution was tested utilizing the attack strategy that was presented in Elyashar et al. (2014).

Honeypots and Monitoring-Based Detection

Several previous studies have used honeypots to detect spambots in OSNs (Webb et al. 2008; Lee et al. 2010; Stringhini et al. 2010; Lee et al. 2011). This research has focused on identification of unique behaviors of spammers using honeypots in order to distinguish between social spammers and legitimate users.

Webb et al. (2008) introduced social honeypots to inspect spam. Their results show that spam profiles exhibit distinct temporal patterns and that 57.2% of spam profiles have “About me” content that is not original, i.e., is copied from another profile.

Lee et al. (2010) also attempted to expose social spammers in OSNs. They found that their honeypots were able to identify social spammers with low false positive rates in an effective way. Lee et al. (2011) found that the social honeypots identified polluters much earlier than traditional Twitter spam detection methods.

The possibility of using a monitoring approach has been explored by Paradise et al. (2014, 2015), who presented a method for detecting socialbots during the reconnaissance phase of a sophisticated attack in which attackers attempt to infiltrate an organization; the defense intelligently selects organization member profiles and monitors their activity. The results showed that this approach can limit the strength of sophisticated friend request strategies by reducing their effectiveness to a level below that of random spraying.

A number of works have used monitoring of user activity to detect socialbots (Burghouwt et al. 2011; Wang et al. 2013). In these studies, the researchers analyzed the aggregate behavioral patterns of OSN profiles to distinguish between malicious and legitimate users. Beutel et al. (2013) presented “CopyCatch” to detect lockstep Page Like patterns on Facebook, showing that malicious profiles tend to post fake Likes to several fraudulent pages at the same time.

Wang et al. (2013) developed a detection approach that uses user clickstreams to identify fake profiles. A clickstream is the sequence of HTTP requests made by a user to a website. Experiments using ground truth data show that their system generates 1% false positives and 4% false negatives.
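The clickstream idea can be illustrated with simple timing features; the features and threshold below are our assumptions for demonstration, not the exact feature set of Wang et al. (2013):

```python
from statistics import mean, pstdev

# Illustrative clickstream features (assumed, not the cited study's exact
# feature set): scripted bots tend to issue requests at machine-like regular
# intervals, so the inter-request gap has near-zero spread.

def clickstream_features(events):
    """events: ordered list of (timestamp_seconds, request_type) tuples."""
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    return {
        "mean_gap": mean(gaps),
        "gap_stdev": pstdev(gaps),          # near zero for scripted activity
        "distinct_types": len({r for _, r in events}),
    }

def looks_automated(events, min_jitter=0.5):
    return clickstream_features(events)["gap_stdev"] < min_jitter

bot = [(0, "post"), (10, "post"), (20, "post"), (30, "post")]
human = [(0, "browse"), (7, "like"), (31, "post"), (40, "browse")]
print(looks_automated(bot), looks_automated(human))  # True False
```

The deployed system clusters full clickstream sequences rather than thresholding a single statistic, but the separation it exploits is of this kind.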

Cao et al. (2014) presented “SynchroTrap,” a system used to uncover large groups of malicious profiles. They observed that malicious profiles usually perform loosely synchronized actions; the system therefore groups profiles with similar action sequences into clusters and designates large profile clusters as suspicious.
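The loose-synchronization signal can be sketched by bucketing each profile's actions into (target, time-window) pairs and comparing profiles by Jaccard similarity; the one-hour window and the toy action logs are illustrative assumptions:

```python
# Sketch of the core SynchroTrap signal: profiles acting on the same targets
# in the same time windows are loosely synchronized. Window size and toy
# action logs are illustrative assumptions.

def action_set(actions, window=3600):
    """actions: list of (timestamp, target); bucket timestamps into windows."""
    return {(target, int(ts // window)) for ts, target in actions}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

bot1 = action_set([(100, "pageA"), (200, "pageB"), (7300, "pageC")])
bot2 = action_set([(150, "pageA"), (250, "pageB"), (7350, "pageC")])
user = action_set([(500, "pageX"), (90000, "pageA")])

print(round(jaccard(bot1, bot2), 2))  # 1.0: same pages in the same hours
print(round(jaccard(bot1, user), 2))  # 0.0
```

Pairs of profiles whose similarity exceeds a chosen threshold would then be linked, and large connected clusters designated suspicious, as in the cited system.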

Other research, conducted by Burghouwt et al. (2011), suggested using activity monitoring to detect socialbot communication. They presented a method for detecting social media-based C&C traffic by monitoring user activity and measuring the causality between user activity and network traffic. The presence or absence of certain keystrokes and mouse clicks is used to determine whether network traffic is legitimate or associated with a socialbot.

Egele et al. (2015) designed “COMPA” to detect compromised OSN profiles; it builds a behavioral profile for each OSN profile based on the past messages the profile has sent. This research showed that COMPA can reliably detect compromised OSN profiles.
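A behavioral profile of past messages can be sketched with a single feature, the hour-of-day posting distribution; the real system also models features such as message source, language, and links:

```python
from collections import Counter

# Minimal sketch of a behavioral-profile check in the spirit of COMPA,
# using only one assumed feature: the hour-of-day posting distribution.

def build_profile(past_hours):
    counts = Counter(past_hours)
    return {h: c / len(past_hours) for h, c in counts.items()}

def anomaly_score(profile, hour):
    """1.0 means the profile has never posted at this hour before."""
    return 1.0 - profile.get(hour, 0.0)

history = [9, 10, 9, 11, 10, 9, 12, 10]     # a habitual daytime poster
profile = build_profile(history)
print(anomaly_score(profile, 10))  # familiar hour: low score
print(anomaly_score(profile, 3))   # 1.0: never posted at 3 a.m.
```

A message scoring high across several such features would suggest the account is no longer under its owner's control.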

Machine Learning-Based Detection

Detection using machine learning methods is aimed at distinguishing between real users and fake profiles. Several previous studies have used machine learning to detect spambots in online social networks (Benevenuto et al. 2010; Gee and Teh 2010; Wang 2010; Jin et al. 2011; Mccord and Chuah 2011; Song et al. 2011; Wang et al. 2011; Zhang et al. 2012) in which the identification includes the analysis of collected data, feature extraction, and machine-learning methods for classification.

Wang (2010) used machine learning to identify spambots in Twitter, showing that graph-based features and content-based features are efficient and accurate in identifying spambots. Zhang et al. (2012) proposed a framework to detect spammers in OSNs. In addition to feature extraction, their framework included a URL-driven estimation method to measure the similarity between two profiles; they also integrated a graph-based approach in order to extract dense subgraphs as candidate campaigns. Results on a Twitter dataset showed that they were able to extract the actual campaigns with high precision and recall. Wang et al. (2011) presented a framework for spam detection that can be used across all OSNs. Song et al. (2011) focused on detecting spam messages in Twitter, using the distance and connectivity between sender and receiver to determine whether a message was spam; their results indicate that most spam comes from profiles that are not well connected with the receiver (i.e., have fewer relations).

The limitation of these studies is their underlying assumption that legitimate users have many legitimate friends, while spammers have a small number of friends, a lower friend request acceptance rate, and almost never reply to comments. As socialbots become more sophisticated and human-like, detection methods based on these assumptions are simply not effective enough.

Machine learning has also been used to detect more advanced socialbots (Chu et al. 2010; Yang et al. 2011). Yang et al. (2011) showed that existing Sybil defenses, such as the Facebook Immune System (Stein et al. 2011), are unlikely to succeed in today's OSNs, and that new techniques are therefore needed. Using features such as the frequency of friend requests and the fraction of accepted requests, the authors were able to train a classifier with a 99% true positive rate (TPR).
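The two features named above are cheap to compute from a request log. The sketch below pairs them with a simple decision stump; the log layout and thresholds are illustrative assumptions, not values from the cited work.

```python
def request_features(log):
    """Behavioral features from a friend-request log:
    request rate (requests/day) and acceptance fraction."""
    days = max(log["days_observed"], 1)
    sent = log["requests_sent"]
    rate = sent / days
    accepted = log["requests_accepted"] / max(sent, 1)
    return rate, accepted

def flag_suspicious(log, max_rate=20.0, min_accept=0.2):
    """Decision stump: sending many requests that few users accept is
    bot-like behavior. Thresholds are hypothetical."""
    rate, accepted = request_features(log)
    return rate > max_rate and accepted < min_accept
```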

Wagner et al. (2012) developed predictive models based on three different feature groups (network, behavioral, and linguistic) in order to identify users who are more susceptible to social infiltration on Twitter. They found that susceptible users (potential victims) tend to use Twitter for conversational purposes and are more open and social, communicating with many different users.

Boshmaf et al. (2015) designed and evaluated Íntegro, a defense system that leverages victim classification to rank most real profiles higher than fakes. Íntegro first identifies potential victims from user-level activities using supervised machine learning and then ranks profiles by the landing probability of a short random walk that starts from a known real profile. Its limitation is that it is intended to complement existing detection systems: it is designed to detect automated fake profiles that befriend many victims for subsequent attacks, and it can therefore only detect socialbots that exhibit this behavior.
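The ranking intuition can be shown with a few lines of code: probability mass starting at a trusted seed and spread over a short uniform random walk rarely reaches profiles attached to the real community by only a few "victim" edges. This is a toy sketch of the random-walk idea, not the Íntegro system (which also weights edges by victim predictions).

```python
def landing_probabilities(graph, start, steps=3):
    """Spread probability mass from a trusted seed via `steps` steps of
    a uniform random walk over an undirected friendship graph."""
    prob = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, p in prob.items():
            nbrs = graph.get(node, [])
            if not nbrs:                       # keep mass at dead ends
                nxt[node] = nxt.get(node, 0.0) + p
                continue
            share = p / len(nbrs)
            for nb in nbrs:
                nxt[nb] = nxt.get(nb, 0.0) + share
        prob = nxt
    return prob

def rank_profiles(graph, start, steps=3):
    """Rank profiles by landing probability, most trusted first."""
    prob = landing_probabilities(graph, start, steps)
    return sorted(graph, key=lambda n: prob.get(n, 0.0), reverse=True)
```

In a small graph where a fake profile hangs off the real community by a single edge, the fake ends up ranked last.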

Dickerson et al. (2014) proposed "SentiBot," a sentiment-aware architecture that uses tweet sentiment to differentiate between human and nonhuman users on Twitter. They concluded that sentiment-aware features improve accuracy where fielded algorithms currently fail. Davis et al. (2016) also used sentiment features to classify socialbots on Twitter; they presented a publicly available service called "BotOrNot" which computes a bot-likelihood score. The system generates more than 1,000 features and uses machine learning techniques to learn the signatures of human-like and bot-like behavior.
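A minimal sentiment feature can be computed with a word lexicon; aggregating per-tweet scores into a mean and variance gives profile-level features (simple bots that post templated content tend to show little sentiment variability). The lexicon, scoring rule, and variability assumption are illustrative; the cited systems use far richer sentiment models.

```python
POSITIVE = {"great", "love", "happy", "win"}   # toy lexicon, illustrative only
NEGATIVE = {"bad", "hate", "sad", "lose"}

def sentiment_score(tweet):
    """Crude lexicon score in [-1, 1]: (positive - negative) / tokens."""
    words = tweet.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

def sentiment_features(tweets):
    """Profile-level sentiment features: mean score and score variance."""
    scores = [sentiment_score(t) for t in tweets]
    m = sum(scores) / len(scores)
    var = sum((s - m) ** 2 for s in scores) / len(scores)
    return m, var
```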

Subrahmanian et al. (2016) described a 2015 competition in which six teams tried to identify influence bots on Twitter. The overall framework included: (1) machine learning; (2) clustering, outlier, and network analysis (e.g., finding bots that are distant from all clusters and using the local ego networks of known socialbots to gain insight into their structural connectivity patterns); and (3) classification and outlier analysis.
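"Distant from all clusters" can be made concrete as a distance-to-nearest-centroid test: given centroids of clusters of normal behavior in feature space, profiles farther than some threshold from every centroid are flagged. This is a generic sketch of that step, with a hypothetical threshold, not the competition teams' code.

```python
def nearest_centroid_distance(point, centroids):
    """Euclidean distance from a feature vector to its nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(point, c) for c in centroids)

def cluster_outliers(points, centroids, threshold):
    """Flag profiles that are far from every cluster of normal behavior."""
    return [p for p in points
            if nearest_centroid_distance(p, centroids) > threshold]
```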

A problem with this type of detection is that cyber criminals have begun to sell legitimate Twitter profiles that have been compromised (Stringhini et al. 2013). Attackers can buy friends, or even whole profiles, and use them for malicious purposes, which makes identifying sophisticated socialbots very difficult since they look like legitimate users.

Table 1 provides a summary of the types of features employed in socialbot identification.
Table 1

Feature types used for the identification of socialbots

Feature type | Description | Examples
Content based | Features extracted from the content the user posted, using methods such as text analysis and natural language processing | Number of links, number of replies/mentions
Graph based/network topology based | Features related to the profile's network: connections, retweets, mentions, hashtag co-occurrence | Average clustering coefficient of the retweet and mention network, number of followers
User based | Features based on profile information | Age, marital status, gender
Timing based | Features related to timing patterns in the profile's activities | Maximum idle duration between posts, average time between posts
Image content based | Metadata related to the images in the profile | Color histogram, color correlogram
Behavioral/activity based | Features related to a profile's activity | Number of friend requests a user has sent, fraction of incoming requests accepted
Semantic based | Features extracted from content using sentiment analysis algorithms | Emotion score
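Several of these feature types can be computed from a single profile record. The sketch below assumes a hypothetical profile layout (timestamped posts plus a few scalar fields) and draws one feature from each of several categories in Table 1.

```python
from datetime import datetime

def profile_features(profile):
    """Illustrative features spanning several categories:
    content, graph, user, timing, and behavioral based."""
    posts = profile["posts"]                 # list of (timestamp, text)
    texts = [t for _, t in posts]
    times = sorted(ts for ts, _ in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return {
        "links_per_post": sum(t.count("http") for t in texts)
                          / max(len(texts), 1),      # content based
        "followers": profile["followers"],           # graph based
        "age": profile["age"],                       # user based
        "avg_gap_s": sum(gaps) / max(len(gaps), 1),  # timing based
        "requests_sent": profile["requests_sent"],   # behavioral based
    }
```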

Table 2 lists the related research and the features used by each to identify socialbots.
Table 2

Features used by each study to identify socialbots

Feature types (columns): content based; graph based/network topology based; user based; timing based; image content based; behavioral/activity based; semantic based

Studies (rows): Benevenuto et al. (2010); Boshmaf et al. (2015); Chu et al. (2010); Davis et al. (2016); Dickerson et al. (2014); Gee and Teh (2010); Jin et al. (2011); Lee et al. (2011); Mccord and Chuah (2011); Stringhini et al. (2010); Subrahmanian et al. (2016); Song et al. (2011); Wagner et al. (2012); Wang (2010); Yang et al. (2011); Zhang et al. (2012)

Crowdsourcing-Based Detection

Wang et al. (2012) suggested using humans to detect Sybil profiles. They created an online social Turing test platform using data from Facebook and Renren in which "experts" and "turkers" classified profiles based on profile information. The authors observed that experts consistently produce near-optimal results. The limitations of this method are that it might not be cost-effective for an OSN with a large number of users and that sophisticated socialbots may appear as genuine as human profiles, so crowdsourcing might not be able to distinguish between a real user and a fake profile (in this study, only profile information was analyzed).
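A common way to aggregate crowdsourced labels, and a plausible reading of how multiple turker votes per profile would be combined, is a simple majority vote. The sketch below is generic; the cited study's exact aggregation may differ.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate several annotators' votes on one profile."""
    return Counter(labels).most_common(1)[0][0]

def crowd_classify(votes_per_profile):
    """votes_per_profile: {profile_id: [label, ...]} -> aggregated label."""
    return {pid: majority_vote(v) for pid, v in votes_per_profile.items()}
```

With an odd number of annotators per profile, a single careless vote cannot flip the outcome, which is one reason crowds can approach expert accuracy on easy cases.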

Table 3 summarizes defense solutions (detection methods) and attacks in a matrix, as a means of mapping current defense solutions to relevant attacks.
Table 3

Defense solutions versus attack matrix

Attacks (columns) include: socialbots that infiltrate an organization; Sybil attack; influence bot

Machine learning-based detection: Benevenuto et al. 2010; Gee and Teh 2010; Wang 2010; Jin et al. 2011; Mccord and Chuah 2011; Song et al. 2011; Wang et al. 2011; Zhang et al. 2012; Chu et al. 2010; Yang et al. 2011; Boshmaf et al. 2015; Davis et al. 2016; Dickerson et al. 2014; Wagner et al. 2012; Subrahmanian et al. 2016

Honeypots and monitoring-based detection: Webb et al. 2008; Lee et al. 2010; Stringhini et al. 2010; Lee et al. 2011; Paradise et al. 2014; Paradise et al. 2015; Burghouwt et al. 2011; Cao et al. 2014; Wang et al. 2013; Egele et al. 2015

Crowdsourcing-based detection: Wang et al. 2012

Graph-based detection: Pham et al. 2015; Yu et al. 2006; Yu et al. 2008; Danezis and Mittal 2009; Cao et al. 2012; Wei et al. 2012; Xie et al. 2012; Xue et al. 2013

Once a socialbot is detected, a number of actions need to be taken. The socialbot should be examined by the research community (Mahmoud et al. 2015) and analyzed to understand its behavior: the profile's features, its connections, and the content it has been exposed to or has sent. The socialbot can be analyzed with methods similar to those used to analyze data collected from honeypots (Holz et al. 2008). Analyzing the socialbot may help in developing new proactive defenses against this threat (Vogt et al. 2007).

In order to assess the damage caused by the socialbot to an individual, group, or organization, it is necessary to carefully examine its connections and friends, and even contact them to determine whether the socialbot took further actions, for example, sending a malicious e-mail.

Finally, the socialbot needs to be reported to the OSN providers so that the profile can be tracked and taken down.

Key Applications

Socialbots avoid detection and cause prolonged harm to users, communities, and OSN providers.

As mentioned above, socialbot applications may include promoting agendas and campaigns, obtaining an influential position, harvesting useful and sensitive information from infiltrated users, infiltrating communities and organizations, and distributing malicious content, rumors, spam, and misinformation.

Future Directions

In this article we presented the goals of socialbots and described several strategies and actions that socialbots perform to achieve these goals.

In order to provide the best detection mechanism, one must understand the motives, purposes, and strategies behind these fake profiles.

In general, as socialbots have become more sophisticated and deceptive, most detection methods have become less effective, and new solutions are therefore needed. Future work should focus on developing new algorithms for detecting sophisticated socialbots and on improving existing detection mechanisms. Attackers who use social networks to infiltrate an organization are largely unaddressed and undetected by traditional mechanisms. There is therefore a growing need for tools that can detect reconnaissance and initial penetration performed with the help of social networks.



  1. Abokhodair N, Yoo D, McDonald DW (2015) Dissecting a social botnet: growth, content and influence in Twitter. In: Proceedings of the 18th ACM conference on computer supported cooperative work & social computing, pp 839–851
  2. Ahmad I (2015) How many internet and #SocialMedia users are fake? http://www.digitalinformationworld.com/2015/04/infographic-how-many-internets-users-are-fake.html. Accessed 2 Apr 2015
  3. Aiello LM, Deplano M, Schifanella R, Ruffo G (2012) People are strange when you’re a stranger: impact and influence of bots on social networks. Links 697(483,151):1–566
  4. Ask M, Bondarenko P, Rekdal JE, Nordbø A, Bloemerus P, Piatkivskyi D (2013) Advanced persistent threat (APT) beyond the hype. Project report in IMT4582 Network security at Gjøvik University College, Springer
  5. Benevenuto F, Magno G, Rodrigues T, Almeida V (2010) Detecting spammers on twitter. In: CEAS, the seventh annual collaboration, electronic messaging, anti-abuse and spam conference, July 2010, vol 6, p 12
  6. Berger JM, Morgan J (2015) The ISIS Twitter census: defining and describing the population of ISIS supporters on Twitter. The Brookings project on US relations with the Islamic World 3(20)
  7. Beutel A, Xu W, Guruswami V, Palow C, Faloutsos C (2013) Copycatch: stopping group attacks by spotting lockstep behavior in social networks. In: Proceedings of the 22nd international conference on World Wide Web, pp 119–130
  8. Bilge L, Strufe T, Balzarotti D, Kirda E (2009) All your contacts are belong to us: automated identity theft attacks on social networks. In: Proceedings of the 18th international conference on World Wide Web, pp 551–560
  9. Bokobza Y, Paradise A, Rapaport G, Puzis R, Shapira B, Shabtai A (2015) Leak sinks: the threat of targeted social eavesdropping. In: 2015 IEEE/ACM international conference on advances in social networks analysis and mining, pp 375–382
  10. Boshmaf Y, Muslukhov I, Beznosov K, Ripeanu M (2011) The socialbot network: when bots socialize for fame and money. In: Proceedings of the 27th annual computer security applications conference, pp 93–102
  11. Boshmaf Y, Muslukhov I, Beznosov K, Ripeanu M (2012) Key challenges in defending against malicious socialbots. In: Presented as part of the 5th USENIX workshop on large-scale exploits and emergent threats
  12. Boshmaf Y, Muslukhov I, Beznosov K, Ripeanu M (2013) Design and analysis of a social botnet. Comput Netw 57(2):556–578
  13. Boshmaf Y, Ripeanu M, Beznosov K, Santos-Neto E (2015) Thwarting fake OSN profiles by predicting their victims. In: Proceedings of the 8th ACM workshop on artificial intelligence and security, pp 81–89
  14. Burghouwt P, Spruit M, Sips H (2011) Towards detection of botnet communication through social media by monitoring user activity. In: International conference on information systems security. Springer, Berlin/Heidelberg, pp 131–143
  15. Cao Q, Sirivianos M, Yang X, Pregueiro T (2012) Aiding the detection of fake profiles in large scale social online services. In: Proceedings of the 9th USENIX conference on networked systems design and implementation, pp 15–15
  16. Cao Q, Yang X, Yu J, Palow C (2014) Uncovering large groups of active malicious profiles in online social networks. In: Proceedings of the 2014 ACM SIGSAC conference on computer and communications security, pp 477–488
  17. Chen P, Desmet L, Huygens C (2014) A study on advanced persistent threats. In: Communications and multimedia security, pp 63–72
  18. Chu Z, Gianvecchio S, Wang H, Jajodia S (2010) Who is tweeting on Twitter: human, bot, or cyborg? In: Proceedings of the 26th annual computer security applications conference, pp 21–30
  19. Danezis G, Mittal P (2009) SybilInfer: detecting sybil nodes using social networks. Presented at NDSS, California, 8–11 Feb 2009
  20. Davis CA, Varol O, Ferrara E, Flammini A, Menczer F (2016) Botornot: a system to evaluate social bots. In: Proceedings of the 25th international conference companion on World Wide Web, pp 273–274
  21. Dickerson JP, Kagan V, Subrahmanian VS (2014) Using sentiment to detect bots on Twitter: are humans more opinionated than bots? In: 2014 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), pp 620–627
  22. Douceur JR (2002) The sybil attack. In: International workshop on peer-to-peer systems, pp 251–260
  23. Egele M, Stringhini G, Kruegel C, Vigna G (2015) Towards detecting compromised profiles on social networks. IEEE Trans Dependable Secure Comput
  24. Elyashar A, Fire M, Kagan D, Elovici Y (2013) Homing socialbots: intrusion on a specific organization’s employee using socialbots. In: Proceedings of the 2013 IEEE/ACM international conference on ASONAM, pp 1358–1365
  25. Elyashar A, Fire M, Kagan D, Elovici Y (2014) Guided socialbots: infiltrating the social networks of specific organizations’ employees. AI Commun 29(1):87–106
  26. Ferrara E (2015) Manipulation and abuse on social media. ACM SIGWEB Newsletter (Spring):4
  27. Ferrara E, Varol O, Davis C, Menczer F, Flammini A (2014) The rise of social bots. arXiv preprint arXiv:1407.5225
  28. Finkle J (2014) “Pony” botnet steals bitcoins, digital currencies: Trustwave. http://www.reuters.com/article/us-bitcoin-security-idUSBREA1N1JO20140224. Accessed 1 Jan 2014
  29. Fire M, Puzis R (2012) Organization mining using online. Netw Spatial Econ 16(2):545–578
  30. Freitas CA, Benevenuto F, Ghosh S, Veloso A (2014) Reverse engineering socialbot infiltration strategies in twitter. arXiv preprint arXiv:1405.4927
  31. Gee G, Teh H (2010) Twitter spammer profile detection. Unpublished
  32. Gilbert E, Karahalios K (2009) Predicting tie strength with social media. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 211–220
  33. Holz T, Steiner M, Dahl F, Biersack E, Freiling FC (2008) Measurements and mitigation of peer-to-peer-based botnets: a case study on storm worm. LEET 8(1):1–9
  34. Hwang T, Pearce I, Nanis M (2012) Socialbots: voices from the fronts. Interactions 19(2):38–45
  35. Ji Y, He Y, Jiang X, Li Q (2014) Towards social botnet behavior detecting in the end host. In: 2014 20th IEEE international conference on parallel and distributed systems (ICPADS), pp 320–327
  36. Ji Y, He Y, Jiang X, Cao J, Li Q (2016) Combating the evasion mechanisms of social bots. Comput Secur 58:230–249
  37. Jin X, Lin C, Luo J, Han J (2011) A data mining-based spam detection system for social media networks. Proc VLDB Endowment 4(12):1458–1461
  38. Jin L, Joshi JB, Anwar M (2013) Mutual-friend based attacks in social network systems. Comput Secur 37:15–30
  39. Jurgens D (2013) That’s what friends are for: inferring location in online social media platforms based on social relationships. ICWSM 13:273–282
  40. Karlinsky A (2014) How cybercriminals monetize information obtained from social networks. https://securityintelligence.com/how-cybercriminals-monetize-information-obtained-from-social-networks/. Accessed 3 Sep 2014
  41. Kim Y, Kim I, Park N (2014) Analysis of cyber attacks and security intelligence. In: Mobile, ubiquitous, and intelligent computing, pp 489–494
  42. Krombholz K, Merkl D, Weippl E (2012) Fake identities in social media: a case study on the sustainability of the Facebook business model. J Serv Sci Res 4(2):175–212
  43. Kwak H, Lee C, Park H, Moon S (2010) What is Twitter, a social network or a news media? In: Proceedings of the 19th international conference on World Wide Web, pp 591–600
  44. Lee K, Caverlee J, Webb S (2010) Uncovering social spammers: social honeypots + machine learning. In: Proceedings of the 33rd international ACM SIGIR conference on research and development in information retrieval, pp 435–442
  45. Lee K, Eoff BD, Caverlee J (2011) Seven months with the devils: a long-term study of content polluters on Twitter. Presented at ICWSM, Barcelona, 17–21 Jul 2011
  46. Magdon-Ismail M, Orecchio B (2012) Guard your connections: infiltration of a trust/reputation based network. In: Proceedings of the 4th annual ACM web science conference, pp 195–204
  47. Mahmoud M, Nir M, Matrawy A (2015) A survey on botnet architectures, detection and defences. Int J Netw Secur 17(3):264–281
  48. Mccord M, Chuah M (2011) Spam detection on twitter using traditional classifiers. In: Autonomic and trusted computing, pp 175–186
  49. Messias J, Schmidt L, Oliveira R, Benevenuto F (2013) You followed my bot! Transforming robots into influential users in Twitter. First Monday 18(7), July 2013
  50. Mitter CW, Strohmaier M (2013) Understanding the impact of socialbot attacks in online social networks. arXiv preprint arXiv:1402.6289
  51. Mitter S, Wagner C, Strohmaier M (2014) A categorization scheme for socialbot attacks in online social networks. arXiv preprint arXiv:1402.6288
  52. Mohaisen A, Yun A, Kim Y (2010) Measuring the mixing time of social graphs. In: Proceedings of the 10th ACM SIGCOMM conference on internet measurement, pp 383–389
  53. Molok NA, Ahmad A, Chang S (2011) Information leakage through online social networking: opening the doorway for advanced persistence threats. J Aust Ins Profess Intellig Officer 19(2):38
  54. Mustafaraj E, Metaxas PT (2010) From obscurity to prominence in minutes: political speech and real-time search. Unpublished
  55. Nazario J (2009) Twitter-based botnet command channel. https://www.arbornetworks.com/blog/asert/twitter-based-botnet-command-channel/. Accessed 13 Aug 2009
  56. Osterman Research Consultants (2016) The need to manage social media properly. http://ostermanresearch.com/wordpress/?p=138. Accessed 17 Mar 2016
  57. Paradise A, Puzis R, Shabtai A (2014) Anti-reconnaissance tools: detecting targeted socialbots. IEEE Internet Comput 18(5):11–19
  58. Paradise A, Shabtai A, Puzis R (2015) Hunting organization-targeted socialbots. In: Proceedings of the 2015 IEEE/ACM international conference on advances in social networks analysis and mining, pp 537–540
  59. Pernet C (2015) Reconnaissance via professional social networks. http://blog.trendmicro.com/trendlabs-security-intelligence/reconnaissance-via-professional-social-networks/. Accessed 2 Jun 2015
  60. Pham CV, Hoang HX, Vu MM (2015) Preventing and detecting infiltration on online social networks. In: Computational social networks, pp 60–73
  61. Polakis I, Kontaxis G, Antonatos S, Gessiou E, Petsas T, Markatos EP (2010) Using social networks to harvest email addresses. In: Proceedings of the 9th annual ACM workshop on privacy in the electronic society, pp 11–20
  62. Puri R (2003) Bots & botnet: an overview. SANS Institute 3:58
  63. Ratkiewicz J, Conover M, Meiss M, Gonçalves B, Patil S, Flammini A, Menczer F (2011) Truthy: mapping the spread of astroturf in microblog streams. In: Proceedings of the 20th international conference companion on World Wide Web, pp 249–252
  64. Ryan T, Mauch G (2010) Getting in bed with Robin Sage. Presented at Black Hat conference, Las Vegas, 24–27 Jul 2010
  65. Section 9 Lab (2014) Automated LinkedIn social engineering attacks. https://medium.com/section-9-lab/automated-linkedin-social-engineering-attacks-1c88573c577e. Accessed 1 Sep 2014
  66. Song J, Lee S, Kim J (2011) Spam filtering in twitter using sender-receiver relationship. In: International workshop on recent advances in intrusion detection, pp 301–317
  67. Sophos Press Release (2007) Sophos Facebook ID probe shows 41% of users happy to reveal all to potential identity thieves. http://www.sophos.com/en-us/press-office/press-releases/2007/08/facebook.aspx. Accessed 14 Aug 2007
  68. Stein T, Chen E, Mangla K (2011) Facebook immune system. In: Proceedings of the 4th workshop on social network systems, p 8
  69. Stringhini G, Kruegel C, Vigna G (2010) Detecting spammers on social networks. In: Proceedings of the 26th annual computer security applications conference, pp 1–9
  70. Stringhini G, Wang G, Egele M, Kruegel C, Vigna G, Zheng H, Zhao BY (2013) Follow the green: growth and dynamics in twitter follower markets. In: Proceedings of the 2013 conference on internet measurement conference, ACM, pp 163–176
  71. Subrahmanian VS, Azaria A, Durst S, Kagan V, Galstyan A, Lerman K, Waltzman R (2016) The darpa twitter bot challenge. arXiv preprint arXiv:1601.05140
  72. Sulick M (2016) Espionage and social media. https://www.thecipherbrief.com/article/espionage-and-social-media. Accessed 30 Jan 2016
  73. The Web Ecology Project (2011) The 2011 socialbots competition. http://www.webecologyproject.org/category/competition
  74. Thomas K, Grier C, Paxson V (2012) Adapting social spam infrastructure for political censorship. In: Presented as part of the 5th USENIX workshop on large-scale exploits
  75. Trend Micro (2015) Social media malware on the rise. http://blog.trendmicro.com/social-media-malware-on-the-rise/. Accessed 24 Feb 2015
  76. Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460
  77. Vogt R, Aycock J, Jacobson MJ Jr (2007) Army of botnets. Presented at NDSS, California, 28 Feb–2 Mar 2007
  78. Wagner C, Mitter S, Körner C, Strohmaier M (2012) When social bots attack: modeling susceptibility of users in online social networks. In: Proceedings of the 2nd workshop on making sense of microposts (#MSM2012), pp 46–48
  79. Wang AH (2010) Detecting spam bots in online social networking sites: a machine learning approach. In: IFIP annual conference on data and applications security and privacy, pp 335–342
  80. Wang D, Irani D, Pu C (2011) A social-spam detection framework. In: 8th annual conference on collaboration, electronic messaging, anti-abuse and spam, pp 46–54
  81. Wang G, Mohanlal M, Wilson C, Wang X, Metzger M, Zheng H, Zhao BY (2012) Social turing tests: crowdsourcing sybil detection. arXiv preprint arXiv:1205.3856
  82. Wang G, Konolige T, Wilson C, Wang X, Zheng H, Zhao BY (2013) You are how you click: clickstream analysis for sybil detection. In: Presented as part of the 22nd USENIX security symposium (USENIX security 13), pp 241–256
  83. Webb S, Caverlee J, Pu C (2008) Social honeypots: making friends with a spammer near you. Presented at CEAS, California
  84. Wei W, Xu F, Tan CC, Li Q (2012) Sybildefender: defend against sybil attacks in large social networks. In: INFOCOM, 2012 proceedings IEEE, pp 1951–1959
  85. Wuest C (2010) The risks of social networking. https://www.symantec.com/content
  86. Xie Y, Yu F, Ke Q, Abadi M, Gillum E, Vitaldevaria K, Mao ZM (2012) Innocent by association: early recognition of legitimate users. In: Proceedings of the 2012 ACM conference on computer and communications security, pp 353–364
  87. Xue J, Yang Z, Yang X, Wang X, Chen L, Dai Y (2013) Votetrust: leveraging friend invitation graph to defend against social network sybils. In: INFOCOM, 2013 proceedings IEEE, pp 2400–2408
  88. Yang Z, Wilson C, Wang X, Gao T, Zhao B, Dai Y (2011) Uncovering social network sybils in the wild. arXiv preprint arXiv:1106.5321
  89. Yu H, Kaminsky M, Gibbons PB, Flaxman AD (2006) Sybilguard: defending against sybil attacks via social networks. IEEE/ACM Trans Netw 16(3):576–589
  90. Yu H et al (2008) Sybillimit: a near-optimal social network defense against sybil attacks. In: IEEE symposium on security and privacy
  91. Zhang X, Zhu S, Liang W (2012) Detecting spam and promoting campaigns in the twitter social network. In: 2012 IEEE 12th international conference on data mining, pp 1194–1199

Copyright information

© Springer Science+Business Media LLC 2017

Authors and Affiliations

  1. Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel

Section editors and affiliations

  • V. S. Subrahmanian (1)
  • Jeffrey Chan (2)
  1. University of Maryland, College Park, USA
  2. RMIT (Royal Melbourne Institute of Technology), Melbourne, Australia