Encyclopedia of Security and Emergency Management

Living Edition
| Editors: Lauren R. Shapiro, Marie-Helen Maras

Insider Threat

  • Adam D. Williams (email author)
  • Shannon N. Abbott
  • Adriane C. Littlefield
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-69891-5_156-1


Insider threat refers to one or more individuals with authorized access to critical facilities, materials, or information who could attempt unauthorized removal or sabotage, or who could aid an external adversary in doing so.


While physical security is often perceived as guards, gates, and guns, all infrastructure security events have a human dimension. Any individual with authorized access to or knowledge of a site has the potential to play a role – positive or negative – in a security event. In most cases, this human dimension manifests in how personnel play a positive role in preventing, detecting, and responding to security events. In other cases, however, personnel play a negative role and facilitate security events through a lack of awareness, negligence, or unintentional acts. In extreme cases, individuals cause security events by committing intentional and malicious acts against critical facilities, materials, or information. These individuals could be anyone, including those with regular (e.g., staff, management, janitorial, or security personnel), irregular (e.g., contractors, inspectors, or first responders), and limited (e.g., visitors or former employees) authorized access. This constitutes the so-called insider threat, a term defined by the National Insider Threat Task Force (2016) as:

the risk [that] an insider will use their authorized access, wittingly or unwittingly, to do harm to their organization. This can include theft of proprietary information and technology; damage to company facilities, systems or equipment; actual or threatened harm to employees; or other actions that would prevent the company from carrying out its normal business practices. (p. 3)

Thus, unlike external threats, the insider threat is difficult for organizations to address and mitigate because malicious insiders have “legitimate authorized access to your most confidential, valuable information systems, and they can use that legitimate access to perform criminal [or malicious] activity” (Cappelli et al. 2012, p. xviii). Although “many institutions have ‘perimeter defenses’ (gates, guards, access controls, and computer firewalls) [they] are nonetheless vulnerable to insider theft or destruction of critical data [materials, or facilities]” (NITTF 2016, p. 2). Most individuals will never attempt a malicious insider attack, but because any person with authorized access has the potential to do so, there is a need to adequately address and mitigate the insider threat. This chapter will begin with a description of insider advantages, attributes, and motivations, as well as categories by which to evaluate potential insider threats. It will then outline broad concepts and strategies for preventing and mitigating insider threat attempts – including steps to be taken before and after personnel are granted access, authority, and knowledge to critical pieces of infrastructure.

Insider Threat Advantages, Attributes, Motivations, and Categories

Unlike traditional, external threats that rely on overwhelming shows of force, the insider threat uses stealth (e.g., blending into normal operations) or deceit (e.g., spoofing normal operations) to perpetrate a malicious act – leveraging key advantages of legitimate, authorized access to critical facilities, materials, or information. Authorized access to sensitive facilities or materials provides insiders with time to collect additional information or develop a plan for an attack. Potential insiders can use this time to operate at the margins of normal business practices (e.g., steal small quantities of materials below reporting thresholds), establish new behavioral patterns (e.g., regularly access a new vault), or change operational procedures (e.g., grant themselves network administration permissions) to avoid suspicion and detection. Additionally, insiders who wish to carry out a malicious attack have the tools necessary to do so – they know what capabilities the facility has (such as a crane to lift heavy items) and consequently do not have to bring in their own tools. For external attacks, a traditional concern for security designers is how many tools an adversary can bring into a facility. In contrast, an insider already has knowledge of and access to the necessary tools.

Time and access to tools lend insiders a third distinct advantage – the ability to test the system to identify whether an attack might be detected. Finally, insiders have the potential to leverage their position within the organization to influence the behavior of others. The influence of insiders can lead others to wittingly or unwittingly aid the insider with the attack (e.g., teamwork). Not all individuals who help insiders do so knowingly, as insiders can use their position to convince others to help in seemingly routine or benign ways. However, multiple insiders knowingly working together can be especially disastrous. The four advantages discussed above – time, tools, tests, and teamwork – increase the importance of addressing and mitigating the insider threat. Further, one way to describe insiders is in terms of four key attributes: access, authority, knowledge, and motivation.

Access includes the permission to be in a facility (or certain locations within a facility), to use specific materials or technology, and to view or obtain sensitive information. Typically, control over access is implemented to protect sensitive facilities, materials, and information. However, the primary distinguishing attribute of insiders is their permission to be in close proximity to or influence the use of sensitive facilities, materials, and information without setting off alarms or causing concern. Individuals with this level of access include all operational employees (such as staff and all levels of management) and support employees (such as the janitorial staff, security personnel, and other staff considered ancillary to a facility’s primary objective). In addition, individuals with irregular access (such as inspectors, regulators, visitors, first responders, and maintenance personnel) can also pose a potential insider threat, as they have access to sensitive facilities, materials, or information that can be exploited in an insider attack. Consider, for example, “evidence of potential tampering as the cause of the abnormal condition” after engine coolant was found in the oil system of a diesel generator at the San Onofre Nuclear Power Plant in 2012 (Bunn and Sagan 2014, p. 1, footnote 2). This describes a potential insider scenario because not many individuals have permission to be near diesel generators at nuclear power plants, let alone to access the oil system without raising suspicion.

In addition to access, some insiders have authority or the power to influence behaviors or outcomes of people, procedures, or operations. Authority can be formal, to include hierarchical power structures (e.g., management and leadership within an organization), control over communications protocols (e.g., supervising the central alarm station), and direct responsibility over sensitive operations. In addition, authority can be informal, to include influence driven by professional seniority or expertise, cultural norms, familial lineage, tribal affiliation, or personal affection. In addition to influencing people, authority can also influence operations – for example, an individual with authority could potentially delay the assessment and communication of an alarm to security personnel or ignore (and fail to report) patterns of unauthorized access of critical computer servers. Authority can even impact procedures, such as changing the protocols for retrieving items from a high security vault or making it more difficult to report anomalous security behaviors. Here, the £26.5 million heist from the Northern Bank in Northern Ireland – in which one of the bank managers was convicted of playing an active role – is illustrative (Bunn and Sagan 2014).

Knowledge is an insider attribute that indicates a more detailed understanding of critical facilities, materials, or information than is held by the general public – and affords a potential insider additional benefits for carrying out an attack. First, insiders have knowledge of detailed facility layouts, operations, and normal business practices, while an outside attacker would know much less about a facility. Insider knowledge may include how to avoid or defeat protections at a facility such as security force capabilities and locations (or deployment strategies), details of security systems, or bypass activities for implemented protections. In addition, insider knowledge can include how to optimize targeting during an attack (e.g., to maximize damage to the facility or steal the most critical assets). Finally, insiders can combine all their knowledge to carry out an insider attack more expeditiously. Illustrating the role of knowledge in insider threat cases, one database demonstrated that “guards were responsible for 41 percent of insider thefts at non-nuclear guarded facilities” (Bunn and Sagan 2014, p. 4). For additional details, descriptions, examples, and case studies please see Hoffman et al. (1990) and Bunn and Sagan (2017) in the Future Readings list.

These attributes – authority, access, and knowledge – alone do not transform an average employee into an insider. Most often, this transformation is triggered by a motivation – or the reason(s) driving a set of actions or behaviors. There is no single motivation guaranteed to transition an individual with authority, access, and knowledge into a malicious insider, as different external or internal drivers motivate different individuals to act against sensitive facilities, materials, or information. For example, consider an individual with environmentalist sympathies who gains employment at an oil refinery in order to access and damage operational control rooms. This individual exhibits a political motivation, where the malicious act was used to raise awareness for a government or policy issue. Other common motivations include:
  • Ideological: acting in response to a system of ideas and ideals, especially political beliefs.

  • Financial: wanting or needing additional money.

  • Personal/disgruntlement: an individual response to a perceived slight.

  • Psychotic: mental instability driving behaviors.

  • Coercion: external pressure applied to an individual, their family, or their assets to influence behavior (Homoliak et al. 2018; Tsenov et al. 2018).

Each distinct motivation may lead to insight for both the level of potential malicious intent and the likelihood the attempt will occur – and, therefore, helps identify appropriate mitigation strategies. For example, a disgruntled employee with a personal motivation is most likely to commit an insider attack in the 60 days before or after they are fired. This means facilities should more diligently monitor physical and cyber security in the days following an employee dismissal (e.g., cross-check access and change logs) (Collins et al. 2016). Ultimately not all insiders will attempt or carry out a malicious attack, but when insider opportunity is combined with insider motivations, an insider attempt becomes more likely.
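The cross-checking of access logs against personnel events described above can be automated in simple ways. As a minimal sketch – with hypothetical employee IDs, log fields, and a 60-day window taken from the figure cited above – a reviewer might flag badge or system accesses that occur close to a termination date for follow-up:

```python
from datetime import date

# Hypothetical access-log entries: (employee_id, event_date, resource)
ACCESS_LOG = [
    ("e42", date(2023, 5, 1), "server-room"),
    ("e42", date(2023, 7, 20), "hr-database"),
    ("e07", date(2023, 5, 3), "lobby"),
]

# Hypothetical HR record mapping employee IDs to dismissal dates
TERMINATIONS = {"e42": date(2023, 6, 15)}

def flag_events_near_termination(log, terminations, window_days=60):
    """Return log entries falling within +/- window_days of an
    employee's termination date, for human review (not automatic
    disciplinary action)."""
    flagged = []
    for emp, when, resource in log:
        fired_on = terminations.get(emp)
        if fired_on is None:
            continue  # employee not recently dismissed
        if abs((when - fired_on).days) <= window_days:
            flagged.append((emp, when, resource))
    return flagged
```

Such a filter only surfaces candidates for investigation; deciding whether an event is actually suspicious remains a judgment call for security staff.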

Access, authority, knowledge, and motivation can be combined to define categories of potential insiders to understand the range of possible malicious actions. Here, several questions help scope the range of insiders:
  • Is the insider internally (e.g., disgruntlement) or externally (e.g., coercion) motivated toward malicious acts?

  • What level of actions or involvement is the insider willing to commit in support of the malicious act?

  • How violent is the insider willing to be in support of the malicious act?

The answers to these questions describe different roles insiders may take during a malicious act against sensitive facilities, materials, or information. If an insider is non-violent and does not actively participate in the malicious act, they are categorized as a passive insider. Passive insiders may intentionally (or unintentionally) pass information to an external adversary or may unintentionally aid in a malicious act (e.g., leaving a computer workstation unlocked) but will often stop if they believe they have been detected. These insiders may be looking to make extra money or could be coerced by an adversary who is blackmailing them in exchange for information. If an insider is non-violent but willing to contribute more actively to a malicious act, they are categorized as a non-violent, active insider. Active, non-violent insiders may leave doors unlocked, turn off security cameras, or grant unauthorized access to a computer network. Typically, active, non-violent insiders work with external adversaries (or other insiders) to achieve malicious aims, but they do not conduct the act themselves. If an insider is violent and willing to contribute even more actively to a malicious act, they can be categorized as a violent, active insider. Active, violent insiders likely execute the malicious act themselves – either with or without external help. Active, violent insiders are willing to inflict bodily harm on others in the facility and could have multiple motivations for the malicious act. These three categories help explain the insider threat and help identify best practices for detection, prevention, protection, and mitigation.
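The mapping from the scoping questions to the three categories above can be expressed compactly. As an illustrative sketch (the function name and the convention that violence implies active participation are assumptions, since the text defines no violent-but-passive category):

```python
def insider_category(active: bool, violent: bool) -> str:
    """Map two of the scoping questions - willingness to participate
    actively and willingness to use violence - onto the three insider
    categories described in the text."""
    if violent:
        # The text treats violent insiders as executing the act
        # themselves, i.e., as inherently active participants.
        return "violent, active insider"
    if active:
        return "non-violent, active insider"
    return "passive insider"
```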

Many insiders have the access, authority, and knowledge to carry out an attack, giving them an insider opportunity. Yet, potential insiders also need a driver (or trigger event) to initiate an insider attack, requiring insider motivations. This suggests that an insider attack results from the combination of an insider opportunity (a function of access, authority, knowledge) and insider motivations (a function of internal or external drivers toward a malicious act).

Preventative Measures Against Insider Threats

While it is impossible to eliminate the insider threat, one method for identifying how to appropriately respond is evaluating individuals along a spectrum of access, authority, and knowledge. From this perspective, there are three states in which an insider can exist (Cappelli et al. 2012; Homoliak et al. 2018): (1) those applying for authorized access (e.g., job applicants or visitors); (2) those with authorized access – and presumably knowledge (e.g., employees, contractors, visitors); and (3) trusted individuals with additional opportunities (e.g., increased access permissions, management). People in each of these three states have varying levels of access, authority, and knowledge which require different responses and technological, procedural, and administrative controls.

Preventative measures are mitigations aimed at preventing insider threat opportunities before they arise. More specifically, the goals of preventative measures are to exclude potential insiders by identifying undesirable behavior or characteristics before allowing access and to minimize opportunities for malicious acts by limiting access, authority, and knowledge. Preventative measures can take place before individuals are granted access, such as during the hiring process for employees, or through a set of procedures laid out for irregular access, like visits from first responders. However, preventative measures can also take place after access has been granted – though post-access preventative measures are more difficult to implement. The two most successful and well-known preventative strategies are trustworthiness programs and promoting a strong security culture.

Trustworthiness programs have no official or common definition, but a working definition would include all efforts, initiatives, and processes to ensure that employees are vetted and found to be responsible, upstanding, and principled. Key elements of trustworthiness programs include a threat or risk assessment, well-defined personnel security requirements, effective implementation of the program, and regular review and evaluation of effectiveness. Delving further into these key elements, a threat or risk assessment should consider potential adversaries (including their intent, capabilities, and tactics), identify potential targets at each facility, and account for possible insider motivations. The risk assessment should be used to scale the trustworthiness program to the risk at a given facility and inform personnel security requirements. The second key element, personnel security requirements, should define information and physical access levels, set eligibility criteria for access, and be codified through legal or regulatory mechanisms. Third, trustworthiness program implementation should establish standardized personnel screening processes that could include pre-employment background checks, psychological or medical evaluations, adjudication mechanisms for handling disputes, and continued reinvestigation. Finally, trustworthiness programs should be regularly reviewed and evaluated, and anomalous activities and incidents should be thoroughly investigated and reported. Examples of trustworthiness programs include the Human Reliability Program (US DOE), Access Authorization Programs (US NRC), and Personnel Security Programs (implemented internationally).

Unlike trustworthiness programs, which aim to determine if a person should be allowed access, the goal of security culture is to promote and reinforce good security practices while individuals have access to the facility. The concept of security culture ultimately stems from the broader theory of organizational culture, which argues that the dynamics of organizations can be described by the interactions between core beliefs (e.g., underlying assumptions), shared characteristics (e.g., espoused values), and observable behaviors (e.g., visible artifacts) (Schein 1990). In the nuclear security context, these cultural elements center on the perception that a credible threat exists and the belief that nuclear security is vital to combatting such threats. Relatedly, Cole et al. (2013) offer a review by human factor experts at Sandia National Laboratories that identified a range of interpretations for the term safety culture. While safety culture and security culture have different end goals, they face similar challenges in their implementation. To be successful in improving security culture, organizations should promote values such as awareness, reporting, flexibility, learning, and just practices.

In order for security culture to take hold there must be buy-in from senior leadership and management. While security culture can be influenced by the international community, national regulations and priorities, public support, norms, and threat credibility, it is ultimately manifested in management systems and employee behaviors. A badging system only works well when everyone has to wear a badge – including the president of the organization. Insider prevention is only as good as the security and security culture of an organization.

Protective Measures to Mitigate Insider Threats

Where preventative measures seek to ensure that a threat does not access a facility, protective measures come into play when preventative measures have failed – meaning that potential insiders have been granted authorized access. Protective measures are implemented to inhibit inappropriate access to sensitive facilities, materials, or information by those with authorized access to a subset of these assets. Similar to the objectives of physical security (which is further explored in the “Physical Security: Methods (e.g., Crime Prevention Through Environmental Design/CPTED) and Practices (e.g., Surveillance)” chapter), the goals of protective measures are to detect, delay, and respond to malicious insider acts to mitigate or minimize negative consequences. Protective measures take place after individuals are granted regular access (e.g., after being hired), enhanced access (e.g., additional clearances are granted), and irregular access (e.g., once inspectors or visitors are on the premises). Two common protective strategies include access control systems and employees reporting suspicious behavior.

Access control technologies are implemented to intentionally restrict proximity to (or contact with) sensitive facilities, materials, or information. In so doing, the protective measure distinguishes between who does (and who does not) have authorized access – a direct protection against potential insider threats. For example, access to a nuclear power plant’s control room can range from visitors being escorted on tours to operators entering the room unescorted during business hours to power plant management with the authority to enter the control room during nonbusiness hours. In addition, access controls protect against the insider threat by helping facilities systematize and expedite decisions regarding who can appropriately access what locations, materials, or information within their facility. Access control technologies can include badge readers, biometrics, passwords, physical locks, pin code-based controls, or some combination of these technologies to ensure only trusted employees can access sensitive locations, information, and materials. Access control technologies, then, actively restrict – and account for – all individuals attempting to access sensitive assets to protect them against potential malicious insider acts.
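The tiered control-room example above can be sketched as a simple role-based policy check. This is a minimal illustration with assumed role names, business hours, and a fail-closed default – not a depiction of any real plant's system:

```python
from datetime import time

# Hypothetical role-based policy mirroring the tiers in the text:
# escorted visitors, operators during business hours, managers anytime.
POLICY = {
    "visitor":  {"needs_escort": True,  "after_hours": False},
    "operator": {"needs_escort": False, "after_hours": False},
    "manager":  {"needs_escort": False, "after_hours": True},
}

BUSINESS_HOURS = (time(8, 0), time(17, 0))

def access_decision(role, request_time, escorted=False):
    """Grant or deny control-room access based on role, time of
    day, and escort status."""
    rules = POLICY.get(role)
    if rules is None:
        return "deny"  # unknown role: fail closed
    if rules["needs_escort"] and not escorted:
        return "deny"
    start, end = BUSINESS_HOURS
    if not (start <= request_time <= end) and not rules["after_hours"]:
        return "deny"
    return "grant"
```

Real deployments layer such rules over badge readers, biometrics, and audit logging, but the core decision logic – who, where, when, under what conditions – has this shape.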

While access control technologies provide enhanced protection against the insider threat, they are only effective if the databases and supporting digital infrastructure are themselves protected. If an insider can edit the access control database, they can allow themselves or others authorized access to any facility or location within it. Yet, protections on these databases and networks work in concert with traditional access controls in reinforcing dynamics to improve insider threat protection. There is a similarly high need for vigilance in individuals with access control responsibilities (including managing the related databases) to protect against insiders. Individual vigilance can be supported by organizational procedures (e.g., implementing the two-person rule for access or strict change management procedures) and improving security culture. Additionally, if an insider successfully executes a malicious act, access controls can be used forensically to support attribution of the malicious act. For example, badge reader data for a toxic chemical storage vault can help identify a disgruntled maintenance worker who poured a hazardous liquid into the facility’s ventilation system.
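The two-person rule mentioned above can be enforced in software as well as procedure. As a minimal sketch (the field names and the in-memory "database" are assumptions for illustration), a change to the access-control list is applied only when two distinct approvers, neither of them the requester, have signed off:

```python
def apply_access_change(change, approvals, db):
    """Apply a change to an access-control list only if the
    two-person rule is satisfied: at least two distinct approvers,
    excluding the person who requested the change."""
    approvers = {a for a in approvals if a != change["requested_by"]}
    if len(approvers) < 2:
        raise PermissionError("two-person rule not satisfied")
    db[change["subject"]] = change["new_level"]
    return db
```

For example, a request by "alice" to raise "bob" to vault-level access would go through only with approvals from two other people; an approval set containing only the requester plus one colleague would be rejected.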

As potential insiders know many of the technological protections in place, a second protective measure relates to employee observation of behavioral patterns to limit opportunities for unauthorized access. Behavioral reporting programs are related to preventative measures (e.g., human reliability programs and nuclear security culture) but include more formalized protocols to stop the inappropriate access of sensitive facilities, materials, and information by individuals with authorized access. Like their technological counterparts, protective measures based on behavioral observations can become more stringent based on the sensitivity of facilities, materials, or information. For example, concerning political statements made by janitorial staff should not warrant the same level of potential concern as those same comments made by a chief scientific officer with unfettered facility access. In general, reporting of unusual or suspicious behavior should not lead immediately to disciplinary action, but rather to a comprehensive investigation to determine if an insider may have developed the motivation to carry out an attack.

In addition to requiring high-level support, organizations must consider behavioral reporting programs carefully. One major challenge behavioral reporting programs face is the potential conflict with social culture or personal feelings, where employees are reluctant to “betray” a friend/colleague or feel betrayed by those who reported them. Thus, the success of these types of protective measures rests on employees feeling comfortable reporting suspicious or unusual behaviors of their colleagues without fear of reprisal. In order to avoid some of the negative feelings that can arise from behavioral reporting programs, companies must have a clear definition of what “suspicious behaviors” are and when they expect their employees to report. Unclear definitions and guidelines can cause confusion and lead to frustration and anger. Finally, organizations must have timely and “fair” responses to indicators and behavioral reports – along with robust records to document the justification for their decisions. Examples of “fair” responses to reports of suspicious or unusual behaviors typically range from intensified monitoring to restrictions in job responsibilities to termination and even legal action.


Sustaining a high-performing insider threat mitigation strategy is challenged by the difficulty in achieving – and maintaining – high levels of human reliability surrounding sensitive facilities, materials, and information. One difficulty in human reliability is the conflict between high-quality insider threat initiatives (e.g., enhanced performance monitoring), legal/regulatory requirements, and individual privacy. Another difficulty is often posed by the underlying social culture, especially if cultural sensitivities tend toward a lack of trust in government (or non-familial authority), high levels of tribal loyalty, or an inherent trust of others. In addition, it is difficult to measure the effectiveness of insider threat mitigation programs, as proving that an insider attack did not happen because of a robust insider threat program is extremely challenging. Despite this, evidence suggests these programs have been successful in mitigating insider threats. To this end, Bunn and Sagan (2014) present ten lessons learned from past insider threat cases and declare the “main lesson of all these cases is: do not assume, always assess – and assess (and test) as realistically as possible” (p. 20).

The potential for insider threats poses an evolving, multifaceted set of challenges to the security of sensitive facilities, materials, or information. As discussed above, potential mitigation strategies must be similarly flexible, wide-ranging, and applied continuously throughout the period that individuals have (or are seeking) authorized access to sensitive assets. In response, the US National Insider Threat Task Force has published a set of best practices to help outline appropriate insider threat mitigation strategies (2016, 2017). These best practices include deciding who should be engaged, determining what matters most to your organization, reassessing personnel management practices, developing clear termination procedures, engaging the workforce, reviewing systems for security vulnerabilities, engaging privacy experts, putting information in context, and testing your security posture. In addition to regular testing, insider threat mitigation strategies should be updated each time a significant challenge is identified. The need for vigilance and diligence in insider threat mitigation stresses “a dynamic effort requiring constant evaluation, fresh perspectives, and updated approaches” (NITTF 2017) to effectively detect, prevent, mitigate, and protect against insider threats.



References

  1. Bunn, M., & Sagan, S. (2014). A worst practices guide to insider threats: Lessons from past mistakes. Cambridge, MA: American Academy of Arts & Sciences.
  2. Cole, K. S., Stevens-Adams, S. M., & Wenner, C. A. (2013). A literature review of safety culture (SAND2013-2754). Albuquerque: Sandia National Laboratories.
  3. Collins, M. L., Theis, M. C., Trzeciak, R. F., Strozer, J. R., Clark, J. W., Costa, D. L., Cassidy, T., Albrethsen, M. J., & Moore, A. P. (2016). Common sense guide to mitigating insider threats (CMU/SEI-2016-TR-015, CERT Insider Threat Center) (5th ed.). Pittsburgh: Carnegie Mellon University.
  4. Homoliak, I., Toffalini, F., Guarnizo, J., Elovici, Y., & Ochoa, M. (2018). Insight into insiders and IT: A survey of insider threat taxonomies, analysis, modeling, and countermeasures. ACM Computing Surveys, preprint, arXiv:1805.01612.
  5. National Insider Threat Task Force. (2016). Protect your organization from the inside out: Government best practices. Washington, DC: Government Printing Office.
  6. National Insider Threat Task Force. (2017). Insider threat guide: A compendium of best practices to accompany the National Insider Threat Minimum Standards. Washington, DC: Government Printing Office.
  7. Schein, E. (1990). Organizational culture. American Psychologist, 45(2), 109–119.
  8. Tsenov, B. G., Emery, R. J., Whitehead, L. W., Reingle Gonzalez, J., & Gemeinhardt, G. L. (2018). A pilot examination of the methods used to counteract insider threat security risks associated with the use of radioactive materials in the research and clinical setting. Health Physics, 114(3), 352–359.

Future Readings

  1. Bunn, M., & Sagan, S. (Eds.). (2017). Insider threats. Ithaca: Cornell University Press.
  2. Cappelli, D., Moore, A., & Trzeciak, T. (2012). The CERT® guide to insider threats: How to prevent, detect, and respond to information technology crimes (theft, sabotage, fraud). Upper Saddle River: Addison-Wesley.
  3. Hoffman, B., Meyer, C., Schwarz, B., & Duncan, J. (1990). Insider crime: The threat to nuclear facilities and programs. Santa Monica: RAND.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Adam D. Williams (email author)
  • Shannon N. Abbott
  • Adriane C. Littlefield

Center for Global Security & Cooperation, Sandia National Laboratories, Albuquerque, USA