So far, we have painted a rather gloomy picture, suggesting that when so much data and so many systems are digitized, the adversaries seem to be, if not winning, then pulling significantly ahead in the cyber game. Yet, is it really as gloomy as we might think? Let us now explore what solutions are available to businesses and how well these solutions address various cybersecurity risks.

Wouldn’t it be great to have one universal solution for all cybersecurity issues? Imagine that you could apply a simple tool to estimate all your potential cybersecurity risks and, as a result, the tool would give you a set of concrete recommendations about what you should do to protect yourself and your business against those risks. Unfortunately, the world is much more complex than that. We saw in the previous sections that cybersecurity threats are multiple, vulnerabilities are not easy to anticipate, and risks are difficult to estimate. Therefore, the solutions we apply to cybersecurity issues should also be multifaceted. When we talk about cybersecurity for business, three types of solutions exist: the Technology-driven approach (or “patching with technology” [1, 2]); the Human-centered approach (or “patching with people” [1, 2]); and the Canvas approach (or “patching with frameworks and architectures”). In what follows, we first consider the Canvas approach, and then turn to a discussion of the Technology-driven and Human-centered approaches.

Canvas Approach

The Canvas approach is represented by general frameworks and architectures for cybersecurity. These sit on top of technology-driven and human-centered approaches, as they incorporate “patching with technology” and usually include “patching with people”, at least to some extent. Apart from frameworks and architectures, the Canvas approach also incorporates risk management methods aimed at approximating and measuring the most relevant cybersecurity risks. Usually, the Canvas approach is discussed separately and not included in descriptions of solutions; however, for the purposes of our explanation, we treat it as a separate category because it helps to explain how various cybersecurity solutions are organized at a strategic level, while “patching with technology” and “patching with people” operate at a tactical level.

It is important to understand the differences between frameworks, architectures, and risk management tools for cybersecurity. Even though all three are aimed at decreasing and managing cybersecurity risks, they are not the same. Frameworks refer to sets of non-compulsory or voluntary guidelines, recommendations, best practices, and standards aimed at helping businesses to strengthen their cybersecurity and to alleviate cyber risks. Architectures explain how business systems (including computer systems, technological systems, human systems, and supply-chain systems) should be designed in order to reach business cybersecurity goals. Risk management tools include risk assessment, risk identification, risk analysis, risk evaluation, and risk mapping techniques, which help to estimate the probability of harm associated with various adverse cyber events, to understand the nature of these events, and to determine what countermeasures could be applied to address them.

Frameworks are very widespread cybersecurity canvas tools used by many businesses. Even though approximately 250 different security frameworks exist globally,Footnote 1 according to a survey conducted by Tenable and released at the beginning of 2018,Footnote 2 four frameworks stand out as the most widely adopted.

  • Payment Card Industry Data Security Standard (PCI DSS) is used by 47% of surveyed organizations;

  • ISO 27001/27002 is implemented by 35% of surveyed organizations;

  • Center for Internet Security Critical Security Controls (CIS) is adopted in 32% of surveyed organizations;

  • National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity, known as the NIST Cybersecurity Framework (NIST CSF), is in place at 29% of surveyed organizations.

The quoted percentages do not add up to 100% because, even though 84% of surveyed organizations revealed that they were using some type of framework, 44% explained that they were using more than one. Tenable reported a high level of framework adoption among organizations of different sizes: 90% of large organizations with 10,000 employees and 77% of relatively small organizations with 1000 or fewer employees operate using a cybersecurity framework.

Even though frameworks allow cybersecurity to be viewed from a strategic level, it is hard to estimate their practical value to organizations. They seem to offer a general flow of strategic actions for cybersecurity (i.e., they explain “what” should be achieved), yet it is not clear from the frameworks which particular actions should be taken to reach favorable cybersecurity outcomes (i.e., “how” or “in what way” the strategic “what”s could be attained). Of course, much of this is driven by the underlying generality and universal appeal of frameworks, as it is implied that they should be useful for a wide variety of organizations operating in different countries, working in different industries, and facing various contexts. Yet, at the same time, it is hard to avoid noticing that the most popular frameworks look very similar (see Table 5.1 and Fig. 5.1).

Table 5.1 Most widely used cybersecurity frameworks in the USA
Fig. 5.1 CIS framework explained

Specifically, three of the four frameworks (PCI DSS, ISO, and NIST CSF) include five components which detail the strategic flow of activities aimed at preventing and responding to various cybersecurity risks. The CIS framework offers a layered approach, splitting activities into Basic, Foundational, and Organizational groups that capture different levels of strategic thinking. Yet, in essence, it also offers a high-level plan, although it suggests that all components should work simultaneously rather than sequentially.

In other words, while frameworks seem to provide a nice starting point for those not familiar with cybersecurity and new to the field, it is hard to distill from frameworks how one should go about tackling various types of cybersecurity risks.

Architectures offer more concrete instruments to ensure cybersecurity. Cybersecurity architectures are usually very specific to the businesses operating digital systems and depend on a wide variety of factors: the industry in which the business operates, its size, its supply-chain features, the characteristics of its business-to-customer and business-to-business relationships, the context in which the business operates (e.g., political, geographical, economic, and legal factors which impact day-to-day operations), etc. Providing general advice on how architectures should be constructed is extremely difficult, as they need to be designed on a case-by-case basis and take into account security goals as well as the needs of specific businesses. Nevertheless, some general principles are offered by governmental bodies. For example, the UK’s National Cyber Security Centre (NCSC) released four basic goalsFootnote 3 for good cybersecurity architecture design. According to the NCSC, any cybersecurity architecture should pursue the following security goals:

  • Robustness: the system should be difficult to penetrate and compromise (“make initial compromise of the system difficult”);

  • Resilience: the system should be designed to minimize adversarial impact so that it can effectively and quickly recover and survive this impact (“limit the impact of any compromise”);

  • Agility: the system should be flexible to quickly adapt to changes in the environment so that it is difficult to disrupt (“make disruption of the system difficult”);

  • Traceability: the system should allow for quick and efficient detection of compromises (“make detection of a compromise easy”).

We can easily see how, in theory, or in an ideal world, reaching, or at least aiming to reach, all four goals should be the purpose of any cybersecurity architecture. Yet, several practitioners have highlighted the impossibility of accomplishing all these goals simultaneously, as (i) they appear to be mutually exclusive, and (ii) they may pose serious cost limitation problems if businesses try to achieve them all at the same time [3, 4]. There are structured mathematical arguments which could be applied to prove that the four goals cannot fully overlap. However, let us follow a simple logical argument.

In the real world, of all the four goals, the hardest to achieve is Robustness. You might use the most sophisticated technology to try and stop the adversaries in their tracks; you might make it really hard to penetrate the system and, that way, seriously increase the time needed by adversaries to compromise your business. Yet, if you have something really valuable to steal, it does not matter how thick the wall is which separates you from the cybercriminals. If the adversaries understand that there is a highly monetizable asset behind this wall, they will just get better and find a way to drill a hole in that wall, dig a tunnel underneath it, or go around the wall and avoid it altogether. In other words, stronger, bigger, more sophisticated barriers will only attract smarter, better, and more experienced adversaries. Despite this, billions of dollars, pounds, euros, etc. are spent every year trying to make systems robust to attacks. And yet, all this investment is not making systems secure, because on a daily basis we hear about new cases of highly reputable businesses being compromised. You might wonder why. In some sense, it is clear what we want to do: let the cybercriminals in but then throw them completely off track (away from valuable information or assets), compromise their virtual private networks (VPNs), collect forensic evidence about them, lead them to something completely useless and worthless, and make them give up, thereby rendering your business unattractive to them as a target. It is true that some companies try to engage in such activities; however, the overwhelming majority of businesses concentrate on Robustness.

Philosophy offers some possible answers to this. Security is a social construct and, as a society, we feel much more strongly about some security targets than others. For example, philosopher Jonathan Wolff argues that the UK tends to spend a lot more on railroad security than on motorway security precisely because we would feel incredibly ashamed as a society if a major railroad accident occurred; by contrast, we are used to hearing about motorway accidents and collisions and do not feel the same sense of shame when they happen [5]. Similarly, many businesses feel very strongly about Robustness: imagine a leading bank or financial institution in any country in the world declaring that cybersecurity Robustness is not their top priority. This would lead to a major scandal, as customers and regulators, as well as the general public, would automatically regard this organization as being reckless with cybersecurity. The reputational damage would be devastating. However, despite all the declarations about Robustness, none of the systems operated by banks or financial institutions in the modern world are “unbreakable”, in the sense that under certain circumstances, with a certain menu of available resources, cybercriminals may succeed in compromising even the most sophisticated systems.

Recently, practitioners have called for a shift in the architectural paradigm: concentrating on Resilience rather than Robustness [3, 4]. Indeed, it is extremely important to make sure that the impact of an adversarial act is minimized and that all systems are back up and running as quickly as possible. If we agree that it is impossible, or almost impossible, to prevent an adversarial event from happening, then it is clear that Resilience is a lot easier to achieve than Robustness. There are many ways in which Resilience can be accomplished. One is to keep important data assets and records backed up on different, independent platforms. For example, it is no secret that the National Health Service (NHS)—a publicly funded healthcare service provider in the UK—has been compromised by cybercriminals many times. Yet, despite all these attacks and leaked records, the system has been relatively quick to recover. The reason for this is that the NHS runs a large number of different platforms in parallel, where copies of recorded data exist completely independently. That way, if one part of the system (one platform) is compromised, another can quickly provide a copy of missing or maliciously encrypted data. The NHS runs branches of its systems powered by the Microsoft Windows platform and Linux Ubuntu (i.e., NHSubuntu),Footnote 4 as well as usually keeping printed copies of all patient records. Therefore, despite not being the most secure organization in the world in terms of Robustness, the NHS is reasonably resilient, as multiple backups allow it to recover after an adversarial impact.

Another way is to get the information into the public domain quickly. Think of Adobe. Adobe was hacked on multiple occasions, but every time the software giant was honest about the attacks and quickly released the information to customers, urging them to change their passwords. In 2013, over 38 million Adobe accounts were compromised, yet Adobe quickly released the information it had about the breach, even though the company initially thought that 2.9 million customers had been affected.Footnote 5 Nevertheless, the company recommended that all customers change their passwords, not only for Adobe but also on other websites where they used the same login details.Footnote 6 Adobe recovered from the breach relatively quickly.

Yet, it is worth noting that this path might not be optimal for all companies. Many factors influence this, one of which is whether security is essential to the company’s core value proposition to the customer. For example, if hackers go after McDonald’s and steal a handful of credit card numbers, McDonald’s will suffer. Yet, it will not suffer as much as, say, PayPal in a similar situation. The reason for this is that McDonald’s main value proposition is to feed customers quickly, whereas PayPal’s value proposition is to offer a secure and fast way of making electronic payments. This is why sometimes we are surprised when we go to https://haveibeenpwned.com/ (a website which allows you to check whether any of your current email accounts have been compromised) and discover that some of our emails have been subject to a data breach even though no public announcement was made about it. The EU GDPR now requires all businesses dealing with EU customers to release any breach information into the public domain as soon as it becomes available to the company. Nevertheless, we have yet to see how this regulation is going to be enforced and whether it will lead to higher numbers of reported breaches.

Agility, or flexibility, of the system is also a goal which is hard to achieve, though, again, probably not as hard as attaining Robustness. The main issue with Agility is that there is considerable heterogeneity in understanding what it might mean for different systems. If we consider Agility as the ability of the cybersecurity system to adjust to the environment, then the main challenge Agility poses to architectural design is similar to the challenge which customization or personalization poses to many products and services if the underlying flexibility needs to be achieved purely by technical means. It is possible, for example, to make a car fully customizable or even personalizable, yet there is a trade-off between customization and the cost of the car: i.e., the more customizable it is, the more expensive it will be to produce. Likewise, to make a cybersecurity architecture Agile using technical means, one has to consider a long list of possible threats, vulnerabilities, and risks and make sure that the system is ready to face them. All these actions are costly if businesses aim to achieve them using technology. Yet, it is possible to decrease the costs if human-centric measures are applied. We will consider human-based solutions below.

Finally, Traceability is something many organizations struggle with. Recall Google’s 2018 announcement about the closure of its Google+ service. The vulnerability in Google+ was only discovered by Google in 2018 but could have been there since the service’s launch date (June 28, 2011). Even though Google stated that it had no reason to believe that the discovered vulnerability had been exploited by adversaries, the uncomfortable truth is that there is no way to know for sure whether someone was able to get in and out of the system undetected before the vulnerability was discovered. This example illustrates that even the most sophisticated companies with the best cybersecurity architects cannot avoid unexplored vulnerabilities and, in many cases, will not see that something has gone wrong until it is too late. Usually, the more sophisticated the cybersecurity system is, the more difficult it is to spot problems, as there are too many places where they may appear.

So, it appears that each of the goals in isolation is hard to achieve. However, the real challenge is to make sure that these goals can be combined. Robustness seems to be in conflict with Resilience and Agility; Traceability with Agility; and so on [3, 4].Footnote 7 This means that achieving all four goals simultaneously is utopian, even if two of the four could coexist harmoniously in the same architecture.

Frameworks and architectures are supported by risk management tools. Any risk management exercise starts with risk assessment: understanding, estimating, and measuring the probability of potential harm from various threats. Risk assessment can be conducted using qualitative and/or quantitative methodology. Qualitative methodology allows us to assess potential cybersecurity risks using interviews, discussions, focus groups, or other non-numerical methods, such as considering the strengths, weaknesses, opportunities, and threats (SWOT) of the cybersecurity system. There are also numerous quantitative methods for assessing risk. Since there is a large literature dealing with risk assessment tools and algorithms [6,7,8], we will provide several examples for illustration only. For example, a hazard model could be used to generate a vulnerability-versus-risk curve which maps loss against the temporal probability of adverse events. Where path-dependence of adverse events is common (i.e., when one adverse event is likely to lead to another), event-tree analysis is usually used. The main attraction of this method is that it provides a clear mapping of how one adverse event is linked to another and, through this associative structure, allows us to calculate the probability of each subsequent adverse event. The Risk Indicator-based approach allows us to map multiple indicators related to threats, exposures, vulnerabilities, and capacities of the organization and then use scoring and weighting procedures to obtain a Threat Index and a Vulnerability Index. The overlap between the two indexes yields the Risk Index, which is then used to formulate responses and design solutions. The most commonly used and popular approach, however, is the Risk Matrix approach. If your business has a risk assessment or risk management unit, it is highly likely that you have previously seen and worked with risk matrices.
The matrix maps the frequency of adverse events against their impact or consequences. Both frequency and impact usually comprise three (Low = Green; Moderate = Yellow; High = Red) to five (None = White; Low = Green; Moderate = Yellow; High = Orange; Very High = Red) color-coded levels, offering a useful tool which allows us to clearly systematize risks and prioritize responses. By looking at the risk matrix, you usually know that a system is risky if the majority of events are red, and mostly safe if the majority of events are green.
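Two of the quantitative tools above can be sketched in a few lines of code. The five-level scale, the combination rule (taking the worse of the two ratings), and the example branch probabilities below are our own illustrative assumptions, not a standard: real matrices and event trees are calibrated per organization.

```python
# Illustrative sketch of two risk assessment tools discussed above.
# The "take the worse rating" rule and example numbers are assumptions.

LEVELS = ["None", "Low", "Moderate", "High", "Very High"]
COLORS = {"None": "White", "Low": "Green", "Moderate": "Yellow",
          "High": "Orange", "Very High": "Red"}

def matrix_cell(frequency: str, impact: str) -> str:
    """Color-code one cell of a 5x5 risk matrix.

    Here a cell simply takes the worse of the frequency and impact
    ratings; many organizations use hand-tuned lookup tables instead.
    """
    worse = max(LEVELS.index(frequency), LEVELS.index(impact))
    return COLORS[LEVELS[worse]]

def path_probability(branch_probs: list[float]) -> float:
    """Event-tree analysis: the probability of a chain of adverse
    events is the product of the conditional branch probabilities."""
    p = 1.0
    for branch in branch_probs:
        p *= branch
    return p

# Hypothetical chain: phishing email opened (0.3), credentials
# entered (0.2), lateral movement succeeds (0.5).
chain = path_probability([0.3, 0.2, 0.5])   # roughly 0.03
cell = matrix_cell("Low", "Very High")      # rare but severe -> "Red"
```

Note the asymmetry this encodes: a rare event with severe impact still lands in a red cell, which is exactly why risk matrices help prioritize responses rather than merely count incidents.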

Yet, the main issue with quantitative risk assessment is that historical or live data on risks is not readily available. There are many reasons for this: data on cybersecurity is not centrally accumulated; information about cyber risks is often not shared between organizations, or even within the same organization; and even when it is possible to confirm that an adverse event has taken place, its consequences may remain unknown or uncertain. Therefore, measurement of risk using traditional quantitative risk assessment is not straightforward. This leads to a situation where traditional quantitative risk assessment techniques offer little help to cybersecurity architects. We are, of course, not trying to suggest that these techniques are useless; however, without reliable and accurate historical data, it is hard to expect the outcome of these techniques to yield valuable predictions. In later chapters, we look at alternatives to traditional risk assessment and risk management tools which help mitigate at least some of the problems we face when trying to measure and quantify cybersecurity risk.

As we explained above, modern cybersecurity canvas solutions incorporate both technology-driven and human-centered approaches to identify a set of tools, methods, and solutions for tackling potential adversarial impact. Let us look closer at those solutions.

Technology-Driven Approach

“Patching with technology” [1, 2] represents, to date, the most widespread way of dealing with cybersecurity threats. A cybersecurity “patch” refers to a set or series of technological measures, changes or alterations of programming code, supporting data, underlying algorithms, or programming system logic aimed at improving the system’s security, fixing existing bugs, addressing vulnerabilities, and updating defense mechanisms. The Technology-driven approach can be split into two main subcategories: Reactive and Active. Reactive approaches deal with how to react to a particular threat and how to prevent threats from materializing by designing robust systems which are difficult to infiltrate; whereas Active approaches address the design of mechanisms which allow us to anticipate cybersecurity risk, effectively detect attacks, and mislead and catch adversaries.

Reactive technological tools currently represent the main frontier of cyberdefense. Since most businesses outsource their cybersecurity to a range of large digital giants or smaller, specialized cybersecurity companies, any discovered gap in the system (i.e., vulnerability), whether exploited or unexploited, is usually fixed by applying a technical “patch”. For example, such “patches” are applied by Microsoft when companies spot loopholes in Microsoft Office or email systems. While technological patches are very effective, the main issue is that they cannot be applied before the vulnerability is discovered. Therefore, in many cases, patches are applied after the harm has already materialized.

Firewalls are another popular security measure. In contemporary network security, firewalls usually represent the first line of defense, as they separate networks with restricted access and valuable information or data from publicly accessible cyber spaces. While in the past it was rather obvious where firewalls were located in cybersecurity architectures, contemporary systems allow for the use of firewalls which are not connected to Internet-powered networks (i.e., “invisible” on the Internet). This creates an illusion that such firewalls are “unbreakable” or impossible to attack. It is certainly true that measures such as taking firewalls off the Internet make systems more difficult to spot and compromise. Yet, it is important to remember that even the most sophisticated firewall, even if it is invisible on the Internet, offers only temporary protection. In other words, while a firewall can definitely slow down a motivated adversary, if you have something very valuable to steal, it will not stop that adversary.

We were recently called in to consult for a company which suffered an unprecedented breach of a “physical” firewall. While, for confidentiality reasons, we cannot describe the particulars of this company’s cybersecurity architecture, we will nevertheless explain the principle behind this firewall. Imagine that you have a highly secure building with two floors. Each floor operates a separate intranet network (not connected to the outside world), and there is a “physical firewall”, in the sense that the two floors are physically separated from each other and operate independent (unconnected) networks. Therefore, in order to infiltrate the two networks, one has to physically go to a particular floor and “plug” into the network. The building has very sophisticated entry requirements and both floors are filmed 24/7. What if we told you that, despite all these precautions, it was possible for adversaries (who were not malicious insiders) to infiltrate such a system? This is what we mean when we say that a firewall is only a temporary measure capable of slowing down, but not stopping, the adversary.

Antivirus software is also a popular measure which many companies, as well as individuals, believe protects them from malware. Yet, again, considering the level of sophistication with which attacks are currently executed, it is highly unlikely that antivirus software will offer you adequate protection. For example, whereas viruses were previously delivered to personal computers via email attachments, today your computer can be infected simply by accepting a malicious calendar invite sent as part of a spear-phishing campaign.

Multifactor authentication has recently become the new norm. When logging in to your email from a new device for the first time, you are usually asked to verify your identity by typing in a code which is sent to you as a text message on your mobile phone. However, considering that mobile phones can also be infiltrated, and keeping in mind that your mobile device can simply be stolen or hijacked, multifactor authentication does not offer a fully reliable defense. That said, it is hard to disagree that adopting multifactor authentication does add an extra layer of protection.
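App-based one-time codes, a common alternative to SMS codes, illustrate how a second factor works under the hood. The sketch below follows the standard TOTP construction (RFC 6238): a shared secret and the current 30-second time window are fed through HMAC-SHA1 to derive a short-lived code. The secret and timestamp used here are the RFC's published test values, not anything to deploy as-is.

```python
# Sketch of TOTP (time-based one-time password) generation, the scheme
# behind most authenticator apps (RFC 6238). Illustration only.
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = unix_time // step                      # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t = 59 s
print(totp(b"12345678901234567890", 59, digits=8))   # "94287082"
```

Because the code depends only on the shared secret and the clock, it works offline on the phone, which is precisely why stealing or hijacking the device, rather than intercepting the network, becomes the attacker's path of least resistance.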

Technical measures also include (among many others) backups, zero-trust, and device solutions. Backups are generally a good idea. However, it is important to remember that cloud backups can easily be accessed from a compromised device. Therefore, it is important to keep several copies of your data files on an external drive or a CD not connected to the Internet. It is also a good idea to encrypt those offline files. Zero-trust refers to a security model in which any attempt to access an organizational security system is treated as untrustworthy. Zero-trust systems have recently gained momentum due to their “never trust, always verify” principle, which implies multiple checks of access and movement points. Yet, even such systems can be abused, and loopholes for infiltration can be found. Finally, device solutions are also not the best way of dealing with cybersecurity problems. Speaking at a cybersecurity debate at The Alan Turing Institute in September 2018, Cal Leeming, formerly the youngest hacker prosecuted for cybersecurity crimes in the UK (at the age of 12) and currently a cybersecurity consultant, maintained that Chromebooks were relatively secure compared to other devices. It is true that Chromebooks are not very easy to compromise. However, by moving all your files to a Chromebook, (i) you are placing all your security into the hands of Google, and (ii) we have recently seen a live demonstration in which a white-hat hacker infiltrated and extracted valuable data from a Chromebook in a matter of minutes.Footnote 8
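A practical complement to keeping offline copies is verifying that a backup still matches the original before you rely on it. The sketch below builds a manifest of SHA-256 checksums and re-checks it later; the manifest layout and helper names are our own illustrative choices, not a standard tool.

```python
# Sketch: verify backup integrity with a SHA-256 checksum manifest.
# The manifest structure (relative path -> digest) is illustrative.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(backup_dir: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    return {str(p.relative_to(backup_dir)): file_digest(p)
            for p in sorted(backup_dir.rglob("*")) if p.is_file()}

def verify(backup_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return relative paths whose contents changed or disappeared."""
    current = build_manifest(backup_dir)
    return [rel for rel, digest in manifest.items()
            if current.get(rel) != digest]
```

Storing the manifest alongside the offline copy means a later restore can confirm, before anything is trusted, that ransomware or bit rot has not silently altered the backed-up files.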

Active Cyberdefense (ACD) technological solutions, unlike Reactive measures, are usually designed to proactively lure and mislead cybercriminals in order to collect forensic data on them and find out who they are. This is an exciting new direction in cybersecurity. Yet, as cybersecurity expert Pete Cooper puts it, “ACD is not about hacking back”. It is about a systematic approach to understanding the criminal mind [9]. Currently, the most widespread ACD methodology is the creation of a sophisticated net of so-called “smart honeypots”, or traps, on the network which are intended to attract cybercriminals. The honeypots are usually machines on the network which look very attractive to an adversary but do not contain any valuable or interesting information and, most importantly, do not act as a gateway to anything important. By hitting these pre-set targets, cybercriminals waste their time and expose forensic data about themselves, allowing the cybersecurity team to track them and, with luck, even identify them.
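The honeypot idea reduces to a simple invariant: decoy services have no legitimate users, so any interaction with one is, by construction, suspicious and worth logging. The decoy ports, service names, and log fields below are hypothetical; production honeynets are far more elaborate.

```python
# Minimal sketch of honeypot event logging: decoy services have no
# legitimate users, so every touch is recorded as suspicious.
# Decoy ports/names and log fields are hypothetical examples.

DECOYS = {2323: "fake-telnet", 8080: "fake-admin-panel"}  # port -> decoy name

class HoneypotLog:
    def __init__(self) -> None:
        self.events: list[dict] = []

    def record(self, src_ip: str, dst_port: int) -> bool:
        """Log the probe if it touched a decoy; return True on a hit."""
        if dst_port not in DECOYS:
            return False           # real service: not this component's concern
        self.events.append({"src": src_ip, "port": dst_port,
                            "decoy": DECOYS[dst_port]})
        return True

log = HoneypotLog()
log.record("203.0.113.7", 2323)    # attacker probes the fake telnet port: logged
log.record("203.0.113.7", 443)     # traffic to a real service is ignored here
```

Every logged hit is a near-zero-false-positive signal, which is what makes honeypots attractive for the forensic collection described above; the hard part, as the next paragraph notes, is keeping the decoys indistinguishable from real assets.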

There are, however, several issues with this approach. First of all, engaging in ACD is not a route for all businesses. This path requires a great deal of “maturity” (i.e., an understanding of the issue at a strategic level), resources, and technical capability from the organization. Second, recent advances in AI offer cybercriminals a variety of ways to detect honeypots on the system, thereby defeating the whole purpose of setting them up in the first place. Obviously, AI technology is available to both sides, and several savvy businesses have responded by using AI to set up smart honeypot nets. However, like any technological solution, it is only a matter of time before a motivated set of adversaries finds a way to detect and avoid smart honeypots if the method used to set them up is determined by purely algorithmic logic. This is because any system set up purely by mathematical logic can be infiltrated by employing a mathematical counter-logic.

As we can see, technological solutions on their own are unlikely to solve the cybersecurity problems of businesses, as they primarily focus on the Robustness goal, and it is next to impossible to make a system fully robust. It is highly likely that, despite all the technology available to an organization, sooner or later highly motivated adversaries will find their way in. Under these circumstances, it is important to shift the cybersecurity paradigm from “patching with technology” to “patching with people” [1, 2].

Human-Centered Approach

“Patching with people” [1, 2], a term coined by Debi Ashenden, Professor of Cybersecurity at the University of Portsmouth, incorporates a set of human-centered solutions which ensure the Agility and Traceability of the system and support Resilience in a way that technological solutions cannot. Businesses often talk about “the human firewall”, referring to the ability of human beings within organizations to effectively detect, report, and alleviate major cybersecurity risks. Yet, the mechanisms for creating such human solutions are not in place in the majority of organizations. There are several reasons for this. It is obvious that, in circumstances where many companies outsource their cybersecurity, there exists a rather large bias towards technological fixes as opposed to human-centered solutions. Cybersecurity is perceived within organizations as an incredibly “dry” topic requiring significant technical expertise. At the same time, the overwhelming majority of cybersecurity companies offer technological or algorithmic solutions which seem to offer a convincing panacea for all cybersecurity troubles. Yet, as our earlier quote from Robot and Frank suggests, the “perception of security” which these solutions create does not equate to actual security. No matter how good your cybersecurity consultant is, and no matter how awesome their product, unless you see cybersecurity issues as hybrid (anthropotechnological) problems, this product is unlikely to save you from a harmful attack.

Looking back at the UK 2018 Cyber Security Breaches Survey (see footnote 9), we also spot a rather alarming paradox. Even though over 70% of surveyed organizations in the UK admit that cybersecurity issues are their top priority, only 20% of them provide cybersecurity training for staff. This is incredibly low considering our earlier conjecture that social engineering is widely used by cybercriminals to compromise organizational cyber systems. Even though efforts have been made to develop new human-centered solutions, they are still very scattered and, in many cases, counterproductive. Let us consider several examples.

Staff compliance with the cybersecurity policies of the company is probably one of the most common ways in which human-related cybersecurity issues are addressed within businesses. Many companies have a set of rules with regard to their computer and data systems. For example, while some companies are very relaxed about taking data outside the company premises, allowing employees to work from home, others are extremely cautious about this, requiring staff to use dedicated internal drives or internal clouds for all operations with data; whereas some businesses allow free unencrypted communication, others heavily restrict the way in which communication happens and even insist on encrypting all externally facing emails. With respect to compliance, three aspects are particularly worth mentioning: policies with regard to device ownership, USB port usage, and passwords.

With regard to devices, businesses tend to operate one of two models: a “bring your own device” or an “in-house device” policy. “Bring your own device” implies that company employees are allowed to bring and use their own devices (laptops, iPads, etc.) to fulfill their duties; whereas an “in-house device” policy refers to a situation where each employee has to use devices and systems provided by the company. There are pros and cons associated with each model. Generally, an “in-house device” policy serves the Robustness goal, whereas “bring your own device” is more focused on Resilience. Specifically, many large organizations (with several notable exceptions in the technology industry) prefer to have a single supplier of personal computers and limit themselves to one operating system (say, Microsoft Windows). This is primarily due to the fact that the majority of employees are not technically savvy and rely on IT services to manage and fix problems should any arise: be it an issue with a particular PC or with the digital security of the organizational system as a whole. This is why such systems are built with technical solutions in mind and are designed to minimize human interaction with the fragile segments of the system’s elements and networks.

At the same time, the “bring your own device” policy is often exercised in smaller and more technically aware organizations. It is true that individual devices might have a higher probability of being compromised in those organizations compared to “in-house device” organizations; however, the fact that different employees operate not only different systems, but also different versions of those systems, makes it very difficult for adversaries to infiltrate such organizations. For example, many research institutes allow staff to bring their own devices to work. Under these circumstances, you are likely to see people coming to the office with different devices and working in a wide variety of operating systems: e.g., HP computers may coexist with Macs on the same office floor and run not only different versions of Windows and macOS but also different distributions of Linux. Therefore, even if adversaries compromise all users of Windows, users of other operating systems are likely to withstand the attack and the organization will be able to recover quickly. Yet, of course, “bring your own device” creates issues in day-to-day systems management as it requires an IT department capable of working with multiple systems and devices.

USB port usage represents another interesting aspect of compliance policy. The nature of academic work implies that we give invited talks in many different organizations throughout the year. Even when those organizations position themselves as tough on cybersecurity, very often they ask external speakers to bring presentations on USB sticks. In fact, of all the organizations where we have given external talks within the last 12 months, only three required us to send presentations to the organizers by email in advance. This simple example is extremely characteristic of a major gap in the compliance policies of many organizations: even though it is clear that a major cybersecurity threat could be delivered to the system via an infected USB stick, employees are often left in charge of their own USB ports as well as the USB ports of network computers placed in various parts of their office spaces.

However, internal policies with regard to passwords often supersede all other policies in terms of precision and severity. And yet, these policies often lead to unintended consequences. If you recall the passwords we used for various online accounts in the 1990s, you will remember that there were no particular restrictions on them. They could have consisted of letters only, or numbers only, and their length was not regulated. Now think of your email password at work. It probably has to contain no fewer than eight characters, with both uppercase and lowercase letters, and at least one number and one special character. Considering that the system also prohibits the use of your name or common word phrases, all this makes it very difficult for any individual to remember the password. Another major problem for many of us is that we have to change our password regularly to keep accessing the systems (it is a requirement in most organizations to change email passwords at least every three months). Karen Renaud (Professor of Cybersecurity at Abertay University, Professor Extraordinarius at the University of South Africa, Fulbright Scholar, and Honorary Research Fellow at the University of Glasgow), who has conducted many experiments and written extensively on this subject, argues that current policies leave many of us with no choice but to write down passwords because we, as humans, have a limited capacity to memorize letter and digit sequences of the form required by current cybersecurity systems [10, 11]. Our own experience and the anecdotal evidence we collected in preparation for this book tell us that Professor Renaud is right. We have recently come across a CEO of one of the largest corporations in the UK who kept his multiple passwords on a sheet of paper attached to the desk lamp in his office.
We have also talked to a reputable cybersecurity professor from a major US university who told us that, for one of his projects, he has to use a different password for a data storage cloud every week throughout the year. His team of research assistants needs frequent access to the cloud and it is next to impossible to memorize the 52 complicated passwords required for the whole calendar year (as there are 52 weeks in a year). Furthermore, even if one could memorize them, it would be very easy to confuse the week in which a particular password should be used. Therefore, the professor in question has no choice but to print off those passwords on a sheet of paper and attach the list to the bottom of his desk drawer… Clearly, this is hardly the smartest way to handle security. Apart from writing down our passwords, another strategy which many of us use when we need to set up a complex password is making the browser remember that password. This is also far from optimal.
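The complexity rules described above (at least eight characters, mixed case, a digit, a special character) can be captured in a few lines of code. The sketch below is purely illustrative of a typical corporate policy; the function name and the exact rule set are our own assumptions, not any specific organization's policy.

```python
import string

def meets_policy(password: str) -> bool:
    """Check a password against a typical corporate complexity policy:
    at least 8 characters, with both uppercase and lowercase letters,
    at least one digit, and at least one special character."""
    return (
        len(password) >= 8
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )
```

Note how easily a memorable password fails such a check: the rules optimize for resistance to guessing, not for human memory, which is precisely the tension Renaud's work highlights.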

Obviously, the fact that we write down passwords or make the browser remember them in many ways undoes all the good created by complicated and hard-to-break passwords in the first place. Karen Renaud tested several solutions to the problem. For example, in an experimental study, she showed that users can be incentivized to create more sophisticated passwords. When setting up a more complex password was associated with a longer delay before a replacement was required, users were more likely to set up harder-to-break passwords [10]. Their efforts to come up with a more sophisticated password were rewarded by not having to think of another one for the next six months. Yet, as we have already seen above, this does not mean that users were necessarily “safer” in terms of their password behavior, as they could still have used their browser to remember the new passwords.

In another study, Karen Renaud showed that users who have a hard time setting and remembering passwords are better off using password managers [11]. Indeed, password managers are very useful tools. They allow a user to set up and remember only one “master” password while keeping the passwords for all accounts encrypted. They even have built-in random password generators. By using a password manager, you can bring all online accounts requiring passwords into a single system, create sophisticated 16-character passwords for every login, and simply retrieve those passwords by typing in the “master” password instead of the account password every time you need to log in. Of course, like any technical solution, password managers are not unbreakable. Yet, using a password manager is indeed a lot better than trying to remember multiple passwords, writing them down, or even asking the browser to remember them (see footnote 10).
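The built-in random password generators mentioned above are conceptually simple. The sketch below shows the idea using Python's standard `secrets` module (a cryptographically secure random source); it is an illustrative sketch, not the implementation of any particular password manager.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password of the kind a password manager would
    create: each character drawn independently, via a cryptographically
    secure source, from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 16-character password drawn from this 94-symbol alphabet has far more entropy than anything a human would invent and memorize, which is exactly why delegating both generation and recall to the manager works.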

Stick, rather than carrot, measures are usually employed when businesses apply human-centered strategies in cybersecurity. In many large organizations we have spoken to in preparation for this book, IT departments exercise the practice of creating benign bogus attacks (most often phishing attacks) and trialing these attacks on their staff. There are also rather severe punishments in place for those who fail to spot the threat or attack, to the extent that those members of staff who do not perform well in the test lose some functionality of their computers. For example, we have talked to a PA in one large organization whose job was to schedule meetings, prepare documents, and liaise externally and internally on behalf of her manager. This PA failed to spot a fake phishing attack two times in a row and, as a result, faced internal IT sanctions: she lost the ability to send any emails for two weeks. Can you imagine the effect these sanctions had on this person’s productivity (as she had to rely entirely on telephone communication to fulfill her duties) and, more importantly, on her psychological ability to treat her organization as a trusted entity? We also talked to a member of staff in a large corporation who revealed that monetary fines are imposed for doing poorly in cybersecurity “surprise” tests in his organization and that he once paid out over a third of his monthly salary in related penalties.

All these examples illustrate an important trend: many companies seem to be using negative reinforcement mechanisms in order to incentivize their staff to pay attention to potential threats. Yet, very often this leads to negative results: people become overly suspicious, panic over simple emails, and fail to constructively address potential problems because they do not know how they will be viewed by management. Such measures also tend to erode trust. Many corporate employees we have talked to about cybersecurity issues told us that they were unlikely to inform their IT department of a potential problem (e.g., if they clicked on a link in a malicious email) simply because they thought this might have a negative impact on their promotion opportunities, salary, reputation, etc. Obviously, by introducing such measures, organizations often underestimate the potential consequences of negative reinforcement: their goal is to expose their staff to potential threats and help them gain experience in recognizing risks, yet it appears that, instead of gaining experience, employees tend to become anxious about cyber issues when negative reinforcement is used. It would seem that positive reinforcement (bonuses, praise, etc.) would work a lot better than punishment where staff training is concerned, as it would allow organizations to encourage staff to participate in cybersecurity tasks while avoiding unnecessary blame and negative feelings. We will come back to this point later when we discuss future human-centered cybersecurity solutions.

Overloading people with cybersecurity information is another major problem for many organizations. It seems that many businesses believe that the more information they provide to their staff and customers, the less human-related risk their cybersecurity system will face. Unfortunately, systematic measurements reveal that too much information about cybersecurity produces completely the opposite result. People who are overwhelmed with cybersecurity information become more risk-taking in cyberspace [12]. We believe this might be because the constant reminder of potential risks makes them overconfident or overoptimistic in this space, leading to a situation where they fail to spot rather obvious threats.

The concept of overconfidence was introduced by Ola Svenson, a psychophysicist from Sweden, who conducted research on road safety in the 1980s. Svenson noticed that the overwhelming majority of car drivers in his survey sample believed that their driving ability was above average, while, statistically, this could be true for only half of his sample [13]. He concluded, therefore, that people tended to exhibit overconfidence about their relative ability and to underestimate the abilities of others. We seem to observe a similar phenomenon in cybersecurity: people who receive large amounts of information about cyber risks and cyberdefense, and who operate in highly regulated environments, tend to significantly overestimate their ability to detect and avoid cybersecurity threats [12]. This, in turn, leads to risk-taking rather than risk-averse behavior in cyberspace.

Gamification is currently one of the in-vogue “patching with people” measures and seems to be gaining momentum in many different industries. Many cybersecurity vendors offer interfaces where company staff can engage in cyber risk detection exercises and earn points. Some companies use those points as “citizenship” tokens, which could then either be publicized or used as a basis for promotion assessment. Although we have not heard examples of this in interviews, in principle, gamification of the space could also be used to offer monetary rewards or annual bonuses to staff.

There is, however, a major caveat which one has to remember about gamification. It is incredibly difficult to keep the momentum going where games are concerned. Remember how almost the entire planet was excited about Pokémon Go? On the day of its release, millions of people were out wandering the streets in search of the cute artificial characters. Yet, very soon the enthusiasm died down and, apart from a handful of fans and enthusiasts, it is impossible to imagine anyone being interested in catching Pokémon now. The same is true for most games, including cybersecurity games: unless these games are updated and changed regularly, it is hard to see them succeeding as long-term measures. Another problem with gamification is making sure that the incentives to play games during work time do not outweigh the incentives to work. It is important to remember that every member of staff has a set of duties and, while we know from research that short distractions during the working day boost productivity, too many distractions may be detrimental to it [14,15,16].

Conversation is another promising direction for the human-centered approach. Debi Ashenden’s Centre for Research and Evidence on Security Threats (see footnote 11) at the University of Portsmouth pioneered the technique of the Cybersecurity Conversation. Researchers from the Centre noticed that people at different layers of the organizational structure (who may be working in completely different areas than IT) felt they had a lot of ideas to contribute to enhancing cybersecurity in their organization. Yet, they were never asked to contribute to the conversation about cybersecurity. By facilitating these useful discussions, the Centre showed the value of listening to different opinions about cybersecurity within the organization. By creating an open and inclusive environment, conversations often lead to “out-of-the-box” solutions and allow staff to exercise their digital and physical citizenship within organizations.