Cybersecurity in the grey zone

Discussions of ‘digital war’ quickly resolve to an understanding that global digital connectivity has disturbed our orthodox understandings of war and peace. In the so-called grey zone (Wirtz 2017) below the threshold of armed conflict, a diverse and fluid cast of actors attempts to derive political, strategic and economic gain through the ever-proliferating assemblage of information communications technologies. Even if it is a step too far to label many of these activities—political activism, crime, espionage, subversion—as ‘war’ or ‘war-like’ (Levinson, this volume), it is not unreasonable to characterise the wider environment of cybersecurity as one of persistent grey-zone conflict (Corn 2019, 347–354). At the strategic level, for instance, suitably equipped states view ‘cyber’ as a coercive tool worth exploiting for national gain, be it in military operations, intelligence-gathering or commercial cyberespionage (Brantly 2018; Valeriano et al. 2018). As most of this occurs below the level of conventional conflict, it is seen as a way of limiting conflict escalation, albeit always with the potential to generate just the opposite (Brands 2016; Schneider 2019).

The twists and turns of operations in this digital domain may be surprising and their effects—inasmuch as causality can ever be demonstrated—remarkable, but the vectors of their propagation are familiar, omnipresent and quotidian, including the global internet and the myriad consumer electronics that share and shape our lives. So too the practices of cybersecurity, which, for all the justifiable interest in ‘cyberwarfare’ and ‘cyberespionage’, constitute the ‘everyday’ of digital conflict in the grey zone. In the daily exchanges of technical cybersecurity, there is a constant to-and-fro between network defenders and attackers. Cybersecurity practitioners tend to wring the most out of existing technologies to wrest the upper hand from their adversaries, whether the humblest hacker or the best-resourced state entity. Inevitably, however, they also seek to employ innovative technologies as the environment evolves. In tune with the zeitgeist, artificial intelligence and machine learning (hereafter, AI) are proposed as an essential ‘solution’ to all manner of cybersecurity threats.

The following discussion addresses the identification of cyber threats through AI and algorithmic means and demonstrates a shift from signature- to anomaly-based approaches to threat detection and mitigation. It then moves to the specific application of AI to anomaly detection, particularly in relation to the construction of ‘normal’ organisational ‘patterns of life’. The epistemic implications of this technical orientation are then unpacked, specifically how AI creates new sites of knowledge production through new human–nonhuman subjectivities. This brief exploration concludes by looking beyond everyday cybersecurity to military and intelligence ‘cyber’ and asks what is at stake in the automation of cybersecurity.

From signatures to anomalies

Cybersecurity cannot proceed without data. Access to and protection of data are core practical considerations for cybersecurity professionals. As more computing devices become networked in the ‘Internet of Things’ and data volumes—at rest and in transit—continue to grow, the ‘attack surface’ of the global information environment expands continuously. Understood as the totality of weak points exploitable by nefarious actors, the attack surface is less a physical boundary to be defended than a logical membrane of potential vulnerability distributed in space and time. Preventing exploitation of flaws in this integument, through which malicious code can be inserted or valuable data extracted, is a near-impossible task, not least as vulnerabilities are inherent in information systems. As Libicki (2009, 14) notes, there is ‘no forced entry in cyberspace. Whoever gets in enters through pathways produced by the system itself’. This ontological insecurity is compounded by the sheer diversity and growth of this environment, so that defence can never be perfect and attackers seem always to be one step ahead. Cybersecurity is about making visible those vulnerabilities and threats in order to counter them, an informational challenge that relies on timely and intelligible data for interpretation and effective use in prevention and remediation.

The collection and filtering of data about the status of information systems and threats to their intended functioning have long been largely automated. Humans do not have the cognitive or sensory capacity to cope with the enormous data volumes produced by software and hardware dedicated to alerting systems administrators to problems inside and outside their networks. Add to this a human capital shortfall in the cybersecurity industry (Shires 2018) and automation is a reasonable technical fix for some of these problems. Cybersecurity vendors, for example, offer thousands of different products for automated malware detection. These software packages inspect network traffic and match data packets to known signatures of malicious software (malware) that installs itself on users’ machines and acts in deleterious ways. Now much more than just ‘anti-virus’ protection, these software packages also detect and deny worms, Trojans, spyware, adware, ransomware, rootkits and other software entities searching for and exploiting digital vulnerabilities. With detection automated in this fashion, malware can be repelled, quarantined, destroyed or otherwise prevented from infecting a system and opening it up for exploitation. Effective as this has been in the past, new types of polymorphic and metamorphic malware, which change their ‘appearance’ every few seconds, can evade traditional techniques of ‘signature-based’ detection that rely on existing knowledge of malware species. Defenders cannot counter these rapidly mutating threats if they match no signatures in databases that are inevitably out of date.
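
A minimal sketch illustrates the logic of signature matching and why polymorphism defeats it; the hash database and sample bytes below are invented placeholders, not any vendor’s product.

```python
# Minimal sketch of signature-based detection, assuming a hypothetical database
# of hashes of previously analysed malware. Real products also use byte-pattern
# and heuristic signatures, but the underlying logic is the same.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical 'signature database': digests of samples already known to be malicious.
known_sample = b"...bytes of a previously captured malware sample..."  # placeholder
SIGNATURE_DB = {sha256(known_sample)}

def is_known_malware(payload: bytes) -> bool:
    """Flag a payload only if its digest exactly matches a stored signature."""
    return sha256(payload) in SIGNATURE_DB

print(is_known_malware(known_sample))            # True: exact match with the database
print(is_known_malware(known_sample + b"\x00"))  # False: a one-byte mutation evades the signature
```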

Signature-based detection is still effective against known malware, but technical cybersecurity is developing ‘anomaly-based’ capabilities that compare network traffic to continuously revised baselines of ‘normal’ behaviour to detect intrusions, analyse malware and detect spam and phishing attacks. Traffic classified as legitimate against a given baseline is permitted across network boundaries, but unusual behaviours will be detected automatically and flagged for further investigation, including by human analysts. Yet, as the number of alerts triggered increases, many requiring human attention, we are thrown back again on the limits of organisational and cognitive capacity. This situation is exacerbated by the persistent creation of false negatives (bad behaviour treated as good) and false positives (good behaviour labelled as bad), which further drain resources (Libicki et al. 2015, 28–29). One response has been to add a new layer of automation to the parsing and analysis of these ‘big data’ sets, using emerging technologies of AI to process and interpret anomalous behaviour and feed it back into cybersecurity decision-making processes in ever-shortening cycles of action and reaction. Cybersecurity and AI have not always been obvious bedfellows. In the early days of each field, computer security researchers viewed with some distrust the relative unpredictability of AI system behaviours, their multiple possible outputs anathema to the order sought by software engineers (Landwehr 2007). Now, the possibility that AI can learn and adapt in dynamic ways makes it more attractive for countering similarly adaptive cybersecurity threats.
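
The anomaly-based alternative can be sketched, under simplifying assumptions, as a learned statistical baseline against which new observations are scored; the traffic figures, single feature and threshold used here are illustrative only.

```python
# Illustrative sketch of anomaly-based detection: learn a baseline of 'normal'
# traffic volume, then flag observations that deviate strongly from it.
# Real systems model many features at once; the figures here are invented.
from statistics import mean, stdev

def fit_baseline(history: list) -> tuple:
    """Summarise a window of 'normal' observations as mean and standard deviation."""
    return mean(history), stdev(history)

def is_anomalous(value: float, baseline: tuple, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Bytes transferred per minute across a network boundary during a training window.
normal_traffic = [980, 1010, 1005, 995, 990, 1020, 1000, 985]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(1008, baseline))  # False: consistent with the learned baseline
print(is_anomalous(9500, baseline))  # True: escalated for investigation (or a false positive)
```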

The promise of AI

The use of AI in cybersecurity is not entirely new. If we understand ‘AI’ as a placeholder for a set of algorithmic machine learning heuristics rather than a new form of ‘intelligence’, cybersecurity has been alive to its potential since at least the 1990s. Early applications include the filtering of spam email using Bayesian logic and the classification of spam by large neural networks (Brunton 2013). AI has increased significantly in sophistication since then and its development has partly been occasioned by adversaries’ use of those same technologies. Learning algorithms fuel the evolution of the aforementioned polymorphic and metamorphic malware, for instance. They have also been created to detect vulnerabilities in target systems and assist in their exploitation and subversion: the cybersecurity industry is deeply worried by the future deployment of ‘adversarial’ or ‘offensive’ AI, a concern shared with governments and military and intelligence agencies (Brundage et al. 2018). In this respect, AI deployments are part of an emerging ‘arms race’ between attackers and defenders (Scharre 2018, 229–230). This message is not lost on enterprises, 61% of which reported in 2019 that they could not detect network breaches without AI technologies, while 69% believed AI would be necessary to respond to those breaches (Capgemini 2019).
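
As a hedged illustration of the Bayesian logic mentioned above, the toy naive-Bayes filter below classifies messages by comparing word likelihoods learned from a handful of invented examples; production filters train on far larger corpora and richer features.

```python
# Toy naive-Bayes spam filter in the spirit of the early Bayesian approaches
# noted above. Messages and word counts are invented for illustration.
import math
from collections import Counter

spam_msgs = ["win money now", "free money offer", "win a free prize"]
ham_msgs = ["meeting agenda attached", "project status update", "lunch tomorrow"]

def train(messages):
    counts = Counter(word for msg in messages for word in msg.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_msgs)
ham_counts, ham_total = train(ham_msgs)
vocab_size = len(set(spam_counts) | set(ham_counts))

def log_likelihood(message: str, counts: Counter, total: int) -> float:
    # Laplace smoothing so unseen words do not zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + vocab_size)) for w in message.split())

def is_spam(message: str) -> bool:
    # Equal class priors assumed; compare per-class likelihoods.
    return log_likelihood(message, spam_counts, spam_total) > log_likelihood(message, ham_counts, ham_total)

print(is_spam("free money"))     # True
print(is_spam("status update"))  # False
```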

One notable response to this market demand has been the development of cybersecurity AI platforms that claim to mimic natural ‘immune systems’, borrowing from a long analogical association between cybersecurity and notions of public health and epidemiology (Betz and Stevens 2013; Weber 2017). Companies promote anomaly-based AI as a way of learning what ‘normal’ means for an organisation and then beginning to explore unusual behaviours. The tolerance of the AI engine to anomalous behaviours can then be adjusted, based on the risk appetite of the organisation and its resourcing levels (Wilkinson 2019). The ‘immune system’ metaphor plays out in a platform’s putative capacity to detect, root out and inoculate against further suspicious behaviours. In one firm’s account of its ‘Enterprise Immune System’ package, ‘[b]y detecting subtle deviations from the organisation’s “pattern of life”, it can distinguish friend from foe—and highlight true cyber threats or attacks that would otherwise go unnoticed’ (Darktrace n.d.). These vendors claim another benefit for potential clients, in that such platforms free up analysts’ time to focus on more important tasks, instead of being mired in ‘low-priority or benign events from the start’ (Darktrace n.d.). This is a recognition of the frequency of false negatives and positives—although AI promises to reduce their incidence by learning what is ‘normal’ over time—but it is also a legitimate comment on the exigencies of life in modern security operations centres. These are bombarded with security reports whose importance at first glance can be difficult to discern. Automated AI systems can handle the bulk of low-level threat detection and response, allowing analysts to tackle high-level behaviours and incidents instead. This includes developing profiles of the human actors behind sophisticated cyber campaigns, such as so-called advanced persistent threats (APTs), in order to tailor deterrence and response regimes. A further promising avenue of research is to capture analysts’ real-life behaviours and feed this data back into learning algorithms as a way of shaping AI responses to cybersecurity incidents (Bresniker et al. 2019).
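
To make the idea of an adjustable tolerance concrete, the sketch below shows how the same set of scored events yields very different analyst workloads under different risk appetites; the alert class, scores and thresholds are hypothetical and not drawn from any vendor’s platform.

```python
# Hypothetical sketch of tuning an anomaly 'tolerance' to an organisation's
# risk appetite: identical scored events produce different analyst workloads
# at different thresholds.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    anomaly_score: float  # 0.0 (entirely normal) to 1.0 (highly anomalous)

alerts = [
    Alert("printer-07", 0.12),
    Alert("laptop-231", 0.48),
    Alert("hr-file-share", 0.67),
    Alert("db-server-02", 0.91),
]

def escalate(alerts, risk_appetite: float):
    """Handle low-scoring events automatically; escalate the rest to human analysts."""
    return [a for a in alerts if a.anomaly_score >= risk_appetite]

print(len(escalate(alerts, risk_appetite=0.9)))  # 1 alert reaches analysts: risk-tolerant posture
print(len(escalate(alerts, risk_appetite=0.3)))  # 3 alerts reach analysts: risk-averse posture
```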

AI systems learn about the threat environment within and without the notional perimeters of organisations. This includes employees and other people interacting with those networks: clients, customers and, in the case of government agencies, citizens and service-users. Monitoring these individuals allows AI to establish an organisation’s internal rhythms and patterns, a normal baseline against which anomalies can be registered, such as ‘insider threats’ posed by disgruntled employees, leakers and whistle-blowers (Wall 2013). AI-powered behavioural analytics might detect, for instance, unusual employee activities at odd times of the night or unexpected spikes in transfers of sensitive commercial data or intellectual property. Combined with multiple other data streams, these signals build a picture of anomalous behaviour within an organisation’s immediate purview, leading to further investigation and possible censure. Using AI to predict human rather than just machine behaviour is an important contributor to ‘sociotechnical’ framings of cybersecurity that address how technology, users and processes interact in complex organisational systems (Coles-Kemp and Hansen 2017). An uneasy tension exists in this field between designing computer networks that enhance the human security of end-users (Coles-Kemp et al. 2018) and using technology to discern anomalous behaviours, a perspective that tends to perceive all end-users as potential vectors of threat (Schneier 2016).
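
A hedged sketch of such behavioural analytics might look like the following, in which a per-user baseline of login hours and outbound data volume is learned and deviations are flagged for review; the user, figures and thresholds are invented for illustration.

```python
# Hedged sketch of behavioural analytics over a per-user 'pattern of life':
# a baseline of login hours and outbound data volume, with deviations flagged.
from statistics import mean, stdev

# Historical observations per user: (hour of login, megabytes transferred out).
history = {
    "alice": [(9, 40), (10, 55), (9, 35), (11, 60), (10, 50)],
}

def flag_event(user: str, hour: int, mb_out: float) -> list:
    hours = [h for h, _ in history[user]]
    volumes = [v for _, v in history[user]]
    reasons = []
    if not (min(hours) - 2 <= hour <= max(hours) + 2):
        reasons.append("login outside usual hours")
    if mb_out > mean(volumes) + 3 * stdev(volumes):
        reasons.append("unusual outbound data volume")
    return reasons

print(flag_event("alice", hour=10, mb_out=45))  # []: matches the learned pattern of life
print(flag_event("alice", hour=3, mb_out=900))  # both reasons flagged for investigation
```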

AI, anomalies and episteme

The search for anomalies shifts the emphasis of cybersecurity from known threats to the prolepsis of as-yet-unknown threats and into an anticipatory posture that has received much attention in the critical security literature (e.g. Amoore 2013). It also suggests a more inductive approach to the creation of useful cybersecurity knowledge from big data sets. Driven by AI, anomaly-based cybersecurity practices search for patterns, associations and correlations that cannot be identified solely by signature analysis, the latter understood as a form of deductive reasoning. In part, this channels the infamous claim that big data analytics render the hypothetico-deductive method ‘obsolete’ (Anderson 2008). As critics of this position observe, all enquiry, big data analytics included, instantiates any number of theoretical assumptions and biases (Kitchin 2014). In AI-driven machine learning, parameters and data categories are often set in advance by human programmers. This ‘supervised learning’ is very different from ‘unsupervised’ learning techniques, in which algorithms do the labelling and categorisation themselves and might therefore be ‘unbounded from the bias of specific localities’ (Parisi 2019, 98). The quality of training data is therefore key to determining output quality, and supervised machine learning can give rise to outcomes that reproduce programmers’ biases (McDonald 2019). In cybersecurity, the assembling of ‘cyber threat intelligence’ (CTI) from multiple data streams is prone to various biases, as analysts are swayed by groupthink, external social media and reporting, seduction by the power of expertise, and their personal beliefs and prejudices; responsible CTI cannot therefore be the preserve of machines alone (Collier 2019). These everyday decisional acts constitute ‘little security nothings’ that have even greater import in the aggregate than in their individual existence alone (Huysmans 2011).
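
The supervised/unsupervised distinction above can be sketched on toy network-flow features (bytes sent, connection duration); the data are invented, and the two models shown—a supervised logistic regression trained on analyst-supplied labels and an unsupervised isolation forest that decides for itself what counts as anomalous—stand in for the far more elaborate pipelines used in practice.

```python
# Sketch of the supervised/unsupervised distinction on invented network-flow
# features (bytes sent, connection duration in seconds).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

flows = np.array([[500, 2], [520, 3], [480, 2], [510, 2], [50000, 300]])

# Supervised: an analyst has labelled each flow (0 = benign, 1 = malicious),
# so the model inherits whatever assumptions and biases shaped those labels.
labels = np.array([0, 0, 0, 0, 1])
clf = LogisticRegression().fit(flows, labels)
print(clf.predict([[495, 2], [45000, 280]]))       # expected: [0 1]

# Unsupervised: no labels; the algorithm itself decides what counts as anomalous.
detector = IsolationForest(contamination=0.2, random_state=0).fit(flows)
print(detector.predict([[495, 2], [45000, 280]]))  # expected: [ 1 -1] (-1 marks an anomaly)
```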

Furthermore, as Aradau and Blanke (2018) identify in their critique of ‘algorithmic rationalities’, the hunt for anomalies in big data security environments subtly alters the logics of security in three specific ways. Referring to intelligence and counter-terrorism, rather than cybersecurity, they note first how advanced analytics are, as outlined above, a response to the increased data volumes of interest to security practitioners. Anomaly-based analytics are geared, therefore, to the practical requirement for ‘actionable information’, rather than ‘good or truthful information’, which unravels claims to ‘algorithmic objectivity’ and shows ‘how uncertainty is radically embedded within algorithmic reasoning’ (Aradau and Blanke 2018, 20). This is compounded by the lack of standardised data collection and classification procedures, which complicates comparisons between data sets. Second, anomalies are constructed by their divergence from normality, which is defined as similarity between data points rather than predetermined categories. ‘Otherness’ is expressed as ‘dots, spikes or nodes’ in geometric or topological space that are anomalous in their dissimilarity to the ‘normal’ so constructed. Whilst this may seem to avoid the questions of bias previously noted, this unmooring from negative categorisations presents a political problem: how to ‘reconnect techniques of producing dots, spikes and nodes with vocabularies of inequality and discrimination’? (Aradau and Blanke 2018, 20). Third, stable categories—as in statistical analyses—give way to ‘continuous calculations of similarity and dissimilarity’, thereby refusing direct political contestation, particularly as those calculations ‘remain invisible, often even to the data analysts themselves’ (Aradau and Blanke 2018, 21). Hidden within the archetypal ‘black box’ of technology, defined in terms of inputs and outputs only (Von Hilgers 2011), the computation of AI technologies is frequently unknowable and therefore radically opaque (Castelvecchi 2016).

The complexity, uncertainty and lack of transparency of anomaly-based AI technologies in cybersecurity and elsewhere raise questions of AI trust, safety and control. The postulated remedy is either to double down on AI as an engineering solution to these problems, or to figure out precisely where humans should be located in respect of the ‘loop’ of AI decision-making. In the context of lethal autonomous weapons systems and the use of AI in war, the question has become: should humans be in, on or out of this loop? (Jensen et al. 2019; Scharre 2018) In cybersecurity, as in war, analysts and operators are enmeshed in hybrid human–machine assemblages that generate knowledge of particular utility in specific decision-making circumstances. The production of cyber threat intelligence is referred to in terms of hybrid configurations of people, data, machines and organisations, tasked with meeting specific operational and strategic demands (Jasper 2017). Srnicek (2014, 45) captures this dynamic when he analyses the ‘delegation of thought’ to machines in ‘cognitive assemblages’ of sociotechnical composition. In these epistemic loci, heterogeneous human and nonhuman actors, including algorithms, contribute to the creation of expert knowledges, which are ‘collective and distributed rather than individual or solely social’ (Srnicek 2014, 45; Hayles 2016). They establish ‘collaborations of influence’ (Kaufmann 2019) that have agency in how and what kind of knowledge is produced. Are we ‘in the loop’ if our choices are between paths of action relayed by machines? (Weber and Suchman 2016) If unsupervised learning systems develop their own questions and goals, what becomes of the human at all? The traditional mantra that cybersecurity is a ‘team sport’ is better read as a ‘dance of agency’ (Pickering 2010) once this relational process of delegation and complex agency in AI-rich environments takes hold.

Beyond the everyday

The foregoing review has addressed the everyday cybersecurity necessary to maintain the ordinary functioning of the information systems on which daily life depends. AI has an important role in this diverse landscape, automating the mundane, learning from its environment and promoting better protection for users, organisations and societies. However, as alluded to above, the role of algorithms in creating and sustaining specific socioeconomic and political arrangements is never neutral (Kitchin 2017; Berry 2019). AI-driven anomaly detection creates new subjectivities in the grey zone of modern informational conflict and risks sacrificing ‘truth’ for ‘utility’ and ‘convenience’ in the managerial practices of organisational cybersecurity. This radical contingency creates political problems as AI substitutes for human cognition as the arbiter of network decision-making and regulation of data flows. Anomalies may escape easy mapping to established groupings of people and behaviours, but there are multiple examples of the insidious effects of this computational logic. Tools of data manipulation similar to those in AI-fuelled cybersecurity are easily turned to economic and political projects of worrisome scale and impact. From the ‘behavioural futures markets’ of surveillance capitalism (Zuboff 2019) to the Chinese social credit system (Matsakis 2019), from the excesses of domestic surveillance (Bauman et al. 2014) to online information operations in a post-truth era (Woolley and Howard 2018), algorithms constitute new modalities and mediators of political influence and control. Whilst we have few grounds for assuming that AI is inferior to human intelligence (Collins 2019), questions of knowledge, power and subjectivity are intimately bound up with these algorithmic rationalities (Aradau and Blanke 2018).

AI and cybersecurity are also far from neglected in the realms of secret intelligence and military operations. AI is a sine qua non for modern militaries (Lewis 2019) and a potent augmentation to strategic planning (Payne 2018). Without AI in cyberspace, the former head of US Cyber Command remarked in Senate testimony, ‘you are always behind the power curve’ (Congressional Research Service 2019, 11). Militaries and intelligence agencies are investing heavily in AI-enabled cyber, including the US Department of Defense’s new Joint Artificial Intelligence Centre (Corrigan 2019) and numerous other initiatives leveraging AI to generate operational and strategic effect in and through cyberspace. The UK is developing ‘human–machine teaming’ in AI-rich environments including cyber, so as to promote ‘the effective integration of humans, artificial intelligence (AI) and robotics into warfighting systems … that exploit the capabilities of people and technologies to outperform our opponents’ (Ministry of Defence 2018, 39). Known in the USA as ‘centaur warfighters’, these hybrids will ‘leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence’ (Scharre 2018, 321). This continues a long lineage of hybrid military subjectivities (Coker 2013), oriented now to predictive machine cognition providing decision advantage to the new military operations of ‘cognitive manoeuvre’ in global information space (Dear 2019). These hybrid entities are not just network defenders but agents of cyberwarfare and cyberespionage. As the US adopts new doctrines of ‘persistent engagement’ and ‘defending forward’ in cyberspace (Healey 2019), they will be on the front lines of digital warfare and intelligence, with as-yet unknown autonomy in decision-making and target discrimination. In this assemblage, ‘we may never know if a decision is a decision … or if it has been “controlled by previous knowledge” and “programmed”’ (Amoore and de Goede 2008, 180).

Militaries and intelligence agencies join firms, citizens, criminals, hacktivists and others using AI-enabled cyber tools in the complex battlespaces of the grey zone. Technical cybersecurity exploits AI to overcome constraints on human cognition, to speed decision-making and to provide better protection to organisations and end-users of information systems. In so doing, it prioritises operational efficiency over truth and creates new subjectivities and targets of intervention. Algorithms and automation also create new sites of knowledge production whilst simultaneously obfuscating the precise calculations that lead to particular outputs. This draws attention to problematic aspects of the delegation of thought to machines, in that it portends a decrease in opportunities for meaningful oversight and regulation. The stakes could not be higher in military cyber targeting or digital intelligence—strategic cybersecurity—compared with which the use of AI in everyday cybersecurity seems a straightforward proposition. Nevertheless, the role of AI in cybersecurity generally, a core modality of offence–defence dynamics in the grey zone, remains open to contestation, modulating as it does the global flows of data in our nominal ‘information age’. In considering AI in cybersecurity and elsewhere, perhaps we should revisit Lewis Mumford’s (1964, 273) provocation: ‘Unless you have the power to stop an automatic process, you had better not start it’. Where is agency in the new cybersecurity assemblage and who or what makes the decisions that matter?