1 Introduction

Many of individuals’ everyday activities can now be captured, quantified, and processed into data. As a result, organizations increasingly engage with analytics technology – the combination of practices, skills, techniques, and technologies used to develop actionable insights from data [12] – to make work more effective, efficient, and objective [18, 19].

In response to this so-called “data revolution”, a growing body of scholarship voices critical questions regarding the nature and consequences of analytics [11, 14, 17, 21, 25, 28, 29, 31]. These scholars point out that, due to the complex and inherently subjective nature of analytics, its introduction is likely to have a significant impact on work. Consequently, they call for scrutinizing the consequences of analytics for work, relations, and occupations [15, 23]. Responding to these repeated calls, we provide an empirical case of how analytics occasions occupational transformation.

We report on an ongoing ethnographic study (currently spanning 23 months) at the Dutch Police, following how the police develops and uses predictive analytics. In the police, predictive analytics is referred to as “predictive policing” – the use of analytics to predict, for example, where and when crime is likely to occur [27]. It was introduced in the Dutch police in 2013 and is currently used across nearly all 168 police stations in the Netherlands. The general aim of predictive policing is to facilitate a change in the nature of police work towards more data-driven and efficient policing and, in doing so, to prevent crime from happening.

The findings of our study indicate that the shift towards predictive policing was followed by the emergence of a novel occupational role – “intelligence officers”. Initially, intelligence officers were intended to support police officers in the use of predictive policing technology by helping them make sense of algorithmic outputs. However, by investing substantial expertise in interpreting and translating algorithms and their outputs, intelligence officers became increasingly influential and started to steer police action. As a consequence, the practices of intelligence officers paradoxically came to reinforce police officers’ belief in the superiority of algorithmic decisions over human expertise. We conclude by reflecting on the implications of our findings for the literature on occupational change in the age of analytics and artificial intelligence.

2 Theoretical Background

2.1 Criticisms of the Nature of Analytics

In response to a so-called “data revolution” in organizations of all sorts, critical questions are being raised about the problematic nature and consequences of analytics [9, 11, 14, 17, 21, 28, 31]. One recurrent critical argument is that input data is subjective, because categorization is a product of human judgment [1, 6, 14, 22]. For example, Ribes and Jackson [29] propose that it is impossible to separate data from the data-making practices that instill data with decisions, judgments, and values dictating what is taken into account and what is not. Pine and Liboiron [28] argue that data is not neutral but politically influenced. Similarly, Gitelman [17] cautions: “The imagination of data is in some measure always an act of classification, of lumping and splitting, nesting and ranking” [17, p. 8].

A related argument is that the output of analytics is black-boxed [21, 25]. It is generally assumed that, given large amounts of data, analytics need not address the “why” (causation), since indicating the “what” (correlation) is enough [18]. Newell and Marabelli [21] question the societal impact of this kind of knowledge production and reflect on what it means when it is sufficient that an algorithm produces accurate predictions, even when little is known about what led to those predictions.

Moreover, algorithmic logics are often considered too complex to be fully understood by humans, triggering questions about the implications of such algorithmic complexity. For example, in a recent conceptualization of so-called “learning algorithms”, Faraj et al. [15] reflect on the black-boxed nature of analytics technology itself (rather than merely its output). Although algorithms always embody design choices – for example, the designer’s values, beliefs, and ethical standards – these often cannot be straightforwardly understood by human actors [13, 15]. Because an algorithm can be constructed in ways that carry hidden political consequences, such as including or excluding certain groups of people, the danger is that design choices will likely remain hidden or can be understood only by a few, highly specialized professionals [15].

Managing this complex, black-boxed nature of analytics therefore requires human interpretation [14]. Scholars also highlight that the process of interpretation warrants careful attention, as it is contingent on cultural and organizational conditions. For example, Schultze [30] demonstrates how the interpretations of information made by three occupational groups (system administrators, intelligence analysts, and librarians) were shaped by their struggles over the legitimacy of their organizational position. Striving to show how their individual occupations added value to the collective process of knowledge production, the separate actors engaged in expressing, monitoring, and translating information. These three informing practices show that the interpretation of information is not independent and objective but can be driven by the status struggles of individual occupational groups vis-à-vis each other and the organization.

Introducing such a complex and subjective technology is thus likely to prompt changes in work, relations, and occupations [15]. A relevant question that emerges is how the use of analytics influences occupational work.

2.2 Analytics and Occupational Change

Previous research on occupational change due to technology use generally identifies two possible scenarios for the transformation of work and occupational expertise. One scenario involves an occupation transforming the expertise that is key to its existence, thereby significantly reconfiguring its identity and the nature of its work [4, 10, 20, 24, 32]. An early account is provided by Zuboff [32], who described how pulp workers, faced with the introduction of information technology into the factory, had to shift their skills from action-centered to “abstract” and “intellective”. Pulp workers traditionally relied on directly sensing materials, for example, judging the quality of pulp by its look and feel. In the new situation, they had to learn how to judge the quality of materials from a distance, relying on computerized signs and symbols and using abstract thinking and procedural reasoning. Similarly, Nelson and Irwin [20] explain how librarians, faced with the development of Internet search, had to completely redefine the core of their expertise and identity. Not only did librarians have to learn how to master Internet search effectively, they also had to expand the repertoire of their work by becoming experts in new domains, such as learning how to interpret different Internet results, how to teach Internet search to clients, and how to connect disparate web sources. The first scenario in the current literature would thus predict that an occupation faced with new technology goes through a considerable reconfiguration of the nature of its work, letting go of old expertise and developing a range of new ways of working.

A second scenario concerns rising tensions or conflicts between occupations as a result of technology introduction [3, 5, 7, 8, 26]. For example, Barrett et al. [7] describe how the introduction of a pharmaceutical robot led to tensions in the relations between three occupational groups in pharmacy work: pharmacists, technicians, and assistants. While the robot allowed technicians and pharmacists to specialize in novel and exciting domains – such as fixing the robot’s mechanical failures and engaging in cutting-edge clinical research – it simultaneously strained the relationship between technicians and assistants: while the technicians developed new expertise and gained authority, the robot took over many of the assistants’ tasks, which had a detrimental effect on their expertise and status. Similarly, Pachidi et al. [26] found that the introduction of analytics in telecommunications work led to a serious clash between two groups in the workplace: account managers and data scientists. The data scientists’ claim that they could predict customer behavior from data sources, without the need for any personal relations, significantly threatened the whole raison d’être of account managers, who relied on cultivating personal relations with customers as an important source of their income. The fundamental disagreement between the two occupational groups resulted in account managers refusing to engage with analytics altogether, which escalated into a significant conflict between the two groups and ultimately led to layoffs of account managers.

In sum, available research thus far would lead us to expect that occupational groups faced with new technology either redefine their core expertise or find themselves in conflictual relationships with other occupational groups. Our empirical study of the use of analytics in the police points to a different scenario: the emergence of a new occupational role that, in collaboration with other occupational groups, makes analytics meaningful for work. Less is known about how such a scenario plays out in practice. In what follows, we report on a study that identifies what happened when the police intentionally introduced a new occupational role to take charge of analytics and support police officers in the shift to data-driven work.

3 Case Setting and Research Methodology

Our study focuses on the situated work practices of the Dutch Police, to which we gained access in October 2016. Data collection took place in a large city in the Netherlands with four police stations, collectively housing over 700 full-time employees. We examined the practices surrounding the use of a Dutch predictive policing algorithm – the so-called “Criminal Anticipation System” (CAS). The algorithm was developed in-house by a data scientist (Dennis) who joined the police in 2012. After extensive work experience as a data miner in the marketing industry, Dennis had come to consider his work “not very satisfying” and wanted to apply his data preparation and modelling skills to a more meaningful purpose. Inspired by the PredPol algorithm – first introduced by the Los Angeles Police Department in 2008 [27] – Dennis was excited about the opportunity to use his insights from the marketing industry to infer patterns in crime behavior and predict crime chances. Dennis remained the lead developer of CAS throughout the process of its roll-out across all Dutch police stations.

CAS runs on a logistic regression algorithm. Constrained by the limited amount and types of data made available to the data science department, Dennis included 50 different variables, divided into two categories: location-specific characteristics and crime history. Location-specific characteristics are based on statistical data indicating, for example, family size, family income, and the number of residents on social security. They also include police data about, for example, the distance to the closest known burglar or the number of suspects living in a specific area. Crime history is based on the number and spread of criminal incidents over the last three years in and around a location.

Using these variables, Dennis developed the CAS algorithm to calculate crime chances in hot times (time blocks of four hours) and hotspots (area blocks of 125 by 125 m). The hot times and hotspots are visualized in a heat map (see Fig. 1) with the aim of answering two essential resource allocation questions for police management: where to deploy police officers and at which times. CAS was introduced to the Dutch Police in 2013 in one police district. By the end of 2017, over 90 Dutch police stations were using it, and CAS is currently deployed across all police stations in the Netherlands.

Fig. 1. An example of a CAS heat map.

Our ethnographic fieldwork consists of observations and interviews, supplemented by archival documents such as job descriptions. All observations were conducted by the first author. The total of 410 h of observation covers daily work at the police station, 90 briefings, and 22 team meetings.

In addition, we conducted 18 formal semi-structured interviews (ranging from 25 to 120 min): 4 with data scientists, 5 with police management, 3 with intelligence officers, and 6 with police officers. During these interviews, participants were asked to describe the trajectory they had gone through in the police, their everyday activities, and their use of CAS. We also asked for their views on the usefulness of such a technology for crime prevention. Most formal interviews were voice recorded, summarized, and transcribed. When voice recording was not possible, detailed notes were taken during the interview and expanded afterwards into an interview summary.

4 Findings

The findings are divided into four sections. We first explain the background and aims of introducing predictive policing technology. Second, we describe how the introduction of predictive policing occasioned the establishment of a new occupational mandate for a group that came to be labelled “intelligence officers”. Third, we explain what expertise intelligence officers developed in practice. Fourth, we describe how, while police officers increasingly depended on the human expertise of intelligence officers, this work paradoxically reinforced police officers’ belief in the superior value of algorithmic decision making.

4.1 Intelligence-Led Policing and Predictive Policing Technology

In 2013, the Dutch police introduced predictive policing through an internally created algorithm called the “Criminal Anticipation System” (CAS). The introduction of CAS was part of the “intelligence-led policing” policy change, which had started in 2008. The overall aim of this strategic transformation was to increase the awareness and importance of working with data, including differentiating between strategic and operational information, improving the reporting skills of police officers, making information available in real time, and establishing formal procedures for analyzing existing data that otherwise remained unutilized.

As part of this approach, introducing CAS promised to achieve three specific goals. First, knowing where to go at what time should allow the police to schedule their resources more efficiently, for example, by reducing or increasing the number of police officers scheduled depending on predicted hot times. Second, due to the large amount of data included, policing decisions during fieldwork – e.g., about where to surveil to counter housebreaking – should become more objective by replacing “gut feeling” with data-based decisions. Finally, the overall aim of introducing CAS was to transform the traditionally reactive nature of police work into a more proactive stance towards preventing crimes such as housebreaking or youth gangs causing nuisance. In essence, CAS should assist in preventing crime and safeguarding the lives of police officers while on the road; it should become just as important as any other police skill or tool. To illustrate this ambition, police manager Marga compared the importance of using analytics to police officers’ personal guns: “they also don’t leave their gun on the table”, she explained, referring to analytics being just as indispensable.

To achieve these goals, CAS had to be adopted and used by police officers. Previous experiences with the introduction of new technologies had shown police management that, as police manager Anna recalled, merely “throwing a new technology over the fence” and expecting police officers to start using it would likely result in a failure of adoption. According to data scientist Dennis, this risk was even greater when introducing an algorithm such as CAS because of its complex, math-based nature. Dennis believed that police officers would be unwilling to engage deeply with deciphering and interpreting the output of CAS because of their occupational culture, referring to police officers as “people who are selected for being very eager to act and not very eager to think”. To shift police officers to a more data-driven way of working, Dennis argued that algorithmic outputs should be explained by “echoing what the police officers themselves say”. To do this, the “why, what and how”, or as he put it “the qualitative stuff”, had to be added to algorithmic outputs. However, adding context required interpretation and translation skills, which differed from the data scientists’ data preparation and modelling skills. This gap had to be filled by people with a different kind of expertise. These people became the so-called “intelligence officers”.

4.2 The Intelligence Officer as a New Occupational Role

To fill the gap between data science and police skills, data scientists and police management considered introducing an intermediary who could support the work of police officers by making algorithmic output meaningful for police work. At the time of the introduction of CAS in 2013, there was a group within the police – referred to as “information officers” – that seemed the most logical candidate for this role, since its members were already working with information, albeit in a different way. Traditionally, the work of an information officer included supporting police management and criminal investigation by gathering various types of information. Former information officer Ben recalled what this role involved:

I have assisted a lot in murder investigations. There you would get various work orders like ‘map this’, or ‘figure that out’, or ‘how do the families relate’. These kinds of things. Or business relations. […] It was about delving into all different internal sources. You didn’t really have access to Internet back then.

Due to their focus on information gathering, information officers had in-depth knowledge of where data – such as crime numbers, suspect data, or information about criminal networks – could be found in police databases. However, their work was regarded as relatively low-status, because information officers were not required to interpret the information they found. Instead, as data scientist Dennis explained, they would “collect all data, print it, put a staple in it and give it to their boss”. Information officers were also sometimes described as “not very assertive”, keen on “avoiding confrontations”, used to “following orders” and doing “kind of boring work” (data scientist Dennis). Moreover, the information officer position was informally regarded as a back-office department for police officers who had become unfit to continue working in the field. In essence, the information officers’ position was considered a “shelter for police officers with back problems or illnesses” (intelligence officer Ben).

Despite their relatively low status, the data scientists acknowledged the information officers’ expertise with police databases and reasoned that this occupational group could be well equipped to take on the additional tasks that emerged with the introduction of predictive policing. Instead of just gathering information according to predefined requests, information officers were to take on novel responsibilities, such as interpreting algorithmic output, summarizing it for police officers, and suggesting potential actions. In this way, information officers were required to “add qualitative stuff” to algorithms and to provide back-office support to police officers for using algorithmic outputs. Using the example of housebreaking, Dennis explained what that would involve:

You could say: ‘We have quite a drug problem over here [in this neighborhood]’. Then you could wonder: ‘Maybe it [housebreaking prediction] is because of the junkies?’ Well, junkies don’t prepare much, so maybe it is just very easy to burgle there. Maybe the houses have bad locks so you can enter with a simple trick. That kind of information should be retrieved by the information officer. […] Then we can think of what to do about it. As police, we are of course very inclined to just send a car there [for surveillance] but it could be that this is completely useless and that they should do something totally different.

Reflecting the shift in the nature of information officers’ work, a new job title – “intelligence officer” – and a new job description were introduced in 2013. The new job description was significantly longer and more focused on interpretive tasks, rather than the operational tasks that had characterized the prior work of information officers. For example, the responsibilities now included so-called “data editing” requirements, which involved making sense of the data and adding context to it. Intelligence officer Ben explained his perspective on the transformation:

Back in the days, when we received a crime notification, we gathered all information and handed that package over [to police officers]. But I guess that when you gathered and read all that information, you can also interpret it, right? You can confirm or refute such a notification, or you can add some advice like: ‘maybe this and that requires further investigation’, you know. Information is more and more being interpreted.

As a result of the shifting nature of their work, intelligence officers started to gain in-depth expertise in interpreting and working with algorithmic output. This expertise centered on meaning-making practices, on which we elaborate below.

4.3 Intelligence Work in Practice

Although intelligence officers were initially intended merely to provide back-office support to police officers for using algorithmic outputs, they quickly discovered that working with CAS required more than simply “adding qualitative stuff”, as the data scientists had imagined. In practice, the algorithmic output was highly complex: selecting hotspots and hot times, for example, required comparing different graphs and maps. It was also voluminous: the heat map regularly showed entire districts covered in hotspots. The outputs often seemed nonsensical: for example, predictions of car burglary appeared in areas where cars were not allowed. And finally, the algorithm remained black-boxed, so the intelligence officers often complained that they did not understand the output because there was no transparency about which variables mattered most for predicting hotspots or hot times. To make algorithmic outputs legible and meaningful for police work, the intelligence officers had to go beyond just “adding qualitative stuff” and slowly started to learn how to unpack the specific features of the algorithm.

Besides unpacking, intelligence officers also had to make sure that police officers would accept the algorithmic outputs, and they actively considered how best to integrate CAS outputs into police work. They reasoned that it was important not to overload police officers with too many tasks for covering hotspots and hot times, because a large part of police work still consisted of responding to unexpected incidents not included in CAS, such as car accidents. Indeed, as commander Rudy emphasized, police officers had limited resources available: “Look, we [police] cannot handle everything [all crimes], but let’s at least make a choice and set a priority like ‘we will certainly handle this [type of crime], because we think it is now important’”.

Moreover, intelligence officers anticipated that their recommendations to police officers should vary the hotspots and types of crime they introduced, so that the predictions would not look too repetitive and would keep police officers interested in using them. For example, during one of the shifts, intelligence officer Louisa was trying to decide which hotspots to recommend for sending police officers to surveil against housebreaking. The algorithm had produced two hotspots that had otherwise never shown up, and two “regular” hotspots that were common crime spots in the district. Louisa was not sure which hotspots to select: the new or the common ones? She asked Ben, and together they decided to select the new ones. They reasoned that police officers would get bored if the hotspots stayed the same and would be more excited to go into a new neighborhood. According to Ben, variety increases the chance that “police officers take hotspots seriously” (observation notes, 13-11-2017).

Finally, to make algorithmic outputs “echo what police officers themselves say”, intelligence officers figured that it was important to bring outputs closer to the context of police work. They reasoned that this would be possible by including additional background information, such as possible suspects or information about surrounding neighborhoods. Police commander Rudy explained this viewpoint from the police officers’ perspective:

If you keep the goals [of the algorithmic output] too broad, then police officers will let it go too fast. If you dare to add possible suspects, then they will quickly start searching. Then they’ll better scan the surroundings, like: ‘Hey, we see someone strolling over there’. I think the concreter you are, the more feeling police officers will have for it [the output].

With the aim of making algorithmic outputs meaningful for police work, intelligence officers thus went beyond simply “supporting” police work; their decisions started to steer the work of police officers. Specifically, because working with algorithmic output required reducing the number of hotspots and hot times presented to police officers, intelligence officers in fact prioritized certain types of crime according to their own judgment. Moreover, because they had to condense the results of their interpretation into a single succinct PowerPoint slide to be shown to police officers, they significantly simplified algorithmic outputs, compressing a messy picture into a seemingly clean and objective result. Finally, because intelligence officers also included information from other databases, such as possible suspects, they effectively gave the impression that contextual information was also part of the algorithmic output.

In sum, while intelligence officers’ work was intended merely to support police officers in using algorithmic outputs, in practice they came to exert a much greater influence on how police work should be organized and where priorities should fall. Intelligence officers started to recognize this growing importance as well: “Most of the time, at least for us, police officers do not know what they need. And then I think ‘well, I know what you need to do because I see a big problem in this neighborhood, so you should go there’. So then I tell them what they should do” (intelligence officer Wendy).

4.4 Police Officers’ Perspective

Over time, the influence of intelligence officers became acknowledged by police officers, and their activities were increasingly incorporated into police routines. For example, at the end of the first year of our observations, a new practice was established that required the police commander to meet with an intelligence officer each morning before the briefing. During this meeting, the intelligence officer instructed the commander about the crime types, hotspots, and hot times – including background information – that they deemed most important to communicate and emphasize to the team. As intelligence officer Ben explained:

We give an interpretation [to the algorithmic output] so that police officers can do something with it. In other words: ‘It is this for these reasons’. You can also give them advice, like: ‘I would focus on that or that person’ or ‘I wouldn’t do anything about that [crime] because it’s way too unpredictable and you can’t do anything about it’.

Over the course of the two years of our fieldwork, intelligence officers acquired even more influence over police work. For example, they became the most important source for formulating strategically focused work assignments – “to-do” lists for police action used for the weekly guidance of police fieldwork. Previously, compiling a to-do list for police work was the task of local police officers responsible for specific neighborhoods. With the use of predictive policing and CAS, the local police officers’ to-do lists started to be viewed as too idiosyncratic: a messy and random list of activities. Gradually, the responsibility for making more strategically focused work assignments was placed in the hands of police management, who embraced predictive policing and made intelligence officers their central source of input. Consequently, police actions became de facto driven by the intelligence officers’ judgments and interpretations of CAS.

Furthermore, police officers often accepted the suggestions of intelligence officers without questioning their reasoning. A recent briefing discussion illustrates this. For one of the shifts, the CAS prediction indicated a high level of nuisance. Considering this an important prediction, intelligence officer Louisa found a linkable suspect in the police databases and manually added him to the slide to be shown during the briefing. Upon seeing the slide, a discussion arose about the suitability of that suspect. A couple of police officers claimed that this specific suspect was a much “tougher guy” and said that it was ridiculous to keep an eye on him “merely as a suspect of nuisance”. The commander overruled the discussion by saying that this information came from the intelligence department, so there would “surely be a reasonable link”. The other police officers acknowledged this and did not further question the suspect’s suitability. The briefing ended without further discussion (observation notes, June 2018).

One of the reasons for this ready acceptance was that police officers seemed impressed by the complexity of algorithms. “What I’ve seen and what I heard from [intelligence officer] Eva is that CAS includes so many variables, that machine must really be a monster!” said police officer Michael. As a consequence, police officers believed that they might not be “smart” enough to question such complex algorithms and assumed that it was better to simply accept the output. As police officer Harry explained:

“[W]hen I really think about crime predictions, then I wonder: is a burglar really influenced by something that can make us predict where burglary will happen? Or is it just his target area? But I shouldn’t think too much about that, because I don’t have the answer. I’m quite a follower in that sense. I trust that the people who really understand this thought about these things.”

Even though it did not always make sense to them, police officers increasingly came to accept that crimes can be systematically explained through the use of data and algorithms, which they assumed transcended their level of understanding. Police officer Jay explained his trust in the expertise invested in the technology, without exploring its embedded assumptions or doubting its legitimacy: “I would say that it must come from somewhere. It won’t be implemented just out of the blue.”

In line with their belief in the usefulness of algorithms, police officers started to regard their work as having higher value when they followed the advice generated by the predictions:

I feel useless when I’m just driving around without seeing anything. […] If something [CAS] tells me that the chances are high that a burglary will happen over there, well that’s what we want! Catching thieves or at least prevent crime. So I will go for it! (Police officer Harry).

Police officer Jimmy shared a similar perspective: “With the right information I can make the right decisions,” he said, “and making the right decisions gives me a purpose.” Moreover, driven mainly by their growing respect for the algorithmic recommendations the intelligence officers provided, police officers also started to deem the insights and expertise associated with algorithms superior to their own judgment, viewing the latter as “subjective” and “blind”. Police officer Harry compared the recommendations of a local police officer with those generated from data:

I think a local police officer is also somehow subjective and has his own agenda. He may think that some type of crime is particularly important, but perhaps this is not at all what the data shows. […] Maybe the data points at something completely different [some other type of crime in another part of town]. I don’t think we should blindly trust the local police officer’s perspective.

In sum, police officers gradually embraced the growing influence intelligence officers came to exert over their work through the use of predictive policing. Even though intelligence officers were initially introduced to provide relatively simple back-office support to police officers, their work in practice came to involve many interpretations and judgments to make algorithmic outputs meaningful for police work. Because police officers believed algorithms to be incomprehensibly complex, they argued that they were not smart enough to understand them, viewed their own tacit expertise as inferior to data-based recommendations, and eventually accepted the algorithmic outputs presented to them without questioning their reasoning. As a result, the new occupational role paradoxically reinforced police officers’ belief in the superiority of algorithmic decisions.

5 Concluding Remarks

This study aimed to understand what happens to occupational work upon the introduction of analytics. Our findings offer three contributions to the existing literature on occupational change due to technology use and to the critical debate on the nature of analytics. First, we show that analytics occasions the emergence of an intermediary occupational role that takes charge of analytics and unpacks specific algorithmic features. Prior literature on occupational change focuses either on the skill transformation of separate occupations [20, 32] or on the tensions and conflicts that result when multiple occupational groups are involved [7, 26]. We extend this literature by showing the possibility of the rise of an intermediary occupational role in between analytics designers and users. Thereby, we respond to calls for a relational perspective on occupations [2].

Second, our study shows that analytics is not only constructed by the design choices of its creators, but is also iteratively shaped by the expert work of intermediary occupations that take on the task of unpacking the features of algorithms to make them usable. We thereby respond to calls for disentangling analytics technology [14, 15, 23]. We extend the current critical debate regarding the nature of analytics [6, 11, 17, 21, 22, 25, 26, 28, 29] by giving a detailed account of analytics in action, highlighting how different occupational groups perform work with analytics.

Third, our findings indicate that engaging in such “unpacking” practices is consequential for the relations between occupational groups. As such, identifying the role of intermediaries in analytics at work has important implications for the distribution of power between occupations. While prior literature acknowledged the growing power of data scientists as the designers of analytics who can determine what counts as knowledge and what does not [15, 16, 26], we highlight that the growing power and steering influence of intermediaries also warrants attention. The growing legitimacy and use of algorithms makes this shifting power distribution even more salient.

To conclude, we have shown how the introduction of a new occupational role, intended to add interpretations to algorithmic outputs in support of existing work, also had a counterintuitive consequence. While unpacking algorithmic features and making outputs meaningful for work through interpretation and human judgment encouraged the use of analytics, it also paradoxically reinforced police officers’ belief in the superiority of algorithmic decisions over human expertise. In the long run, the danger of creating a new occupational role that interprets and unpacks analytics to make it readily usable is that precisely these practices may further black-box the human expertise inherently embedded in analytics.