On the person-based predictive policing of AI

  • Original Paper
  • Ethics and Information Technology

Notes

  1. Berk (2008) notes that researchers have applied statistical methods to crime for nearly a hundred years.

  2. This distinction is based on Ferguson (2017b), Egbert and Krasmann (2019), and a Hitachi (2019) report. Strictly speaking, although these approaches are all labeled “predictive policing,” they share neither a common theoretical basis nor the same implications and consequences in practice (Ferguson 2017a, p. 1148).

  3. According to USA TODAY (Baig 2019), after the Parkland shooting, three US companies (Bark Technologies, Gaggle.Net, and Securly Inc.) claimed that their AI systems can detect possible signs of cyberbullying and violence by scanning student emails, texts, documents, and social media activity.

  4. In 2016, for example, the UK’s National Police Chiefs’ Council (NPCC), the Association of Police and Crime Commissioners (APCC), and the National Crime Agency released “The Policing Vision 2025,” a ten-year plan to help law enforcement transform and adapt to the modern policing environment. The aim is to employ innovative and transformative approaches for proactive and preventative policing. Under this plan, a new national super-database, the “National Law Enforcement Data Programme” (NLEDP), was scheduled to replace the existing separate systems by 2020.

  5. In the dual use literature, dual use technologies are tools that can be used to achieve good or evil. AI seems to be dual use in that malevolent individuals can use it to perpetrate wrongful harms. However, AI is unlike guns and H-bombs in that it is not designed to harm. In what sense AI is a dual use technology is therefore an interesting question. See Miller (2018) for an analysis of the concept of dual use.

  6. Assigning meaning is crucial in computer science (e.g., mapping symbols onto actions). A Turing machine can initially be viewed as manipulating otherwise meaningless marks; these marks become symbols when they are linked with rules that assign them reference and make them conform to rules of syntax. This happens, for example, when the marks are taken as 0s and 1s and construed as numerals, and hence as symbols standing for binary numbers, and the same is true of standard construals of machine code. It is also standard practice to add further layers of symbols and representation by building these up out of binary code. We thus obtain higher levels of representation, which can be assigned different kinds of reference and subjected to further kinds of syntax (see the sketch below). However, even if we can construct the meaning of an individual computational procedure, it may be hard to analyze the meaning of the billions of procedures in an AI system.
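To make the layering in note 6 concrete, here is a minimal sketch (our own illustration, not drawn from any cited system) in which the same marks are successively construed as binary numerals, as numbers, and as symbols mapped onto actions; the action table is purely hypothetical.

```python
# A minimal sketch of how layers of meaning can be assigned to otherwise
# meaningless marks (see note 6). The "actions" mapping is hypothetical.

marks = "01001000 01101001"                 # layer 0: mere marks

numerals = marks.split()                    # layer 1: construed as binary numerals
numbers = [int(n, 2) for n in numerals]     # layer 2: numbers (72, 105)
symbols = [chr(n) for n in numbers]         # layer 3: symbols in a character code

# layer 4: a further construal, mapping symbols onto actions
actions = {"H": "halt", "i": "increment"}
print(numbers, symbols, [actions[s] for s in symbols])
```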

  7. The human brain is a two-way black box. On the one hand, psychological behaviourists, holding that mental states are hard to measure, suggest studying observable outward behaviours instead. On the other hand, Bayesian theorists of predictive coding argue that the brain only measures sensory signals without directly measuring the external world (Swanson 2016). This creates a problem: the brain can only infer the “cause” in the external world from the “effect” of the sensory signals (see the sketch below). This puzzle is described as the “view from inside the blackbox” (Clark 2013, p. 183) or “the skull-bound brain” (Hohwy 2013, p. 15).
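As a rough illustration of the inference described in note 7 (our own toy example, not drawn from the cited works), Bayes’ rule lets a system that observes only a sensory “effect” assign probabilities to its hidden external “causes”:

```python
# A toy example of inferring hidden external "causes" from observed sensory
# "effects" via Bayes' rule (see note 7). All numbers are made up.

priors = {"cat": 0.7, "fox": 0.3}                  # prior beliefs about causes
likelihood = {                                      # P(signal | cause)
    "rustling": {"cat": 0.2, "fox": 0.6},
    "meowing":  {"cat": 0.8, "fox": 0.4},
}

def posterior(signal):
    """Update beliefs about the unobserved cause, given only the signal."""
    joint = {cause: priors[cause] * likelihood[signal][cause] for cause in priors}
    total = sum(joint.values())
    return {cause: p / total for cause, p in joint.items()}

print(posterior("rustling"))   # the "fox" hypothesis now outweighs the prior favourite
```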

  8. In addition to biases, there is the problem of undecidability. It has been proven impossible to construct an algorithm that provides correct answers to all yes-or-no questions (Floridi 2016). For example, Kleene (1943) applies Gödel’s incompleteness theorem to computation and shows that no effective system can correctly determine, for every program and input, whether the program will finish running or continue to run forever (the halting problem; see the sketch below). Biases and errors are therefore somewhat inevitable (Lin et al. forthcoming).
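The diagonal argument behind the halting problem mentioned in note 8 can be sketched in a few lines of code; `halts` below is a purely hypothetical universal halting checker, and the point of the sketch is that no correct implementation of it can exist:

```python
# A sketch of the diagonal argument for the halting problem (see note 8).
# `halts` is hypothetical: assume, for contradiction, it always answers correctly.

def halts(program, data):
    """Hypothetical oracle: True iff `program` halts when run on `data`."""
    raise NotImplementedError   # no correct implementation can exist

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:     # loop forever if the oracle says "halts"
            pass
    return "done"       # halt if the oracle says "loops forever"

# Asking whether `paradox` halts on itself yields a contradiction either way,
# so no algorithm decides halting in general; some errors are inevitable.
```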

  9. Interestingly, despite adopting a total social credit system, China also announced the Beijing AI Principles (2019, May) through the Beijing Academy of Artificial Intelligence (https://www.baai.ac.cn/blog/beijing-ai-principles)—an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government.

  10. According to Ferguson (2017b), before the Kansas City Police Department introduced advanced social network analysis to spot at-risk suspects in 2012, Kansas City’s homicide rate was two to four times the national rate. Although the number fell 26.5% after the new technology was employed, homicide and shooting rates climbed dramatically again in 2015. Likewise, in 2013 the Chicago police adopted different algorithms for focused deterrence, which located potential offenders based on their personal criminal records. At the beginning, the software generated numerous false-positive predictions, but its accuracy had improved significantly by 2016 (more than 70% of people who were shot were on the list). However, this by no means implies an end to the violence, because the technology only “identifies the disease but offers no cure” (Ferguson 2017b, p. 49). See also Saunders et al. (2016) for similar concerns.

  11. For example, the Chicago Police Department has used an algorithm to prioritize limited resources toward those at highest risk by rating every person arrested with a threat score from 1 to 500-plus. Due to the lack of specific guidance on what treatments to apply to the subjects on the list, however, most districts did not focus on intervening with them, and careful research shows that the list did not reduce homicides (Saunders et al. 2016). See also Ferguson (2017b, p. 40); Perry et al. (2013); Couchman (2019).

  12. The New Orleans Police Department has applied techniques similar to those employed by the Chicago Police Department since 2012. Seeking a more integrated approach to reducing crime with predictive technologies, the city also supplemented these techniques with the Group Violence Reduction Strategy as part of its broader NOLA for Life murder-reduction strategy.

  13. As Ferguson observes, “[b]ig data collection will not count those whom it cannot see” (2017b, p. 179). Big-data-driven systems will overlook populations who do not “engage in activities that big data and advanced analytics are designed to capture” (Lerman 2013, p. 56). In our case, those with criminal records, gang associations, or prior police contact are the most likely to be marked as suspicious. This raises the concern that, owing to “the initial selection bias” (Ferguson 2015, p. 402) of law enforcement data-collection systems, certain individuals will always be at risk of becoming future targets of suspicion even though they are not currently engaging in criminal activity. The danger is straightforward: databases shaped by this initial selection bias will make it easier for a police officer to justify her suspicion if she tends to believe that a particular type of person is more likely to commit a crime (Saunders et al. 2016; Richardson et al. 2019). A small simulation of this dynamic is sketched below.
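To make the worry in note 13 concrete, here is a minimal simulation (our own illustration, with made-up rates and group labels) of how a database that records offences only when there is police contact makes an already watched group look far riskier than an unwatched group, even when both offend at the same underlying rate:

```python
# A toy simulation of "initial selection bias" (see note 13). Two groups
# offend at the same true rate, but one receives far more police contact,
# so the database records far more incidents for it. All rates are made up.

import random
random.seed(0)

TRUE_OFFENSE_RATE = 0.05                                    # identical for both groups
POLICE_CONTACT_RATE = {"watched": 0.60, "unwatched": 0.10}  # unequal scrutiny

def recorded_incidents(group, population=10_000):
    """Offences enter the database only when there happens to be police contact."""
    count = 0
    for _ in range(population):
        offended = random.random() < TRUE_OFFENSE_RATE
        observed = random.random() < POLICE_CONTACT_RATE[group]
        if offended and observed:
            count += 1
    return count

for group in ("watched", "unwatched"):
    print(group, recorded_incidents(group))
# The "watched" group looks several times riskier in the records,
# even though the underlying behaviour is identical.
```

Any risk score built on such records inherits the disparity, which is the kind of concern raised by Saunders et al. (2016) and Richardson et al. (2019).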

  14. Also, according to Miller and Blackler’s (2017) normative theory of policing, the protection of moral rights, constrained by democratically supported laws, is the principal purpose of policing, and it is this purpose that justifies policing. We can therefore claim, for example, that police officers are justified in arresting and detaining someone for assault; they possess the moral right to do so in virtue of their membership in a morally legitimate police institution. Police officers are individually institutionally responsible for at least some of their actions and omissions in relation to the purpose of protecting moral rights.

  15. When used properly, the technologies may benefit law enforcement through increased accuracy. As Ferguson (2015) points out, big data yields not only a wealth of inferences supporting suspicion, but also an equal number of potentially exculpatory facts. When big data is available, police should be required to use it in an exculpatory manner as well. It makes it possible to search for more, and more precise, information, including exculpatory information that reduces suspicion, and can thus support more reliable predictions than human investigators alone. It also allows for a more focused use of police resources. Moreover, with a vast amount of information, big data technologies can pick up unexpected, seemingly innocuous connections and correlations that point to future criminal activities. Take one of Ferguson’s examples (2015, pp. 395–396): a drug dealer needs tiny plastic bags and a scale to package crack cocaine, and recent innovations can help track the sale of these items and thus help spot the drug dealer. Similarly, big data is useful for revealing patterns of national or transnational crimes that were difficult to track before.

  16. China is also exporting its surveillance technology globally. See Mozur et al. (2019).

  17. A similar debate is currently ongoing in the UK, concerning the Metropolitan Police’s and the Home Secretary’s trials of facial recognition surveillance technology since 2016. According to the final report of the London Policing Ethics Panel, an independent panel set up by the Mayor of London to provide ethical advice on policing issues that may affect public confidence, ‘[m]arginal benefit would not be sufficient to justify [live facial recognition’s] adoption in the face of the unease that it engenders in some, and hence the potential damage to policing by consent’ (London Policing Ethics Panel 2019, p. 47). The panel suggests that the technology should not be adopted unless the field trials show that it can significantly increase police efficiency and effectiveness in dealing with serious offences. Currently, the human rights organisations Liberty and Big Brother Watch are challenging the use of facial recognition cameras in the courts.

  18. Restrictions are placed on the rights of data subjects, where necessary and proportionate, in order to avoid obstructing an investigation or inquiry; avoid prejudicing the prevention, detection, investigation or prosecution of criminal offences or the execution of criminal penalties; protect public security; protect national security; and protect the rights and freedoms of others. See the Guide to Law Enforcement Processing (under the DPA) on the Information Commissioner’s Office website (https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-law-enforcement-processing/individual-rights/the-right-of-access/).

  19. As our solution does not necessarily involve police intervention, it raises the question of whether the term “predictive policing” should be replaced, or integrated into a larger framework of human security.

  20. The databases are the Police National Computer (PNC) and the Police National Database (PND). For further details, please refer to the UK government’s NLEDP Privacy Impact Assessment Report (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/721542/NLEDP_Privacy_Impact_Assessment_Report.pdf).

References

  • ACLU. (2016). Community control over police surveillance—Guiding principles. Retrieved June 10, 2019, from https://reurl.cc/M7EdKX.

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.

  • Amnesty International. (2018). Amnesty international report 2017/18: The state of the world’s human rights. Retrieved March 3, 2019, from https://reurl.cc/Ylz6Ko.

  • Amnesty International United Kingdom. (2018). Trapped in the matrix: Secrecy, stigma, and bias in the Met’s gangs database. Retrieved March 3, 2019, from https://reurl.cc/8lmnzy.

  • Baig, E. C. (2019). Can artificial intelligence prevent the next Parkland shooting? USA TODAY (Feb 13, 2019). Retrieved July 10, 2019, from https://reurl.cc/ObWqzy.

  • Barocas, S., Bradley, E., Honavar, B., & Provost, F. (2017). Big data, data science, and civil rights. arXiv preprint http://arxiv.org/abs/1706.03102.

  • Bennoune, K. (2006). A contextual analysis of headscarves, religious expression, and women’s equality under international law. Columbia Journal of Transnational Law, 45, 367–426.

  • Berk, R. (2008). Forecasting methods in crime and justice. The Annual Review of Law and Social Science, 4, 219–238.

  • Big Brother Watch. (2018). Face off: The lawless growth of facial recognition in UK policing. Retrieved July 10, 2019, from https://reurl.cc/xDq0XL.

  • Brown, H. R., & Friston, K. J. (2012). Dynamic causal modelling of precision and synaptic gain in visual perception—An EEG study. Neuroimage, 63(1), 223–231.

  • Buchholtz, G. (2020). Artificial intelligence and legal tech: Challenges to the rule of law. In Regulating artificial intelligence (pp. 175–198). Cham: Springer.

  • Bodeen, C. (2019). Hong Kong protesters wary of Chinese surveillance technology. The Associated Press (June 14, 2019). Retrieved July 8, 2019, from https://reurl.cc/24qg3O.

  • Bullington, J., & Lane, E. (2018). How a tech firm brought data and worry to New Orleans crime fighting. The New Orleans Times-Picayune (Mar 1, 2018). Retrieved June 9, 2019, from https://reurl.cc/D156DR.

  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st conference on fairness, accountability and transparency, PMLR (Vol. 81, pp. 77–91).

  • Castelvecchi, D. (2016). Can we open the black box of AI? Nature News, 538(7623), 20–23.

  • Chen, S., & Hu, X. (2018). Individual identification using the functional brain fingerprint detected by the recurrent neural network. Brain Connectivity, 8(4), 197–204.

  • Chomsky, N. (2006). Failed States: The abuse of power and the assault on democracy. New York: Metropolitan Books.

  • Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.

  • Conger, K., Fausset, R., & Kovaleski, S. F. (2019). San Francisco bans facial recognition technology. The New York Times (May 14, 2019). Retrieved June 25, 2019, from https://reurl.cc/1QR4pV.

  • Corsaro, N., & Engel, R. S. (2015). Most challenging of contexts: Assessing the impact of focused deterrence on serious violence in New Orleans. Criminology and Public Policy, 14(3), 471–505.

  • Couchman, H. (2019). Policing by machine: Predictive policing and the threat to our rights. Retrieved July 10, 2019, from https://reurl.cc/RdM1Er.

  • Degeling, M., & Berendt, B. (2018). What is wrong about robocops as consultants? A technology-centric critique of predictive policing. AI & Society, 33(3), 347–356.

  • Devarajan, S., & Khemani, S. (2018). If politics is the problem, how can external actors be part of the solution? In K. Basu & T. Cordella (Eds.), Institutions, governance and the control of corruption (pp. 209–251). Cham: Palgrave Macmillan.

  • Egbert, S., & Krasmann, S. (2019). Predictive policing: Not yet, but soon preemptive? Policing and Society.

  • Fajnzylber, P., Lederman, D., & Loayza, N. (2002). Inequality and violent crime. The Journal of Law and Economics, 45(1), 1–40.

  • Ferguson, A. G. (2015). Big data and predictive reasonable suspicion. University of Pennsylvania Law Review, 163(2), 327–410.

  • Ferguson, A. G. (2017a). Policing predictive policing. Washington University Law Review, 94(5), 1115–1194.

  • Ferguson, A. G. (2017b). The rise of big data policing: Surveillance, race, and the future of law enforcement. New York: New York University Press.

  • Floridi, L. (2016). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083).

  • Friston, K. (2019). Publisher correction: Does predictive coding have a future? Nature Neuroscience, 22(1), 144.

  • Garcia, M. (2016). Racist in the machine: The disturbing implications of algorithmic bias. World Policy Journal, 33(4), 111–117.

  • Guild, E. (2019). Data rights: Searching for privacy rights through international institutions. In D. Bigo, E. Isin, & E. Ruppert (Eds.), Data politics: Worlds, subjects, rights (pp. 230–245). London: Routledge.

  • Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic bias: From discrimination discovery to fairness-aware data mining. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 2125–2126). New York: ACM.

  • Hardyns, W., & Rummens, A. (2018). Predictive policing as a new tool for law enforcement? Recent developments and challenges. European Journal of Criminal Policy Research, 24, 201–218.

  • Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51(1), 59–67.

  • High-Level Expert Group on Artificial Intelligence. (2019). The ethics guidelines for trustworthy AI. Retrieved March 3, 2019, from https://reurl.cc/RdM1gG.

  • Hitachi Inc. (2019). Hitachi provides an AI environment in research on Kanagawa prefecture police’s crime and traffic accident prediction techniques. Retrieved January 16, 2020, from https://reurl.cc/lL6d2E.

  • Hohwy, J. (2013). The predictive mind. New York: OUP.

  • Human Rights Watch. (2017). China: Police ‘big data’ systems violate privacy, target dissent. Retrieved June 25, 2019, from https://reurl.cc/A1Z8ld.

  • Human Rights Watch. (2018). China: Big data fuels crackdown in minority region. Retrieved June 25, 2019, from https://reurl.cc/Nae6om.

  • Human Rights Watch. (2019). World report 2019. Retrieved June 25, 2019, from https://reurl.cc/6g641d.

  • Kleene, S. C. (1943). Recursive predicates and quantifiers. Transactions of the American Mathematical Society, 53(1), 41–73.

  • Kreutzer, R. T., & Sirrenberg, M. (2020). Fields of application of artificial intelligence—Security sector and military sector. Understanding artificial intelligence (pp. 225–233). Cham: Springer.

  • Kulkarni, P., & Akhilesh, K. B. (2020). Big data analytics as an enabler in smart governance for the future smart cities. In Smart technologies (pp. 53–65). Singapore: Springer.

  • Lazreg, M. (2009). Questioning the veil: Open letters to Muslim women. Princeton: Princeton University Press.

  • Lerman, J. (2013). Big data and its exclusions. Stanford Law Review Online, 66, 55–63.

  • Levinson-Waldman, R., & Posey, E. (2018). Court: Public deserves to know how NYPD uses predictive policing software. Retrieved July 16, 2019, from https://reurl.cc/A1Z8Wd.

  • Lewis, M. K. (2011). Presuming innocence, or corruption, in China. Columbia Journal of Transnational Law, 50, 287–369.

  • Lin, Y., Hung, T., & Huang, T. L. (forthcoming). Engineering equity: How AI can help reduce the harm of implicit bias. Philosophy & Technology.

  • London Policing Ethics Panel. (2019). Final report on live facial recognition. Retrieved July 22, 2019, from https://reurl.cc/RdM17G.

  • Miller, S. (2017). Institutional responsibility. In M. Jankovic & K. Ludwig (Eds.), The Routledge handbook of collective intentionality (pp. 338–348). New York: Routledge.

  • Miller, S. (2018). Dual use science and technology, ethics and weapons of mass destruction. New York: Springer.

  • Miller, S., & Blackler, J. (2017). Ethical issues in policing. New York: Routledge.

  • Moses, L. B., & Chan, J. (2018). Algorithmic prediction in policing: Assumptions, evaluation, and accountability. Policing and Society, 28(7), 806–822.

  • Mozur, P., Kessel, J. M., & Chan, M. (2019). Made in China, exported to the world: The surveillance state. The New York Times (April 24, 2019). Retrieved Jan 4, 2020, from https://reurl.cc/zy9zje.

  • Myerson, R. B. (2006). Federalism and incentives for success in democracy. Quarterly Journal of Political Science, 1, 3–23.

  • Nishida, T. (2018). Kanagawa police to launch AI-based predictive policing system before olympics. Australasian Policing, 10(1), 43.

  • Nissan, E. (2017). Digital technologies and artificial intelligence’s present and foreseeable impact on lawyering, judging, policing and law enforcement. AI & Society, 32(3), 441–464.

  • Oosterloo, S., & van Schie, G. (2018). The politics and biases of the ‘crime anticipation system’ of the Dutch police. In Proceedings of the international workshop on bias in information, algorithms, and systems (BIAS 2018).

  • Orlandi, N. (2018). Predictive perceptual systems. Synthese, 195(6), 2367–2386.

  • Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security (pp. 506–519). New York: ACM.

  • Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood, J. S. (2013). Predictive policing: The role of crime forecasting in law enforcement operations. Rand Corporation. Retrieved Jan 16, 2020, from https://reurl.cc/QpQ3k0.

  • Prince, A., & Schwarcz, D. (2019). Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review, 105, 1257–1318.

  • Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review, 94, 192–233.

  • Room, R. (2005). Stigma, social inequality and alcohol and drug use. Drug and Alcohol Review, 24(2), 143–155.

  • Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint http://arxiv.org/abs/1708.08296.

  • Saunders, J., Hunt, P., & Hollywood, J. S. (2016). Predictions put into practice: A quasi-experimental evaluation of Chicago’s predictive policing pilot. Journal of Experimental Criminology, 12(3), 347–371.

  • Sheehey, B. (2019). Algorithmic paranoia: The temporal governmentality of predictive policing. Ethics and Information Technology, 21(1), 49–58.

  • Shahbaz, A. (2018). The rise of digital authoritarianism: Fake news, data collection and the challenge to democracy. Retrieved July 1, 2019, from https://reurl.cc/vnN1Oa.

  • Stanley, J. (2018). New Orleans program offers lessons in pitfalls of predictive policing. Retrieved Jan 15, 2020, from https://reurl.cc/Gk0r6d.

  • Suresh, H., & Guttag, J. V. (2019). A framework for understanding unintended consequences of machine learning. arXiv preprint http://arxiv.org/abs/1901.10002.

  • Swanson, L. R. (2016). The predictive processing paradigm has roots in Kant. Frontiers in Systems Neuroscience, 10, 79.

  • Sweeney, L. (2013). Discrimination in online Ad delivery. Queue, 11(3), 10.

  • Tamir, D. I., & Thornton, M. A. (2018). Modeling the predictive social mind. Trends in Cognitive Sciences, 22(3), 201–212.

  • Tisne, M. (2018). It’s time for a bill of data rights. MIT Technology Review (Dec 14, 2018). Retrieved Jan 6, 2020, from https://reurl.cc/vnN1zA.

  • Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society (Series 2), 2(42), 230–265.

  • Turing, A. M. (1937). Computability and λ-Definability. Journal of Symbolic Logic, 2(4), 153–163.

  • Tzourio-Mazoyer, N., De Schonen, S., Crivello, F., Reutter, B., Aujard, Y., & Mazoyer, B. (2002). Neural correlates of woman face processing by 2-month-old infants. Neuroimage, 15(2), 454–461.

  • Uchida, C. (2014). Predictive policing. In G. Bruinsma & D. Weisburd (Eds.), Encyclopedia of criminology and criminal justice (pp. 3871–3880). New York: Springer.

  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6), eaan6080.

  • Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8, 78–115.

Author information

Corresponding author

Correspondence to Chun-Ping Yen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Hung, TW., Yen, CP. On the person-based predictive policing of AI. Ethics Inf Technol 23, 165–176 (2021). https://doi.org/10.1007/s10676-020-09539-x
