
Controlling the Creators

A chapter in Robot Rules

Abstract

AI raises new ethical problems in terms of what decisions it should be allowed to take and how it should take those decisions. In order to resolve these, Turner argues, we need to create a forum for discussion across society, involving the public, the private sector and experts from different fields. Numerous groups around the world have proposed ethical codes aimed predominantly at the human designers of AI. Turner shows how, in order to implement any of these, it is necessary to take practical steps, which might include creating a uniform professional system of regulation and training for AI designers. Turner also suggests a regulatory code for members of the public interacting with AI, similar to a driving licence.


Notes

  1.

    The distinction is sometimes referred to as regulation ex ante (before the event) and ex post (after the event).

  2.

    John Markoff, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots (New York: ECCO, 2015).

  3.

    The term “AI engineers” is generally adopted in this book in preference to “programmers”, in order to avoid giving the impression that each AI decision is programmed or set by the human(s) in question, and because the term engineer connotes a wider class of activities than traditional programming.

  4.

    Morag Goodwin and Roger Brownsword, Law and the Technologies of the Twenty-First Century: Text and Materials (Cambridge: Cambridge University Press, 2012), 246.

  5.

    See, for example, Directive 2003/88/EC of the European Parliament and of the Council of 4 November 2003 concerning certain aspects of the organisation of working time, or Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.

  6.

    “Take Back Control” was the slogan of the Vote Leave campaign.

  7.

    See, for example, the website of the Vote Leave campaign: “In the EU, decisions are made by three key bodies; the European Commission (which is unelected), the Council of Ministers (where the UK is outvoted) and the European Parliament. This system is deliberately designed to concentrate power into the hands of a small number of unelected people and undermines democratic government”. Briefing, Taking Back Control from Brussels, http://www.voteleavetakecontrol.org/briefing_control.html, accessed 1 June 2018.

  8.

    For a critical but ultimately hopeful vision of Europe and its failure to engender a sense of shared identity, see Larry Siedentop, Democracy in Europe (London: Allen Lane, 2000).

  9.

    Hiroyuki Nitto, Daisuke Taniyama, and Hitomi Inagaki, “Social Acceptance and Impact of Robots and Artificial Intelligence—Findings of Survey in Japan, the U.S. and Germany”, Nomura Research Institute Papers, No. 2011, 1 February 2017, https://www.nri.com/~/media/PDF/global/opinion/papers/2017/np2017211.pdf, accessed 1 June 2018. The definition of “robots” in this survey was somewhat unclear, meaning that participants likely included both simple automation and what this book would define as true artificial intelligence in their responses.

  10.

    Sarah Castell, Daniel Cameron, Stephen Ginnis, Glenn Gottfried, and Kelly Maguire, “Public Views of Machine Learning: Findings from Public Research and Engagement Conducted on Behalf of the Royal Society”, Ipsos MORI, April 2017, https://royalsociety.org/~/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf, accessed 1 June 2018.

  11.

    Vyacheslav Polonski, “People Don’t Trust AI—Here’s How We Can Change That”, Scientific American, 10 January 2018, https://www.scientificamerican.com/article/people-dont-trust-ai-heres-how-we-can-change-that/, accessed 1 June 2018.

  12.

    The House of Lords Science and Technology Committee has commented: “For any new technology to succeed, the trust of consumers is vital. In the food sector gaining that trust is a particular challenge—as recently demonstrated by the public reaction to the introduction of technologies such as genetic modification and irradiation”, House of Lords Science and Technology Committee, First Report of Session 2009–10: Nanotechnologies and Food, s. 7.1.

  13.

    “What is genetic modification (GM) of crops and how is it done?” Website of the Royal Society, https://royalsociety.org/topics-policy/projects/gm-plants/what-is-gm-and-how-is-it-done, accessed 1 June 2018.

  14.

    Charles W. Schmidt, “Genetically Modified Foods: Breeding Uncertainty”, Environmental Health Perspectives, Vol. 113, No. 8 (August 2005), A526–A533.

  15.

    L. Frewer, J. Lassen, B. Kettlitz, J. Scholderer, V. Beekman, and K.G. Berdalf, “Societal Aspects of Genetically Modified Foods”, Food and Chemical Toxicology, Vol. 42 (2004), 1181–1193.

  16.

    Ibid.

  17.

    Andy Coghlan, “More Than Half of EU Officially Bans Genetically Modified Crops”, New Scientist, 5 October 2015, https://www.newscientist.com/article/dn28283-more-than-half-of-european-union-votes-to-ban-growing-gm-crops/, accessed 1 June 2018.

  18.

    Ibid. This was a weevil-resistant maize grown in Spain.

  19.

    “Recent Trends in GE [Genetically-Engineered] Adoption”, US Department of Agriculture, 17 July 2017, https://www.ers.usda.gov/data-products/adoption-of-genetically-engineered-crops-in-the-us/recent-trends-in-ge-adoption.aspx, accessed 1 June 2018.

  20.

    Melissa L. Finucane and Joan L. Holup, “Psychosocial and Cultural Factors Affecting the Perceived Risk of Genetically Modified Food: An Overview of the Literature”, Social Science & Medicine, Vol. 60 (2005), 1603–1612.

  21.

    L. Frewer, C. Howard, and R. Shepherd, “Public Concerns About General and Specific Applications of Genetic Engineering: Risk, Benefit and Ethics”, Science, Technology, & Human Values, Vol. 22 (1997), 98–124.

  22.

    Roger N. Beachy, “Facing Fear of Biotechnology”, Science, Vol. 285 (1999), 335.

  23.

    Melissa L. Finucane and Joan L. Holup, “Psychosocial and Cultural Factors Affecting the Perceived Risk of Genetically Modified Food: An Overview of the Literature”, Social Science & Medicine, Vol. 60 (2005), 1603–1612, 1608.

  24.

    Lin Fu, “What China’s Food Safety Challenges Mean for Consumers, Regulators, and the Global Economy”, The Brookings Institution, 21 April 2016.

  25.

    Ibid.

  26.

    See also the discussion at s. 4.9 of this chapter of the January 2018 White Paper prepared by a division of China’s Ministry of Industry and Information Technology and its observation at para. 3.3 that “[i]n the case of AI technology, issues of safety, ethics and privacy have a direct impact on people’s trust in AI technology in their interaction experience with AI tools.”: “White Paper on Standardization in AI”, National Standardization Management Committee, Second Ministry of Industry, 18 January 2018, http://www.sgic.gov.cn/upload/f1ca3511-05f2-43a0-8235-eeb0934db8c7/20180122/5371516606048992.pdf, accessed 1 June 2018.

  27.

    Ulrich Beck, “The Reinvention of Politics: Towards a Theory of Reflexive Modernization”, in Reflexive Modernization: Politics, Tradition and Aesthetics in the Modern Social Order, edited by Ulrich Beck, Anthony Giddens, and Scott Lash (Cambridge: Polity Press, 1994), 1–55.

  28.

    Jean-Jacques Rousseau, The Social Contract, edited and translated by Victor Gourevitch (Cambridge: Cambridge University Press, 1997), Book 2, 4.

  29.

    Human Rights Committee General Comment No. 25: CCPR/C/21/Rev.1/Add.7, 12 July 1996.

  30.

    Morag Goodwin and Roger Brownsword, Law and the Technologies of the Twenty-First Century: Text and Materials (Cambridge: Cambridge University Press, 2012), 262.

  31.

    This justification for free speech was set out in the writings of John Stuart Mill and was invoked by Justice Oliver Wendell Holmes in a celebrated dissent in the US Supreme Court Case Abrams v. United States, 250 U.S. 616 (1919), at 630.

  32.

    John Rawls, A Theory of Justice: Revised Edition (Oxford: Oxford University Press, 1999). See also Jürgen Habermas, “Reconciliation Through the Public Use of Reason: Remarks on John Rawls’s Political Liberalism”, The Journal of Philosophy, Vol. 92, No. 3 (1995), 109–131.

  33.

    Morag Goodwin and Roger Brownsword, Law and the Technologies of the Twenty-First Century: Text and Materials (Cambridge: Cambridge University Press, 2012), 255.

  34.

    See further Chapter 8 at s. 3.3.1.

  35.

    Resources are available on the website of the UK’s All Party Parliamentary Group on AI, http://www.appg-ai.org/, accessed 1 June 2018.

  36.

    “‘Notice-and-Comment’ Rulemaking”, Center for Effective Government, https://www.foreffectivegov.org/node/2578, accessed 1 June 2018. For discussion see D.J. Galligan, “Citizens’ Rights and Participation in the Regulation of Biotechnology”, in Biotechnologies and International Human Rights, edited by Francesco Francioni (Oxford: Hart Publishing, 2007).

  37.

    European Parliament Research Service, “Summary of the Public Consultation on the Future of Robotics and Artificial Intelligence (AI) with an Emphasis on Civil Law Rules”, October 2017, accessed 1 June 2018.

  38.

    Tatjana Evas, “Public Consultation on Robotics and Artificial Intelligence First (Preliminary) Results of Public Consultation”, European Parliament Research Service, 13 July 2017, http://www.europarl.europa.eu/cmsdata/128665/eprs-presentation-first-results-consultation-robotics.pdf, accessed 1 June 2018.

  39.

    “What Is Open Roboethics Institute?”, ORI Website, http://www.openroboethics.org/about/, accessed 1 June 2018.

  40.

    “Would You Trust a Robot to Take Care of Your Grandma?”, ORI Website, http://www.openroboethics.org/would-you-trust-a-robot-to-take-care-of-your-grandma/, accessed 1 June 2018.

  41.

    “Homepage”, Moral Machine Website, http://moralmachine.mit.edu/, accessed 1 June 2018.

  42.

    Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles”, Science, Vol. 352, No. 6293 (2016), 1573–1576; Ritesh Noothigattu, Snehalkumar ‘Neil’ S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel D. Procaccia, “A Voting-Based System for Ethical Decision Making”, arXiv:1709.06692v1 [cs.AI], accessed 1 June 2018.

  43.

    Oliver Smith, “A Huge Global Study On Driverless Car Ethics Found the Elderly Are Expendable”, Forbes, 21 March 2018, https://www.forbes.com/sites/oliversmith/2018/03/21/the-results-of-the-biggest-global-study-on-driverless-car-ethics-are-in/#7fbb629f4a9f, accessed 1 June 2018.

  44.

    Joel D’Silva and Geert van Calster, “For Me to Know and You to Find Out? Participatory Mechanisms, the Aarhus Convention and New Technologies”, Studies in Ethics, Law, and Technology, Vol. 4, No. 2 (2010).

  45.

    Strictly speaking, DeepMind is UK-based, though it is a subsidiary of Alphabet, the US-based parent of Google.

  46.

    “Homepage”, Website of the Partnership on AI, https://www.partnershiponai.org/, accessed 1 June 2018. The Partnership’s governing board now includes six representatives from for-profit organisations and six from not-for-profit ones. See “Frequently Asked Questions: Who Runs PAI Today?”. At the time of writing, the Executive Director of the Partnership is Terah Lyons, a former Policy Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy. Notwithstanding this formal balance between companies and NGOs, it remains to be seen whether the Partnership will present any real challenge to the major technology firms.

  47.

    “Regulatory Sandbox”, FCA Website, 14 February 2018, https://www.fca.org.uk/firms/regulatory-sandbox, accessed 1 June 2018.

  48.

    “FinTech Regulatory Sandbox”, Monetary Authority of Singapore Website, 1 September 2017, http://www.mas.gov.sg/Singapore-Financial-Centre/Smart-Financial-Centre/FinTech-Regulatory-Sandbox.aspx, accessed 1 June 2018.

  49.

    See Geoff Mulgan, “Anticipatory Regulation: 10 Ways Governments Can Better Keep Up with Fast-Changing Industries”, Nesta Website, 15 May 2017, https://www.nesta.org.uk/blog/anticipatory-regulation-10-ways-governments-can-better-keep-up-with-fast-changing-industries/, accessed 1 June 2018.

  50.

    FCA, Regulatory Sandbox Lessons Learned Report, October 2017, para. 4.1, https://www.fca.org.uk/publication/research-and-data/regulatory-sandbox-lessons-learned-report.pdf, accessed 1 June 2018.

  51.

    Ibid., para. 4.16.

  52.

    Some national standards bodies have promulgated their own AI guidance, such as the British Standards Institute’s BS 8611:2016 on “Robots and robotic devices - Guide to the ethical design and application of robots and robotic systems”. These ought also to be factored into any standard-setting conversation internationally.

  53.

    “Artificial Intelligence”, Website of the National Institute of Standards and Technology, https://www.nist.gov//topics/artificial-intelligence, accessed 1 June 2018.

  54.

    Readers may have noted that the acronym “ISO” does not match the organisation’s full name; this is deliberate: “ISO” is derived from the Greek word isos (equal) and remains the same across all languages. “ISO and Road Vehicles—Great Things Happen When the World Agrees”, ISO, September 2016, 2, https://www.iso.org/files/live/sites/isoorg/files/archive/pdf/en/iso_and_road-vehicles.pdf, accessed 1 June 2018.

  55.

    “About the ACM Organization”, Website of the Association for Computing Machinery, https://www.acm.org/about-acm/about-the-acm-organization, accessed 2 July 2018.

  56.

    See, for example, “ISO and Road Vehicles—Great Things Happen When the World Agrees”, ISO, September 2016, https://www.iso.org/files/live/sites/isoorg/files/archive/pdf/en/iso_and_road-vehicles.pdf, accessed 1 June 2018.

  57.

    “About IEEE”, Website of IEEE, https://www.ieee.org/about/about_index.html, accessed 1 June 2018.

  58.

    “About ISO”, Website of ISO, https://www.iso.org/about-us.html, accessed 1 June 2018.

  59.

    Report of the Committee of Inquiry into Human Fertilisation and Embryology, July 1984, Cmnd. 9314, ii–iii.

  60.

    Ibid., 4.

  61.

    Ibid., 2–3.

  62.

    Ibid., 75–76.

  63.

    “About Us”, Website of the HFEA, https://www.hfea.gov.uk/about-us/, accessed 1 June 2018.

  64.

    “Cabinet Members: Minister of State for Artificial Intelligence”, Website of the Government of the UAE, https://uaecabinet.ae/en/details/cabinet-members/his-excellency-omar-bin-sultan-al-olama, accessed 11 June 2018. See also “UAE Strategy for Artificial Intelligence”, Website of the Government of the UAE, https://government.ae/en/about-the-uae/strategies-initiatives-and-awards/federal-governments-strategies-and-plans/uae-strategy-for-artificial-intelligence, accessed 1 June 2018.

  65.

    Anna Zacharias, “UAE Cabinet Forms Artificial Intelligence Council”, The UAE National, https://www.thenational.ae/uae/uae-cabinet-forms-artificial-intelligence-council-1.710376, accessed 1 June 2018.

  66.

    Dom Galeon, “An Inside Look at the First Nation with a State Minister for Artificial Intelligence”, Futurism, https://futurism.com/uae-minister-artificial-intelligence/, accessed 1 June 2018.

  67.

    Ibid.

  68.

    APPG on AI, “APPG on AI: Findings 2017”, http://www.appg-ai.org/wp-content/uploads/2017/12/appgai_2017_findings.pdf, accessed 1 June 2018.

  69.

    “EURON Roboethics Roadmap”, July 2006, 6, http://www.roboethics.org/atelier2006/docs/ROBOETHICS%20ROADMAP%20Rel2.1.1.pdf, accessed 1 June 2018.

  70.

    Ibid., 6–7.

  71.

    “Principles of Robotics”, EPSRC Website, https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/, accessed 1 June 2018.

  72.

    Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby, and Alan Winfield, “Principles of Robotics: Regulating Robots in the Real World”, Connection Science, Vol. 29, No. 2 (2017), 124–129.

  73.

    Its founding members include the Conference of Engineering College and Training Directors, the French Atomic Energy Commission, the French National Centre for Scientific Research, the Conference of University Chairmen, the French National Institute for computer science and applied mathematics and the Institut Télécom. “Foundation of Allistene, the Digital Sciences and Technologies Alliance”, Website of Inria, https://www.inria.fr/en/news/mediacentre/foundation-of-allistene?mediego_ruuid=4e8613ea-7f23-4d58-adfe-c01885f10420_2, accessed 1 June 2018.

  74.

    “Cerna”, Website of Allistene, https://www.allistene.fr/cerna-2/, accessed 1 June 2018.

  75.

    “CERNA Éthique de la recherche en robotique”: First Report of CERNA, CERNA, http://cerna-ethics-allistene.org/digitalAssets/38/38704_Avis_robotique_livret.pdf, accessed 3 February 2018. The CERNA researchers used a definition of robots which is roughly co-extensive with that adopted in this book.

  76.

    The first, a general section, dealt with matters common to all high-profile emerging technologies; because it was not tailored specifically to AI or robotics, it will not be discussed further here. The CERNA principles also include six recommendations for robots which imitate living entities and engage in emotional and social interactions with humans, as well as for medical robots. Both of these topics are too narrow to qualify as general ethical codes and are therefore not discussed further here.

  77.

    “CERNA Éthique de la recherche en robotique”: First Report of CERNA, CERNA, 34–35, http://cerna-ethics-allistene.org/digitalAssets/38/38704_Avis_robotique_livret.pdf, accessed 1 June 2018.

  78.

    The term “Recombinant” refers to the practice of attaching DNA from one organism to DNA of another, with the potential for creating organisms displaying traits from these multiple sources. See Paul Berg, “Asilomar and Recombinant DNA”, Official Website of the Nobel Prize, https://www.nobelprize.org/nobel_prizes/chemistry/laureates/1980/berg-article.html, accessed 1 June 2018.

  79.

    Paul Berg, David Baltimore, Sydney Brenner, Richard O. Roblin III, and Maxine F. Singer, “Summary Statement of the Asilomar Conference on Recombinant DNA Molecules”, Proceedings of the National Academy of Sciences, Vol. 72, No. 6 (June 1975), 1981–1984, 1981.

  80.

    Paul Berg, “Asilomar and Recombinant DNA”, Official Website of the Nobel Prize, https://www.nobelprize.org/nobel_prizes/chemistry/laureates/1980/berg-article.html, accessed 1 June 2018.

  81.

    “A principled AI Discussion in Asilomar”, Future of Life Institute, 17 January 2017, https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/, accessed 1 June 2018.

  82.

    90% approval from participants was required in order for a principle to be adopted in the final set.

  83.

    “Asilomar AI Principles”, Future of Life Institute, https://futureoflife.org/ai-principles/, accessed 1 June 2018.

  84.

    Jeffrey Ding, “Deciphering China’s AI Dream”, Governance of AI Program, Future of Humanity Institute (Oxford: Future of Humanity Institute, March 2018), 30, https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf, accessed 1 June 2018.

  85.

    Anonymous comment made in discussion with the author, January 2018. Even fewer participants were non-native English speakers working in countries which were not English-speaking.

  86.

    Jack Stilgoe and Andrew Maynard, “It’s Time for Some Messy, Democratic Discussions About the Future of AI”, The Guardian, 1 February 2017, https://www.theguardian.com/science/political-science/2017/feb/01/ai-artificial-intelligence-its-time-for-some-messy-democratic-discussions-about-the-future, accessed 1 June 2018.

  87.

    EAD v2 follows from an initial version (“EAD v1”), published in December 2016, and reflects feedback on that initial document, http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf, accessed 1 June 2018.

  88.

    IEEE, EAD v2 website, https://ethicsinaction.ieee.org/, accessed 1 June 2018.

  89.

    The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems”, Version 2. IEEE, 2017, 2, http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html, accessed 1 June 2018.

  90.

    Ibid., 25–26.

  91.

    Ibid., 28.

  92.

    Ibid., 29–30.

  93.

    Ibid., 32–33.

  94.

    In addition to setting standards for human technology designers, the IEEE Global Initiative aims to embed values into autonomous systems and acknowledges the prior need to “identify the norms of the specific community in which the systems are to be deployed and, in particular, norms relevant to the kinds of tasks that they are designed to perform”. Ibid., 11.

  95.

    See, for example, ibid., 150.

  96.

    Satya Nadella, “The Partnership of the Future”, Slate, 28 June 2016, http://www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html, accessed 1 June 2018.

  97.

    James Vincent, “Satya Nadella’s Rules for AI Are More Boring (and Relevant) Than Asimov’s Three Laws”, The Verge, 29 June 2016, https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws, accessed 1 June 2018.

  98.

    Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (Redmond, WA: Microsoft Corporation: U.S.A., 2018), 57, https://msblob.blob.core.windows.net/ncmedia/2018/01/The-Future_Computed_1.26.18.pdf, accessed 1 June 2018.

  99.

    “European Parliament—Overview”, Website of the European Union, https://europa.eu/european-union/about-eu/institutions-bodies/european-parliament_en, accessed 1 June 2018.

  100.

    The right of the European Parliament to request that the Commission propose legislation is now found in art. 225 of the Treaty on the Functioning of the European Union (as amended by the Treaty of Lisbon).

  101.

    European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), art. 65.

  102.

    Ibid., Annex to the motion for a resolution: detailed recommendations as to the content of the proposal requested.

  103.

    Ibid.

  104.

    G7 refers to the “Group of 7” countries. It consists of Canada, France, Germany, Italy, Japan, the UK and the USA. The EU is also represented at summits. These principles were distributed by Minister Takaichi at the G7 ICT Ministers’ Meeting in Takamatsu, Kagawa held on 29–30 April 2016. See: https://www.kagawa-mice.jp/en/g7.html, accessed 1 June 2018; and, for Minister Takaichi’s presentation materials, http://www.soumu.go.jp/joho_kokusai/g7ict/english/main_content/ai.pdf, accessed 1 June 2018.

  105.

    “Towards Promotion of International Discussion on AI Networking”, Japan Ministry of Internal Affairs and Communications, http://www.soumu.go.jp/main_content/000499625.pdf (Japanese version), http://www.soumu.go.jp/main_content/000507517.pdf (English version), accessed 1 June 2018.

  106.

    Ibid.

  107.

    Yutaka Matsuo, Toyoaki Nishida, Koichi Hori, Hideaki Takeda, Satoshi Hase, Makoto Shiono, Hiroshitakashi Hattori, Yusuna Ema, and Katsue Nagakura, “Artificial Intelligence and Ethics”, Artificial Intelligence Journal, Vol. 31, No. 5 (2016), 635–641; Fumio Shimpo, “The Principal Japanese AI and Robot Strategy and Research toward Establishing Basic Principles”, Journal of Law and Information Systems, Vol. 3 (May 2018).

  108.

    Fumio Shimpo, “The Principal Japanese AI and Robot Strategy and Research toward Establishing Basic Principles”, Journal of Law and Information Systems, Vol. 3 (May 2018).

  109.

    Available in English translation from the New America Institute: “A Next Generation Artificial Intelligence Development Plan”, China State Council, Rogier Creemers, Leiden Asia Centre; Graham Webster, Yale Law School Paul Tsai China Center; Paul Triolo, Eurasia Group; and Elsa Kania trans. (Washington, DC: New America, 2017), https://na-production.s3.amazonaws.com/documents/translation-fulltext-8.1.17.pdf, accessed 1 June 2018. See for discussion Chapter 6 at s. 4.6.

  110.

    National Standardization Management Committee, Second Ministry of Industry, “White Paper on Standardization in AI”, translated by Jeffrey Ding, 18 January 2018 (the “White Paper”) http://www.sgic.gov.cn/upload/f1ca3511-05f2-43a0-8235-eeb0934db8c7/20180122/5371516606048992.pdf, accessed 9 April 2018. Contributors to the White Paper included: the China Electronics Standardization Institute, Institute of Automation, Chinese Academy of Sciences, Beijing Institute of Technology, Tsinghua University, Peking University, Renmin University, as well as private companies Huawei, Tencent, Alibaba, Baidu, Intel (China) and Panasonic (formerly Matsushita Electric) (China) Co., Ltd.

  111.

    Ibid., para. 3.3.3.

  112.

    Ibid., para. 3.4.

  113.

    Ibid., para. 3.3.2.

  114.

    Ibid.

  115.

    Ibid.

  116.

    Ibid., para. 3.3.1.

  117.

    Ibid., para. 4.5.

  118.

    For instance, Jeffrey Ding notes that there are “common misperceptions of China’s relatively lax privacy protections”. See Jeffrey Ding, “Deciphering China’s AI Dream”, Governance of AI Program, Future of Humanity Institute (Oxford: Future of Humanity Institute, March 2018), 19, https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf, accessed 1 June 2018.

  119.

    White Paper, para. 3.3.3.

  120.

    “Guild: Trade Association”, Encyclopaedia Britannica, https://www.britannica.com/topic/guild-trade-association, accessed 1 June 2018.

  121.

    Avner Greif, Paul Milgrom, and Barry R. Weingast, “Coordination, Commitment, and Enforcement: the Case of the Merchant Guild”, Journal of Political Economy, Vol. 102 (1994), 745–776.

  122.

    Roberta Dessi and Sheilagh Ogilvie, “Social Capital and Collusion: The Case of Merchant Guilds” (2004), CESifo Working Paper No. 1037. Dessi and Ogilvie do not endorse guilds as an entirely beneficial institution, but they do acknowledge the social norms which guilds created.

  123.

    Richard and Daniel Susskind, The Future of The Professions (Oxford: Oxford University Press, 2015).

  124.

    Ludwig Edelstein, The Hippocratic Oath: Text, Translation and Interpretation (Baltimore: Johns Hopkins Press, 1943), 56.

  125.

    “Hippocratic Oath”, Encyclopaedia Britannica, https://www.britannica.com/topic/Hippocratic-oath, accessed 1 June 2018, quoting translation from Greek by Francis Adams (1849).

  126.

    Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (Redmond, WA: Microsoft Corporation, 2018), 8–9, https://msblob.blob.core.windows.net/ncmedia/2018/01/The-Future_Computed_1.26.18.pdf, accessed 1 June 2018. In March 2018, Oren Etzioni of AI2 responded to Microsoft’s book by proposing a draft text for an AI practitioners’ Hippocratic Oath. See Oren Etzioni, “A Hippocratic Oath for Artificial Intelligence Practitioners”, TechCrunch, https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/, accessed 1 June 2018.

  127.

    Eric Schmidt and Jonathan Rosenberg, How Google Works (London: Hachette UK, 2014).

  128.

    Leo Mirani, “What Google Really Means by ‘Don’t Be Evil’”, Quartz, 21 October 2014, https://qz.com/284548/what-google-really-means-by-dont-be-evil/, accessed 1 June 2018.

  129.

    Eric Schmidt and Jonathan Rosenberg, How Google Works (London: Hachette UK, 2014).

  130.

    The text of the letter is available at: https://static01.nyt.com/files/2018/technology/googleletter.pdf, accessed 1 June 2018.

  131.

    Scott Shane and Daisuke Wakabayashi, “‘The Business of War’: Google Employees Protest Work for the Pentagon”, The New York Times, 4 April 2018, https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html, accessed 1 June 2018.

  132.

    Letter from various Google employees to Sundar Pichai, https://static01.nyt.com/files/2018/technology/googleletter.pdf, accessed 1 June 2018.

  133.

    Hannah Kuchler, “How Workers Forced Google to Drop Its Controversial ‘Project Maven’”, Financial Times, 27 June 2018, https://www.ft.com/content/bd9d57fc-78cf-11e8-bc55-50daf11b720d, accessed 2 July 2018.

  134.

    Sundar Pichai, “AI at Google: Our Principles”, Google website, 7 June 2018, https://blog.google/technology/ai/ai-principles/, accessed 2 July 2018.

  135.

    For a similar proposal, see Joanna J. Bryson, “A Proposal for the Humanoid Agent-Builders League (HAL)”, Proceedings of the AISB 2000 Symposium on Artificial Intelligence, Ethics and (Quasi-) Human Rights, edited by John Barnden (2000), http://www.cs.bath.ac.uk/~jjb/ftp/HAL00.html, accessed 1 June 2018.

  136.

    “Homepage”, Website of Federation of State Medical Boards, http://www.fsmb.org/licensure/spex_plas/, accessed 1 June 2018.

  137.

    As to the difficulties faced by foreign doctors, even those from countries with high-quality health systems, in practising in the USA, see, for example, “Working in the USA”, Website of the British Medical Association, https://www.bma.org.uk/advice/career/going-abroad/working-abroad/usa, accessed 1 June 2018.

  138.

    Directive 2005/36/EC of the European Parliament and Council of 7 September 2005.

  139.

    See below at s. 7 of this chapter.

  140.

    See generally The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation, edited by George J. Annas and Michael A. Godin (Oxford: Oxford University Press, 1992).

  141.

    Michael Ryan, Doctors and the State in the Soviet Union (New York: Palgrave Macmillan, 1990), 131.

  142.

    Anthony Lewis, “Abroad at Home; A Question of Confidence”, New York Times, 19 September 1985, http://www.nytimes.com/1985/09/19/opinion/abroad-at-home-a-question-of-confidence.html, accessed 1 June 2018.

  143.

    “2017 Global AI Talent White Paper”, Tencent Research Institute, http://www.tisi.org/Public/Uploads/file/20171201/20171201151555_24517.pdf, accessed 20 February 2018. See also James Vincent, “Tencent Says There Are Only 300,000 AI Engineers Worldwide, but Millions Are Needed”, The Verge, 5 December 2017, https://www.theverge.com/2017/12/5/16737224/global-ai-talent-shortfall-tencent-report, accessed 1 June 2018. By contrast, PWC estimates that in the USA alone there will be 2.9 million people with data science and analytics skills by 2018. Not all will be AI professionals per se, but many of their skills will overlap. “What’s Next for the 2017 Data Science and Analytics Job Market?”, PWC Website, https://www.pwc.com/us/en/library/data-science-and-analytics.html, accessed 1 June 2018.

  144.

    Katja Grace, “The Asilomar Conference: A Case Study in Risk Mitigation”, MIRI Research Institute, Technical Report, 2015–9 (Berkeley, CA: MIRI, 15 July 2015), 15.

  145.

    A constantly-updated database of tech ethics curricula is available at: https://docs.google.com/spreadsheets/d/1jWIrA8jHz5fYAW4h9CkUD8gKS5V98PDJDymRf8d9vKI/edit#gid=0, accessed 1 June 2018.

  146.

    Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (Redmond, WA: Microsoft Corporation, U.S.A., 2018), 55, https://msblob.blob.core.windows.net/ncmedia/2018/01/The-Future_Computed_1.26.18.pdf, accessed 1 June 2018.

  147.

    See, for example, s. 1 of the UK Road Traffic Act 1988, or s. 249(1)(a) of the Canadian Criminal Code.

  148.

    “About TensorFlow”, Website of TensorFlow, https://www.tensorflow.org/, accessed 1 June 2018.

  149.

    See, for example, the UK Government’s “Guidance: Wine Duty”, 9 November 2009, https://www.gov.uk/guidance/wine-duty, accessed 1 June 2018.

  150.

    See, for example, Max Weber, “Politics as a Vocation”, in From Max Weber: Essays in Sociology, translated by H.H. Gerth and C. Wright Mills (New York: Oxford University Press, 1946).

  151.

    “Firearms-Control Legislation and Policy: European Union”, Library of Congress, https://www.loc.gov/law/help/firearms-control/eu.php, accessed 1 June 2018.

  152.

    “1996: Massacre in Dunblane School Gym”, BBC Website, http://news.bbc.co.uk/onthisday/hi/dates/stories/march/13/newsid_2543000/2543277.stm, accessed 19 February 2018. The UK Firearms (Amendment) Act 1997 and the Firearms (Amendment) (No. 2) Act 1997 banned almost all handguns from private ownership and use.

  153.

    “We Banned the Guns That Killed School Children in Dunblane. Here’s How”, New Statesman, 16 February 2018, https://www.newstatesman.com/politics/uk/2018/02/we-banned-guns-killed-school-children-dunblane-here-s-how, accessed 1 June 2018.

Author information


Correspondence to Jacob Turner.


Copyright information

© 2019 The Author(s)

About this chapter


Cite this chapter

Turner, J. (2019). Controlling the Creators. In: Robot Rules. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-96235-1_7


  • DOI: https://doi.org/10.1007/978-3-319-96235-1_7


  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-319-96234-4

  • Online ISBN: 978-3-319-96235-1

  • eBook Packages: Engineering, Engineering (R0)
