Risks of the Journey to the Singularity

The Technological Singularity

Part of the book series: The Frontiers Collection ((FRONTCOLL))

Summary

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. Unlike current AI systems, individual AGIs would be capable of learning to operate in a wide variety of domains, including ones they had not been specifically designed for. It has been proposed that AGIs might eventually pose a significant risk to humanity, for they could accumulate substantial power and influence in society while being indifferent to what humans valued. The accumulation of power might either happen gradually over time, or it might happen very rapidly (a so-called “hard takeoff”). Gradual accumulation would happen through normal economic mechanisms, as AGIs came to carry out an increasing share of economic tasks. A hard takeoff could be possible if AGIs required significantly less hardware to run than was available, or if they could redesign themselves to run at ever faster speeds, or if they could repeatedly redesign themselves into more intelligent versions of themselves.

Notes

  1.

    Unlike the term “human-level AI,” the term “Artificial General Intelligence” does not necessarily presume that the intelligence will be human-like.

  2.

    For this paper, we use a binary distinction between narrow AI and AGI. This is merely for the sake of simplicity; we do not assume that the actual difference between the two categories is necessarily so clean-cut.

  3.

    A catastrophic risk is something that might inflict serious damage to human well-being on a global scale and cause ten million or more fatalities (Bostrom and Ćirković 2008). An existential risk is one that threatens human extinction (Bostrom 2002). Many writers argue that AGI might be a risk of such magnitude (Butler 1863; Wiener 1960; Good 1965; Vinge 1993; Joy 2000; Yudkowsky 2008a; Bostrom 2014).

  4.

    On the less serious front, see http://www.michaeleisen.org/blog/?p=358 for an amusing example of automated trading going awry.

  5.

    In practice, there have been two separate communities doing research on automated moral decision-making (Muehlhauser and Helm 2012a, b; Allen and Wallach 2012; Shulman et al. 2009). The “AI risk” community has concentrated specifically on advanced AGIs (e.g. Yudkowsky 2008a; Bostrom 2014), while the “machine ethics” community typically has concentrated on more immediate applications for current-day AI (e.g. Wallach et al. 2008; Anderson and Anderson 2011). In this chapter, we have cited the machine ethics literature only where it seemed relevant, leaving out papers that seemed to be too focused on narrow-AI systems for our purposes. In particular, we have left out most discussions of military machine ethics (Arkin 2009), which focus primarily on the constrained special case of creating systems that are safe for battlefield usage.

  6.

    Miller (2012) similarly notes that, despite a common belief to the contrary, it is impossible to write laws in a manner that would match our stated moral principles without a judge needing to use a large amount of implicit common-sense knowledge to correctly interpret them. “Laws shouldn’t always be interpreted literally because legislators can’t anticipate all possible contingencies. Also, humans’ intuitive feel for what constitutes murder goes beyond anything we can commit to paper. The same applies to friendliness.” (Miller 2012).

  7.

    Bugaj and Goertzel defined hard takeoff to refer to a period of months or less. We have chosen a somewhat longer time period, as even a few years might easily turn out to be too little time for society to properly react.

  8.

    Bostrom (2014, chap. 3) discusses three kinds of superintelligence. A speed superintelligence “can do all that a human intellect can do, but much faster”. A collective superintelligence is “a system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system”. A quality superintelligence “is at least as fast as a human mind and vastly qualitatively smarter”. These can be seen as roughly corresponding to the different kinds of hard takeoff scenarios. A speed explosion implies a speed superintelligence, an intelligence explosion a quality superintelligence, and a hardware overhang may lead to any combination of speed, collective, and quality superintelligence.

  9.

    Bostrom (1998) estimates that the effective computing capacity of the human brain might be somewhere around 10^17 operations per second (OPS), and Moravec (1998) estimates it at 10^14 OPS. As of June 2016, the fastest supercomputer in the world had achieved a top capacity of 10^16 floating-point operations per second (FLOPS) and the five-hundredth fastest a top capacity of 10^14 FLOPS (Top500 2016). Note however that OPS and FLOPS are not directly comparable and there is no reliable way of interconverting the two. Sandberg and Bostrom (2008) estimate that OPS and FLOPS grow at a roughly comparable rate.
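
    To make these orders of magnitude easier to compare, here is a minimal arithmetic sketch in Python. The figures are simply the estimates quoted above; since OPS and FLOPS are not directly comparable, the printed ratios should be read only as rough indications.

```python
# Order-of-magnitude comparison of the brain-capacity estimates and
# supercomputer figures quoted in this note. OPS and FLOPS are not directly
# comparable, so the ratios are rough indications, not precise claims.

brain_estimates_ops = {
    "Bostrom (1998)": 1e17,
    "Moravec (1998)": 1e14,
}
supercomputers_flops = {
    "Top500 #1, June 2016": 1e16,
    "Top500 #500, June 2016": 1e14,
}

for brain, ops in brain_estimates_ops.items():
    for machine, flops in supercomputers_flops.items():
        print(f"{brain} vs. {machine}: ratio ≈ {ops / flops:.0e}")
```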

  10.

    The speed that would allow AGIs to take over most jobs would depend on the cost of the hardware and the granularity of the software upgrades. A series of upgrades over an extended period, each producing a 1% improvement, would lead to a more gradual transition than a single upgrade that brought the software from the capability level of a chimpanzee to rough human equivalence. Note also that several companies, including Amazon and Google, offer vast amounts of computing power for rent on an hourly basis. An AGI that acquired money and then invested all of it in renting a large amount of computing resources for a brief period could temporarily achieve a much larger boost than its budget would otherwise suggest.
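
    As a back-of-the-envelope illustration of these two points, consider the sketch below; the numbers (a hypothetical 10x upgrade, a year-long compute budget) are invented for the example rather than taken from the chapter.

```python
import math

# Illustrative arithmetic only; the specific numbers are hypothetical.

# 1. Granularity of upgrades: how many compounding 1% improvements does it
#    take to match one large jump in capability?
jump_factor = 10.0                                # a single hypothetical 10x upgrade
n_small = math.log(jump_factor) / math.log(1.01)  # solve 1.01**n == jump_factor
print(f"~{n_small:.0f} successive 1% upgrades compound to a {jump_factor:.0f}x gain")

# 2. Burst rental: a budget that sustains one unit of rented compute for a
#    whole year buys roughly 365 units if spent on a single day instead
#    (ignoring any price difference between sustained and on-demand rates).
print("Spending a year's compute budget in one day gives roughly a 365x burst")
```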

  11.

    Botnets are networks of computers that have been compromised by outside attackers and are used for illegitimate purposes. Rajab et al. (2007) review several studies which estimate the sizes of the largest botnets as being between a few thousand and 350,000 bots. Modern-day malware could theoretically infect any susceptible Internet-connected machine within tens of seconds of its initial release (Staniford et al. 2002). The Slammer worm successfully infected more than 90% of vulnerable hosts within ten minutes, and had infected at least 75,000 machines by the thirty-minute mark (Moore et al. 2003). The previous record holder in speed, the Code Red worm, took fourteen hours to infect more than 359,000 machines (Moore et al. 2002).
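
    The growth dynamics behind these figures can be conveyed with a toy susceptible–infected model of a randomly scanning worm; the population size and contact rate below are invented for illustration and are not calibrated to any of the incidents cited above.

```python
# Toy susceptible-infected model of a randomly scanning worm. Parameters are
# illustrative only and are not calibrated to Slammer or Code Red.

vulnerable_hosts = 75_000   # roughly the scale of the Slammer population above
contact_rate = 0.05         # expected new infections per infected host per second

infected = 1.0
seconds = 0
while infected < 0.9 * vulnerable_hosts:
    # Each infected host recruits new victims in proportion to the share of
    # the vulnerable population that is still uninfected (logistic growth).
    infected += contact_rate * infected * (1.0 - infected / vulnerable_hosts)
    seconds += 1

print(f"Toy model reaches 90% of vulnerable hosts after ~{seconds / 60:.1f} minutes")
```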

  12.

    Loosemore and Goertzel (2012) also suggest that companies currently carrying out research and development are more constrained by a lack of capable researchers than by limits on their ability to carry out physical experiments.

  13.

    Most accounts of this scenario do not give exact definitions for “intelligence” or explain what a “superintelligent” AGI would be like, instead using informal characterizations such as “a machine that can surpass the intellectual activities of any man however clever” (Good 1965) or “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” (Bostrom 1998). Yudkowsky (2008a) defines intelligence in relation to “optimization power,” the ability to reliably hit small targets in large search spaces, such as by finding the a priori exceedingly unlikely organization of atoms which makes up a car. A more mathematical definition of machine intelligence is offered by Legg and Hutter (2007). Sotala (2012) discusses some of the functional routes to actually achieving superintelligence.
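
    One informal way to make the “small targets in large search spaces” idea concrete is to count bits of selection, as in the sketch below; this is an illustration in the spirit of Yudkowsky's discussion, not a definition quoted from any of the cited works.

```python
import math

# Illustration only: measure optimization as the number of bits of selection
# needed to single out outcomes as rare as the one achieved.

def optimization_bits(target_fraction: float) -> float:
    """Bits of selection needed to land in a target region that occupies
    `target_fraction` of the outcome space (0 < target_fraction <= 1)."""
    return -math.log2(target_fraction)

# If only one configuration in a trillion counts as "a working car",
# reliably producing one corresponds to roughly 40 bits of optimization.
print(f"{optimization_bits(1e-12):.1f} bits")
```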

  14.

    The relationship in question is similar to that described by Amdahl’s (1967) law.
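
    For readers unfamiliar with it, Amdahl's law bounds the overall speedup obtainable when only part of a process can be accelerated. The sketch below states the standard formulation with an invented example; it is not quoted from the chapter.

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when a fraction `accelerated_fraction` of the work is
    sped up by `factor` and the rest is left unchanged (Amdahl 1967)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Even an effectively unlimited speedup of 90% of a process yields at most a
# 10x overall gain, because the untouched 10% becomes the bottleneck.
print(f"{amdahl_speedup(0.90, 1e9):.2f}x")
```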

References

  • Allen, Colin, and Wendell Wallach. 2012. “Moral Machines: Contradiction in Terms or Abdication of Human Responsibility.” In Lin, Abney, and Bekey 2012, 55–68.

  • Allen, Colin, Wendell Wallach, and Iva Smit. 2006. “Why Machine Ethics?” IEEE Intelligent Systems 21 (4): 12–17. doi:10.1109/MIS.2006.83.

  • Amdahl, Gene M. 1967. “Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities.” In Proceedings of the April 18–20, 1967, Spring Joint Computer Conference—AFIPS ’67 (Spring), 483–485. New York: ACM Press. doi:10.1145/1465482.1465560.

  • Anderson, Michael, and Susan Leigh Anderson, eds. 2011. Machine Ethics. New York: Cambridge University Press.

  • Arkin, Ronald C. 2009. Governing Lethal Behavior in Autonomous Robots. Boca Raton, FL: CRC Press.

  • Baum, Seth D., Ben Goertzel, and Ted G. Goertzel. 2011. “How Long Until Human-Level AI? Results from an Expert Assessment.” Technological Forecasting and Social Change 78 (1): 185–195. doi:10.1016/j.techfore.2010.09.006.

  • Bostrom, Nick. 1998. “How Long Before Superintelligence?” International Journal of Futures Studies 2.

  • Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9. http://www.jetpress.org/volume9/risks.html.

  • Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” In “Theory and Philosophy of AI,” edited by Vincent C. Müller. Special issue, Minds and Machines 22 (2): 71–85. doi:10.1007/s11023-012-9281-3.

  • Bostrom, Nick. 2014. Superintelligence: Paths, dangers, strategies. Oxford University Press.

  • Bostrom, Nick, and Milan M. Ćirković. 2008. “Introduction.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 1–30. New York: Oxford University Press.

  • Brynjolfsson, Erik, and Andrew McAfee. 2011. Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington, MA: Digital Frontier. Kindle edition.

  • Bugaj, Stephan Vladimir, and Ben Goertzel. 2007. “Five Ethical Imperatives and Their Implications for Human-AGI Interaction.” Dynamical Psychology. http://goertzel.org/dynapsyc/2007/Five_Ethical_Imperatives_svbedit.htm.

  • Butler, Samuel [Cellarius, pseud.]. 1863. “Darwin Among the Machines.” Christchurch Press, June 13. http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html.

  • CFTC & SEC (Commodity Futures Trading Commission and Securities & Exchange Commission). 2010. Findings Regarding the Market Events of May 6, 2010: Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues. Washington, DC. http://www.sec.gov/news/studies/2010/marketevents-report.pdf.

  • Chalmers, David John. 2010. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17 (9–10): 7–65. http://www.ingentaconnect.com/content/imp/jcs/2010/00000017/f0020009/art00001.

  • Congress, US. 2000. National Defense Authorization, Fiscal Year 2001, Pub. L. No. 106–398, 114 Stat. 1654.

  • Dahm, Werner J. A. 2010. Technology Horizons: A Vision for Air Force Science & Technology During 2010-2030. AF/ST-TR-10-01-PR. Washington, DC: USAF. http://www.au.af.mil/au/awc/awcgate/af/tech_horizons_vol-1_may2010.pdf.

  • Good, Irving John. 1965. “Speculations Concerning the First Ultraintelligent Machine.” In Advances in Computers, edited by Franz L. Alt and Morris Rubinoff, 31–88. Vol. 6. New York: Academic Press. doi:10.1016/S0065-2458(08)60418-0.

  • Hall, John Storrs. 2008. “Engineering Utopia.” In Wang, Goertzel, and Franklin 2008, 460–467.

  • Hanson, Robin. 1998. “Economic Growth Given Machine Intelligence.” Unpublished manuscript. Accessed May 15, 2013. http://hanson.gmu.edu/aigrow.pdf.

  • Hanson, Robin. 2008. “Economics of the Singularity.” IEEE Spectrum 45 (6): 45–50. doi:10.1109/MSPEC.2008.4531461.

  • Hollerbach, John M., Matthew T. Mason, and Henrik I. Christensen. 2009. A Roadmap for US Robotics: From Internet to Robotics. Snowbird, UT: Computing Community Consortium. http://www.usrobotics.us/reports/CCC%20Report.pdf.

  • Joy, Bill. 2000. “Why the Future Doesn’t Need Us.” Wired, April. http://www.wired.com/wired/archive/8.04/joy.html.

  • Legg, Shane, and Marcus Hutter. 2007. “A Collection of Definitions of Intelligence.” In Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms—Proceedings of the AGI Workshop 2006, edited by Ben Goertzel and Pei Wang, 17–24. Frontiers in Artificial Intelligence and Applications 157. Amsterdam: IOS.

  • Loosemore, Richard, and Ben Goertzel. 2012. “Why an Intelligence Explosion is Probable.” In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Miller, James D. 2012. Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World. Dallas, TX: BenBella Books.

  • Moore, David, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas Weaver. 2003. “Inside the Slammer Worm.” IEEE Security & Privacy Magazine 1 (4): 33–39. doi:10.1109/MSECP.2003.1219056.

  • Moore, David, Colleen Shannon, and Jeffery Brown. 2002. “Code-Red: A Case Study on the Spread and Victims of an Internet Worm.” In Proceedings of the Second ACM SIGCOMM Workshop on Internet Measurement (IMW ’02), 273–284. New York: ACM Press. doi:10.1145/637201.637244.

  • Moravec, Hans P. 1998. “When Will Computer Hardware Match the Human Brain?” Journal of Evolution and Technology 1. http://www.transhumanist.com/volume1/moravec.htm.

  • Muehlhauser, Luke, and Louie Helm. 2012. “The Singularity and Machine Ethics.” In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Muehlhauser, Luke, and Anna Salamon. 2012. “Intelligence Explosion: Evidence and Import.” In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Müller, Vincent C., and Nick Bostrom. 2014. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” Fundamental Issues of Artificial Intelligence.

  • Omohundro, Stephen M. 2007. “The Nature of Self-Improving Artificial Intelligence.” Paper presented at Singularity Summit 2007, San Francisco, CA, September 8–9. http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/.

  • Omohundro, Stephen M. 2008. “The Basic AI Drives.” In Wang, Goertzel, and Franklin 2008, 483–492.

  • Omohundro, Stephen M. 2012. “Rational Artificial Intelligence for the Greater Good.” In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Powers, Thomas M. 2011. “Incremental Machine Ethics.” IEEE Robotics & Automation Magazine 18 (1): 51–58. doi:10.1109/MRA.2010.940152.

  • Rajab, Moheeb Abu, Jay Zarfoss, Fabian Monrose, and Andreas Terzis. 2007. “My Botnet is Bigger than Yours (Maybe, Better than Yours): Why Size Estimates Remain Challenging.” In Proceedings of the 1st Workshop on Hot Topics in Understanding Botnets (HotBots ’07). Berkeley, CA: USENIX. http://static.usenix.org/event/hotbots07/tech/full_papers/rajab/rajab.pdf.

  • Sandberg, Anders. 2010. “An Overview of Models of Technological Singularity.” Paper presented at the Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March 8. http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf.

  • Sandberg, Anders, and Nick Bostrom. 2008. Whole Brain Emulation: A Roadmap. Technical Report, 2008-3. Future of Humanity Institute, University of Oxford. http://www.fhi.ox.ac.uk/wpcontent/uploads/brain-emulation-roadmap-report1.pdf.

  • Sandberg, Anders, and Nick Bostrom. 2011. Machine Intelligence Survey. Technical Report, 2011-1. Future of Humanity Institute, University of Oxford. http://www.fhi.ox.ac.uk/reports/2011-1.pdf.

  • Shachtman, Noah. 2007. “Robot Cannon Kills 9, Wounds 14.” Wired, October 18. http://www.wired.com/dangerroom/2007/10/robot-cannon-ki/.

  • Shulman, Carl, and Anders Sandberg. 2010. “Implications of a Software-Limited Singularity.” In Mainzer, Klaus, ed. ECAP10: VIII European Conference on Computing and Philosophy. Munich: Dr. Hut.

  • Shulman, Carl, Henrik Jonsson, and Nick Tarleton. 2009. “Machine Ethics and Superintelligence.” In Reynolds, Carson, and Alvaro Cassinelli, eds. AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, October 1st-2nd, University of Tokyo, Japan, Proceedings, 95–97.

  • Solomonoff, Ray J. 1985. “The Time Scale of Artificial Intelligence: Reflections on Social Effects.” Human Systems Management 5:149–153.

  • Sotala, Kaj, and Roman V. Yampolskiy. 2013. Responses to catastrophic AGI risk: a survey. Technical report 2013-2. Berkeley, CA: Machine Intelligence Research Institute.

  • Sotala, Kaj, and Roman V. Yampolskiy. 2015. Responses to catastrophic AGI risk: a survey. Physica Scripta, 90(1), 018001.

  • Sotala, Kaj. 2012. “Advantages of Artificial Intelligences, Uploads, and Digital Minds.” International Journal of Machine Consciousness 4 (1): 275–291. doi:10.1142/S1793843012400161.

  • Staniford, Stuart, Vern Paxson, and Nicholas Weaver. 2002. “How to 0wn the Internet in Your Spare Time.” In Proceedings of the 11th USENIX Security Symposium, edited by Dan Boneh, 149–167. Berkeley, CA: USENIX. http://www.icir.org/vern/papers/cdc-usenix-sec02/.

  • Top500.org. 2016. Top500 list – June 2016. https://www.top500.org/list/2016/06/.

  • Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. NASA Conference Publication 10129. NASA Lewis Research Center. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855_1994022855.pdf.

  • Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press. doi:10.1093/acprof:oso/9780195374049.001.0001.

  • Wallach, Wendell, Colin Allen, and Iva Smit. 2008. “Machine Morality: Bottom-Up and Top-Down Approaches for Modelling Human Moral Faculties.” In “Ethics and Artificial Agents.” Special issue, AI & Society 22 (4): 565–582. doi:10.1007/s00146-007-0099-0.

  • Whitby, Blay. 1996. Reflections on Artificial Intelligence: The Legal, Moral, and Ethical Dimensions. Exeter, UK: Intellect Books.

  • Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation.” Science 131 (3410): 1355–1358. http://www.jstor.org/stable/1705998.

  • Yampolskiy, Roman V. 2013. “What to Do with the Singularity Paradox?” Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 5, 397–413. Berlin: Springer.

  • Yudkowsky, Eliezer. 1996. “Staring into the Singularity.” Unpublished manuscript. Last revised May 27, 2001. http://yudkowsky.net/obsolete/singularity.html.

  • Yudkowsky, Eliezer. 2001. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute, San Francisco, CA, June 15. http://intelligence.org/files/CFAI.pdf.

  • Yudkowsky, Eliezer. 2008a. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345. New York: Oxford University Press.

  • Yudkowsky, Eliezer. 2008b. “Hard Takeoff.” Less Wrong (blog), December 2. http://lesswrong.com/lw/wf/hard_takeoff/.

  • Yudkowsky, Eliezer. 2009. “Value is Fragile.” Less Wrong (blog), January 29. http://lesswrong.com/lw/y3/value_is_fragile/.

  • Yudkowsky, Eliezer. 2011. Complex Value Systems are Required to Realize Valuable Futures. The Singularity Institute, San Francisco, CA. http://intelligence.org/files/ComplexValues.pdf.

Author information

Correspondence to Roman Yampolskiy.

Copyright information

© 2017 Springer-Verlag GmbH Germany

About this chapter

Cite this chapter

Sotala, K., Yampolskiy, R. (2017). Risks of the Journey to the Singularity. In: Callaghan, V., Miller, J., Yampolskiy, R., Armstrong, S. (eds) The Technological Singularity. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54033-6_2
