Responses to the Journey to the Singularity

The Technological Singularity

Part of the book series: The Frontiers Collection ((FRONTCOLL))

Summary

This chapter surveys responses to the possibility that Artificial General Intelligence (AGI) could pose a catastrophic risk to humanity. The recommendations given for dealing with the problem can be divided into proposals for societal action, external constraints, and internal constraints. Proposals for societal action range from ignoring the issue entirely, to enacting regulation, to banning AGI outright. Proposals for external constraints involve different ways of constraining and limiting the power of AGIs from the outside. Finally, proposals for internal constraints involve building AGIs in specific ways so as to make them safe. Many proposals suffer from serious problems or seem to be of limited effectiveness, while others seem promising enough to be worth exploring. We conclude by reviewing the proposals we feel are worthy of further study. In the short term, these are regulation, merging with machines, AGI confinement, and AGI designs that make the systems easier to control from the outside. In the long term, the most promising proposals are value learning and building AGI systems to be human-like.

Notes

  1. The opposite argument is that superior intelligence will inevitably lead to more moral behavior. Some of the arguments related to this position are discussed in the context of evolutionary invariants (Sect. 3.5.3.1), although the authors advocating the use of evolutionary invariants do believe AGI risk to be worth our concern.

  2. Armstrong and Sotala (2012) point out that many of the task properties which have been found to be conducive to developing reliable and useful expertise are missing in AGI timeline forecasting. In particular, one of the most important factors is whether experts get rapid (preferably immediate) feedback, while a timeline prediction set many decades in the future might be entirely forgotten by the time its correctness could be evaluated.

  3. For proposals which suggest that humans could use technology to remain competitive with AGIs and thus prevent them from acquiring excessive amounts of power, see Sect. 3.4.

  4. An added benefit would be that this could also help avoid other kinds of existential risk, such as the intentional creation of dangerous new diseases.

  5. However, this might not be true for AGIs created through alternative means, such as artificial life (Sullins 2005).

  6. Berglas (personal communication) has since changed his mind and no longer believes that it is possible to effectively restrict hardware or otherwise prevent AGI from being created.

  7. For a definition of “bottom-up” approaches, see Sect. 5.3.

  8. Note that utilitarianism is not the same thing as having a utility function. Utilitarianism is a specific kind of ethical system, while utility functions are general-purpose mechanisms for choosing between actions and can in principle be used to implement very different kinds of ethical systems, such as egoism and possibly even rights-based theories and virtue ethics (Peterson 2010). A minimal code sketch of this distinction appears after these notes.

  9. But it should be noted that there are also promising nonconnectionist approaches for modeling human classification behavior—see, e.g., Tenenbaum et al. (2006, 2011).

  10. On the other hand, this might incentivize the AGI to deceive its controllers into believing it was behaving properly, and also to actively hide any information which it even suspected might be interpreted as misbehavior.
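
To make the distinction in note 8 concrete, here is a minimal sketch of utility-maximizing action selection. It is our illustration rather than anything from the surveyed proposals; the function names and the toy outcome model are invented for the example.

```python
def utilitarian_utility(outcome):
    # Utilitarianism: value an outcome by the total welfare of everyone affected.
    return sum(outcome["welfare"].values())

def egoist_utility(outcome):
    # Egoism: value an outcome only by the agent's own welfare.
    return outcome["welfare"]["self"]

def choose_action(actions, predict_outcome, utility):
    # Generic decision machinery: pick the action whose predicted outcome
    # the supplied utility function ranks highest.
    return max(actions, key=lambda action: utility(predict_outcome(action)))

# Toy outcome model (invented): predicted welfare effects of each action.
outcomes = {
    "share": {"welfare": {"self": 1, "other": 3}},
    "hoard": {"welfare": {"self": 2, "other": 0}},
}

print(choose_action(outcomes, outcomes.get, utilitarian_utility))  # -> share
print(choose_action(outcomes, outcomes.get, egoist_utility))       # -> hoard
```

Swapping in a different utility function changes which action is chosen without touching the decision machinery; that is the sense in which utility functions are ethics-neutral, while utilitarianism is only one possible content for them.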

References

  • Agliata, Daniel, and Stacey Tantleff-Dunn. 2004. “The Impact of Media Exposure on Males’ Body Image”. Journal of Social and Clinical Psychology 23(1): 7–22. doi:10.1521/jscp.23.1.7.26988.

  • Alexander, Scott. 2015. “AI researchers on AI risk”. Slate Star Codex [blog]. http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/.

  • Anderson, Monica. 2010. “Problem Solved: Unfriendly AI”. H + Magazine, December 15. http://hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai/.  

  • Anderson, Michael, Susan Leigh Anderson, and Chris Armen, eds. 2005a. Machine Ethics: Papers from the 2005 AAAI Fall Symposium. Technical Report, FS-05-06. AAAI Press, Menlo Park, CA. http://www.aaai.org/Library/Symposia/Fall/fs05-06.

  • Anderson, Michael, Susan Leigh Anderson, and Chris Armen. 2005b. “MedEthEx: Toward a Medical Ethics Advisor.” In Caring Machines: AI in Eldercare: Papers from the 2005 AAAI Fall Symposium, edited by Timothy Bickmore, 9–16. Technical Report, FS-05-02. AAAI Press, Menlo Park, CA. http://aaaipress.org/Papers/Symposia/Fall/2005/FS-05-02/FS05-02-002.pdf.

  • Anderson, Michael, Susan Leigh Anderson, and Chris Armen. 2006. “An Approach to Computing Ethics.” IEEE Intelligent Systems 21(4): 56–63. doi:10.1109/MIS.2006.64.

  • Anderson, Susan Leigh. 2011. “The Unacceptability of Asimov’s Three Laws of Robotics as a Basis for Machine Ethics”. In Anderson and Anderson 2011, 285–296.

  • Annas, George J., Lori B. Andrews, and Rosario M. Isasi. 2002. “Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations”. American Journal of Law & Medicine 28(2–3): 151–178.

  • Anthony, Dick, and Thomas Robbins. 2004. “Conversion and ‘Brainwashing’ in New Religious Movements”. In The Oxford Handbook of New Religious Movements, 1st ed., edited by James R. Lewis, 243–297. New York: Oxford University Press. doi:10.1093/oxfordhb/9780195369649.003.0012.

  • Armstrong, Stuart. 2007. “Chaining God: A Qualitative Approach to AI, Trust and Moral Systems”. Unpublished manuscript, October 20. Accessed December 31, 2012. http://www.neweuropeancentury.org/GodAI.pdf.

  • Armstrong, Stuart. 2010. Utility Indifference. Technical Report, 2010-1. Oxford: Future of Humanity Institute, University of Oxford. http://www.fhi.ox.ac.uk/reports/2010-1.pdf.

  • Armstrong, Stuart, Anders Sandberg, and Nick Bostrom. 2012. “Thinking Inside the Box: Controlling and Using an Oracle AI”. Minds and Machines 22(4): 299–324. doi:10.1007/s11023-012-9282-2.

  • Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI — or Failing To”. In Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster, 52–75. Pilsen: University of West Bohemia. Accessed February 2, 2013. http://www.kky.zcu.cz/en/publications/1/JanRomportl_2012_BeyondAIArtificial.pdf.

  • Asimov, Isaac. 1942. “Runaround”. Astounding Science-Fiction, March, 94–103.

  • Axelrod, Robert. 1987. “The Evolution of Strategies in the Iterated Prisoner’s Dilemma”. In Genetic Algorithms and Simulated Annealing, edited by Lawrence Davis, 32–41. Los Altos, CA: Morgan Kaufmann.

  • Baars, Bernard J. 2002. “The Conscious Access Hypothesis: Origins and Recent Evidence”. Trends in Cognitive Sciences 6(1): 47–52. doi:10.1016/S1364-6613(00)01819-2.

  • Baars, Bernard J. 2005. “Global Workspace Theory of Consciousness: Toward a Cognitive Neuroscience of Human Experience”. In The Boundaries of Consciousness: Neurobiology and Neuropathology, edited by Steven Laureys, 45–53. Progress in Brain Research 150. Boston: Elsevier.

  • Beavers, Anthony F. 2009. “Between Angels and Animals: The Question of Robot Ethics; or, Is Kantian Moral Agency Desirable?” Paper presented at the Annual Meeting of the Association for Practical and Professional Ethics, Cincinnati, OH, March.

  • Beavers, Anthony F. 2012. “Moral Machines and the Threat of Ethical Nihilism”. In Lin, Patrick, Keith Abney, and George A. Bekey, eds. Robot Ethics: The Ethical and Social Implications of Robotics. Intelligent Robotics and Autonomous Agents. Cambridge, MA: MIT Press, 333–344.

  • Benatar, David. 2006. Better Never to Have Been: The Harm of Coming into Existence. New York: Oxford University Press.

  • Berglas, Anthony. 2012. “Artificial Intelligence Will Kill Our Grandchildren (Singularity)”. Unpublished manuscript, draft 9, January. Accessed December 31, 2012. http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html.

  • Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9. http://www.jetpress.org/volume9/risks.html.

  • Bostrom, Nick. 2004. “The Future of Human Evolution”. In Two Hundred Years After Kant, Fifty Years After Turing, edited by Charles Tandy, 339–371. Vol. 2. Death and Anti-Death. Palo Alto, CA: Ria University Press.

  • Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”. In “Theory and Philosophy of AI,” edited by Vincent C. Müller. Special issue, Minds and Machines 22(2): 71–85. doi:10.1007/s11023-012-9281-3.

  • Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

  • Bostrom, Nick, and Eliezer Yudkowsky. 2013. “The Ethics of Artificial Intelligence”. In Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William Ramsey. New York: Cambridge University Press.

  • Branwen, Gwern. 2012. “Slowing Moore’s Law: Why You Might Want to and How You Would Do It”. gwern.net. December 11. Accessed December 31, 2012. http://www.gwern.net/Slowing%20Moore’s%20Law.

  • Brin, David. 1998. The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? Reading, MA: Perseus Books.

  • Bringsjord, Selmer, and Alexander Bringsjord. 2012. “Belief in the Singularity is Fideistic”. In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Brooks, Rodney A. 2008. “I, Rodney Brooks, Am a Robot”. IEEE Spectrum 45(6): 68–71. doi:10.1109/MSPEC.2008.4531466.

  • Brynjolfsson, Erik, and Andrew McAfee. 2011. Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington, MA: Digital Frontier. Kindle edition.

  • Bryson, Joanna, and Phil Kime. 1998. “Just Another Artifact: Ethics and the Empirical Experience of AI”. Paper presented at the Fifteenth International Congress on Cybernetics, Namur, Belgium. http://www.cs.bath.ac.uk/~jjb/web/aiethics98.html.

  • Butler, Samuel [Cellarius, pseud.]. 1863. “Darwin Among the Machines”. Christchurch Press, June 13. http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html.

  • Cade, C. Maxwell. 1966. Other Worlds Than Ours. 1st ed. London: Museum.

  • Cattell, Rick, and Alice Parker. 2012. Challenges for Brain Emulation: Why is Building a Brain so Difficult? Synaptic Link, February 5. http://synapticlink.org/Brain%20Emulation%20Challenges.pdf.

  • Chalmers, David John. 2010. “The Singularity: A Philosophical Analysis”. Journal of Consciousness Studies 17 (9–10): 7–65. http://www.ingentaconnect.com/content/imp/jcs/2010/00000017/f0020009/art00001.

  • Christiano, Paul F. 2012. “‘Indirect Normativity’ Write-up”. Ordinary Ideas (blog), April 21. http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/.

  • Christiano, Paul F. 2014a. “Approval-Directed Agents”. December 1. https://medium.com/ai-control/model-free-decisions-6e6609f5d99e.

  • Christiano, Paul F. 2014b. “Approval-Directed Search”. December 14. https://medium.com/@paulfchristiano/approval-directed-search-63457096f9e4.

  • Christiano, Paul F. 2014c. “Approval-Directed Bootstrapping”. December 20. https://medium.com/ai-control/approval-directed-bootstrapping-5d49e886c14f.

  • Christiano, Paul F. 2015. “Learn Policies or Goals?” April 21. https://medium.com/ai-control/learn-policies-or-goals-348add76b8eb.

  • Clark, Gregory. 2007. A Farewell to Alms: A Brief Economic History of the World. 1st ed. Princeton, NJ: Princeton University Press.

  • Clarke, Roger. 1993. “Asimov’s Laws of Robotics: Implications for Information Technology, Part 1”. Computer 26(12): 53–61. doi:10.1109/2.247652.

  • Clarke, Roger. 1994. “Asimov’s Laws of Robotics: Implications for Information Technology, Part 2”. Computer 27 (1): 57–66. doi:10.1109/2.248881.

  • Daley, William. 2011. “Mitigating Potential Hazards to Humans from the Development of Intelligent Machines”. Synesis 2: 44–50. http://www.synesisjournal.com/vol2_g/2011_2_44-50_Daley.pdf.

  • Davis, Ernest. 2012. “The Singularity and the State of the Art in Artificial Intelligence”. Working Paper, New York, May 9. Accessed July 22, 2013. http://www.cs.nyu.edu/~davise/papers/singularity.pdf.

  • Dayan, Peter. 2011. “Models of Value and Choice”. In Neuroscience of Preference and Choice: Cognitive and Neural Mechanisms, edited by Raymond J. Dolan and Tali Sharot, 33–52. Waltham, MA: Academic Press.

  • De Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. Palm Springs, CA: ETC Publications.

  • Degabriele, Jean Paul, Kenny Paterson, and Gaven Watson. 2011. “Provable Security in the Real World”. IEEE Security & Privacy Magazine 9(3): 33–41. doi:10.1109/MSP.2010.200.

  • Dennett, Daniel C. 1987. “Cognitive Wheels: The Frame Problem of AI”. In Pylyshyn 1987, 41–64.

  • Dennett, Daniel C. 2012. “The Mystery of David Chalmers”. Journal of Consciousness Studies 19(1–2): 86–95. http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00005.

  • Deutsch, David. 2011. The Beginning of Infinity: Explanations that Transform the World. 1st ed. New York: Viking.

  • Dewey, Daniel. 2011. “Learning What to Value”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 309–314.

  • Dietrich, Eric. 2007. “After The Humans Are Gone”. Philosophy Now, May–June. http://philosophynow.org/issues/61/After_The_Humans_Are_Gone.

  • Docherty, Bonnie, and Steve Goose. 2012. Losing Humanity: The Case Against Killer Robots. Cambridge, MA: Human Rights Watch and the International Human Rights Clinic, November 19. http://www.hrw.org/sites/default/files/reports/arms1112ForUpload_0_0.pdf.

  • Douglas, Thomas. 2008. “Moral Enhancement”. Journal of Applied Philosophy 25(3): 228–245. doi:10.1111/j.1468-5930.2008.00412.x.

  • Eckersley, Peter, and Anders Sandberg. 2013. “Is Brain Emulation Dangerous?” Journal of Artificial General Intelligence 4(3): 170–194.

  • Fox, Joshua, and Carl Shulman. 2010. “Superintelligence Does Not Imply Benevolence”. In Mainzer, Klaus, ed. 2010. ECAP10: VIII European Conference on Computing and Philosophy. Munich: Dr. Hut.

  • Frankfurt, Harry G. 1971. “Freedom of the Will and the Concept of a Person”. Journal of Philosophy 68 (1): 5–20. doi:10.2307/2024717.

  • Franklin, Stan, and F. G. Patterson Jr. 2006. “The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent”. In IDPT-2006 Proceedings. San Diego, CA: Society for Design & Process Science. http://ccrg.cs.memphis.edu/assets/papers/zo-1010-lida-060403.pdf.

  • Freeman, Tim. 2009. “Using Compassion and Respect to Motivate an Artificial Intelligence”. Unpublished manuscript, March 8. Accessed December 31, 2012. http://fungible.com/respect/paper.html.

  • Friedman, Batya, and Peter H. Kahn. 1992. “Human Agency and Responsible Computing: Implications for Computer System Design”. Journal of Systems and Software 17 (1): 7–14. doi:10.1016/0164-1212(92)90075-U.

  • Future of Life Institute. 2015. Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter. http://futureoflife.org/misc/open_letter.

  • Gewirth, Alan. 1978. Reason and Morality. Chicago: University of Chicago Press.

  • Goertzel, Ben. 2004a. “Encouraging a Positive Transcension: Issues in Transhumanist Ethical Philosophy”. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2004/PositiveTranscension.htm.

  • Goertzel, Ben. 2004b. “Growth, Choice and Joy: Toward a Precise Definition of a Universal Ethical Principle”. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2004/GrowthChoiceJoy.htm.

  • Goertzel, Ben. 2010a. “Coherent Aggregated Volition: A Method for Deriving Goal System Content for Advanced, Beneficial AGIs”. The Multiverse According to Ben (blog), March 12. http://multiverseaccordingtoben.blogspot.ca/2010/03/coherent-aggregated-volitiontoward.html.

  • Goertzel, Ben. 2010b. “GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement”. Unpublished manuscript, May 2. Accessed December 31, 2012. http://goertzel.org/GOLEM.pdf.

  • Goertzel, Ben. 2012a. “CogPrime: An Integrative Architecture for Embodied Artificial General Intelligence”. OpenCog Foundation. October 2. Accessed December 31, 2012. http://wiki.opencog.org/w/CogPrime_Overview.

  • Goertzel, Ben. 2012b. “Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?” Journal of Consciousness Studies 19(1–2): 96–111. http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00006.

  • Goertzel, Ben, and Stephan Vladimir Bugaj. 2008. “Stages of Ethical Development in Artificial General Intelligence Systems”. In Wang, Pei, Ben Goertzel, and Stan Franklin, eds. Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS, 448–459.

  • Goertzel, Ben, and Joel Pitt. 2012. “Nine Ways to Bias Open-Source AGI Toward Friendliness”. Journal of Evolution and Technology 22(1): 116–131. http://jetpress.org/v22/goertzel-pitt.htm.

  • Gomes, Lee. 2015. Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter. IEEE Spectrum. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning#qaTopicEight.

  • Good, Irving John. 1970. “Some Future Social Repercussions of Computers”. International Journal of Environmental Studies 1(1–4): 67–79. doi:10.1080/00207237008709398.

  • Gordon-Spears, Diana F. 2003. “Asimov’s Laws: Current Progress”. In Formal Approaches to Agent-Based Systems: Second International Workshop, FAABS 2002, Greenbelt, MD, USA, October 29–31, 2002. Revised Papers, edited by Michael G. Hinchey, James L. Rash, Walter F. Truszkowski, Christopher Rouff, and Diana F. Gordon-Spears, 257–259. Lecture Notes in Computer Science 2699. Berlin: Springer. doi:10.1007/978-3-540-45133-4_23.

  • Groesz, Lisa M., Michael P. Levine, and Sarah K. Murnen. 2001. “The Effect of Experimental Presentation of Thin Media Images on Body Satisfaction: A Meta-Analytic Review”. International Journal of Eating Disorders 31(1): 1–16. doi:10.1002/eat.10005.

  • Guarini, Marcello. 2006. “Particularism and the Classification and Reclassification of Moral Cases”. IEEE Intelligent Systems 21 (4): 22–28. doi:10.1109/MIS.2006.76.

  • Gubrud, Mark Avrum. 1997. “Nanotechnology and International Security”. Paper presented at the Fifth Foresight Conference on Molecular Nanotechnology, Palo Alto, CA, November 5–8. http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/.

  • Gunkel, David J. 2012. The Machine Question: Critical Perspectives on AI, Robotics, and Ethics. Cambridge, MA: MIT Press.

  • Haidt, Jonathan. 2006. The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom. 1st ed. New York: Basic Books.

  • Hall, John Storrs. 2007a. Beyond AI: Creating the Conscience of the Machine. Amherst, NY: Prometheus Books.

  • Hall, John Storrs. 2011. “Ethics for Self-Improving Machines”. In Anderson and Anderson 2011, 512–523.

  • Hanson, Robin. 1994. “If Uploads Come First: The Crack of a Future Dawn”. Extropy 6(2). http://hanson.gmu.edu/uploads.html.

  • Hanson, Robin. 2000. “Shall We Vote on Values, But Bet on Beliefs?” Unpublished manuscript, September. Last revised October 2007. http://hanson.gmu.edu/futarchy.pdf.

  • Hanson, Robin. 2008. “Economics of the Singularity”. IEEE Spectrum 45 (6): 45–50. doi:10.1109/MSPEC.2008.4531461.

  • Hanson, Robin. 2009. “Prefer Law to Values”. Overcoming Bias (blog), October 10. http://www.overcomingbias.com/2009/10/prefer-law-to-values.html.

  • Hanson, Robin. 2012. “Meet the New Conflict, Same as the Old Conflict”. Journal of Consciousness Studies 19(1–2): 119–125. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00008.

  • Hare, Robert D., Danny Clark, Martin Grann, and David Thornton. 2000. “Psychopathy and the Predictive Validity of the PCL-R: An International Perspective”. Behavioral Sciences & the Law 18(5): 623–645. doi:10.1002/1099-0798(200010)18:5<623::AID-BSL409>3.0.CO;2-W.

  • Harris, Grant T., and Marnie E. Rice. 2006. “Treatment of Psychopathy: A Review of Empirical Findings”. In Handbook of Psychopathy, edited by Christopher J. Patrick, 555–572. New York: Guilford.

  • Hart, David, and Ben Goertzel. 2008. “OpenCog: A Software Framework for Integrative Artificial General Intelligence”. Unpublished manuscript. http://www.agiri.org/OpenCog_AGI-08.pdf.

  • Hayworth, Kenneth J. 2012. “Electron Imaging Technology for Whole Brain Neural Circuit Mapping”. International Journal of Machine Consciousness 4(1): 87–108. doi:10.1142/S1793843012500060.

  • Heylighen, Francis. 2007. “Accelerating Socio-Technological Evolution: From Ephemeralization and Stigmergy to the Global Brain”. In Globalization as Evolutionary Process: Modeling Global Change, edited by George Modelski, Tessaleno Devezas, and William R. Thompson, 284–309. Rethinking Globalizations 10. New York: Routledge.

  • Heylighen, Francis. 2012. “Brain in a Vat Cannot Break Out.” Journal of Consciousness Studies 19 (1–2): 126–142. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00009.

  • Hibbard, Bill. 2001. “Super-Intelligent Machines”. ACM SIGGRAPH Computer Graphics 35 (1): 13–15. http://www.siggraph.org/publications/newsletter/issues/v35/v35n1.pdf.

  • Hibbard, Bill. 2005a. “Critique of the SIAI Collective Volition Theory”. Unpublished manuscript, December. Accessed December 31, 2012. http://www.ssec.wisc.edu/~billh/g/SIAI_CV_critique.html.

  • Hibbard, Bill. 2005b. “The Ethics and Politics of Super-Intelligent Machines”. Unpublished manuscript, July. Microsoft Word file, accessed December 31, 2012. https://sites.google.com/site/whibbard/g/SI_ethics_politics.doc.

  • Hibbard, Bill. 2008. “Open Source AI.” In Wang, Pei, Ben Goertzel, and Stan Franklin, eds. Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS, 473–477.

  • Hibbard, Bill. 2012a. “Avoiding Unintended AI Behaviors”. In Bach, Joscha, Ben Goertzel, and Matthew Iklé, eds. Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8–11, 2012. Proceedings. Lecture Notes in Artificial Intelligence 7716. New York: Springer, 107–116. doi:10.1007/978-3-642-35506-6.

  • Hibbard, Bill. 2012b. “Decision Support for Safe AI Design”. In Bach, Joscha, Ben Goertzel, and Matthew Iklé, eds. Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8–11, 2012. Proceedings. Lecture Notes in Artificial Intelligence 7716. New York: Springer, 117–125. doi:10.1007/978-3-642-35506-6.

  • Hibbard, Bill. 2012c. “Model-Based Utility Functions”. Journal of Artificial General Intelligence 3(1): 1–24. doi:10.2478/v10229-011-0013-5.

  • Hibbard, Bill. 2012d. The Error in My 2001 VisFiles Column, September. Accessed December 31, 2012. http://www.ssec.wisc.edu/~billh/g/visfiles_error.html.

  • Horvitz, Eric J., and Bart Selman. 2009. Interim Report from the AAAI Presidential Panel on Long-Term AI Futures. Palo Alto, CA: AAAI, August. http://www.aaai.org/Organization/Panel/panelnote.pdf.

  • Hughes, James. 2001. “Relinquishment or Regulation: Dealing with Apocalyptic Technological Threats”. Hartford, CT, November 14.

  • IEEE Spectrum. 2008. “Tech Luminaries Address Singularity”. In “The Singularity: Special Report”, June.

  • Jenkins, Anne. 2003. “Artificial Intelligence and the Real World”. Futures 35 (7): 779–786. doi:10.1016/S0016-3287(03)00029-6.

  • Joy, Bill. 2000. “Why the Future Doesn’t Need Us”. Wired, April. http://www.wired.com/wired/archive/8.04/joy.html.

  • Karnofsky, Holden. 2012. “Thoughts on the Singularity Institute (SI)”. Less Wrong (blog), May 11. http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/.

  • Karnofsky, Holden, and Jaan Tallinn. 2011. “Karnofsky & Tallinn Dialog on SIAI Efficacy”. Accessed December 31, 2012. http://xa.yimg.com/kq/groups/23070378/1331435883/name/Jaan+Tallinn+2011+05+-+revised.doc.

  • Kipnis, David. 1972. “Does Power Corrupt?”. Journal of Personality and Social Psychology 24(1): 33–41. doi:10.1037/h0033390.

  • Koene, Randal A. 2012a. “Embracing Competitive Balance: The Case for Substrate-Independent Minds and Whole Brain Emulation”. In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Koene, Randal A. 2012b. “Experimental Research in Whole Brain Emulation: The Need for Innovative in Vivo Measurement Techniques”. International Journal of Machine Consciousness 4(1): 35–65. doi:10.1142/S1793843012400033.

  • Kornai, András. 2014. “Bounding the Impact of AGI”. Journal of Experimental & Theoretical Artificial Intelligence 26(3): 417–438.

  • Kurzweil, Ray. 2001. “Response to Stephen Hawking”. Kurzweil Accelerating Intelligence. September 5. Accessed December 31, 2012. http://www.kurzweilai.net/response-to-stephen-hawking.

  • Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

  • Lampson, Butler W. 1973. “A Note on the Confinement Problem”. Communications of the ACM 16(10): 613–615. doi:10.1145/362375.362389.

  • Legg, Shane. 2009. “Funding Safe AGI”. Vetta Project (blog), August 3. http://www.vetta.org/2009/08/funding-safe-agi/.

  • Madrigal, Alexis C. 2015. The case against killer robots, from a guy actually working on artificial intelligence. http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/.

  • Mann, Steve, Jason Nolan, and Barry Wellman. 2003. “Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments”. Surveillance & Society 1(3): 331–355. http://library.queensu.ca/ojs/index.php/surveillance-and-society/article/view/3344.

  • McCauley, Lee. 2007. “AI Armageddon and the Three Laws of Robotics”. Ethics and Information Technology 9(2): 153–164. doi:10.1007/s10676-007-9138-2.

  • McCulloch, W. S. 1956. “Toward Some Circuitry of Ethical Robots; or, An Observational Science of the Genesis of Social Evaluation in the Mind-like Behavior of Artifacts”. Acta Biotheoretica 11(3–4): 147–156. doi:10.1007/BF01557008.

  • McDermott, Drew. 2012. “Response to ‘The Singularity’ by David Chalmers”. Journal of Consciousness Studies 19(1–2): 167–172. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00011.

  • McGinnis, John O. 2010. “Accelerating AI”. Northwestern University Law Review 104 (3): 1253–1270. http://www.law.northwestern.edu/lawreview/v104/n3/1253/LR104n3McGinnis.pdf.

  • McKibben, Bill. 2003. Enough: Staying Human in an Engineered Age. New York: Henry Holt.

  • McLeod, Peter, Kim Plunkett, and Edmund T. Rolls. 1998. Introduction to Connectionist Modelling of Cognitive Processes. New York: Oxford University Press.

  • Miller, James D. 2012. Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World. Dallas, TX: BenBella Books.

  • Moore, David, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas Weaver. 2003. “Inside the Slammer Worm”. IEEE Security & Privacy Magazine 1(4): 33–39. doi:10.1109/MSECP.2003.1219056.

  • Moore, David, Colleen Shannon, and Jeffery Brown. 2002. “Code-Red: A Case Study on the Spread and Victims of an Internet Worm”. In Proceedings of the Second ACM SIGCOMM Workshop on Internet Measurement (IMW’02), 273–284. New York: ACM Press. doi:10.1145/637201.637244.

  • Moravec, Hans P. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press.

  • Moravec, Hans P. 1992. “Pigs in Cyberspace”. Field Robotics Center. Accessed December 31, 2012. http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1992/CyberPigs.html.

  • Moravec, Hans P. 1999. Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.

  • Muehlhauser, Luke, and Louie Helm. 2012. “The Singularity and Machine Ethics”. In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Muehlhauser, Luke, and Anna Salamon. 2012. “Intelligence Explosion: Evidence and Import”. In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Mueller, Dennis C. 2003. Public Choice III. 3rd ed. New York: Cambridge University Press.

  • Müller, Vincent C., and Nick Bostrom. 2014. Future progress in artificial intelligence: A survey of expert opinion. Fundamental Issues of Artificial Intelligence.

  • Murphy, Robin, and David D. Woods. 2009. “Beyond Asimov: The Three Laws of Responsible Robotics”. IEEE Intelligent Systems 24(4): 14–20. doi:10.1109/MIS.2009.69.

  • Napier, William. 2008. “Hazards from Comets and Asteroids”. In Bostrom, Nick, and Milan M. Ćirković, eds. Global Catastrophic Risks. New York: Oxford University Press, 222–237.

  • Ng, Andrew Y., and Stuart J. Russell. 2000. “Algorithms for Inverse Reinforcement Learning”. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), 663–670.

  • Nielsen, Thomas D., and Finn V. Jensen. 2004. “Learning a Decision Maker’s Utility Function from (Possibly) Inconsistent Behavior”. Artificial Intelligence 160(1–2): 53–78. doi:10.1016/j.artint.2004.08.003.

  • Nordmann, Alfred. 2007. “If and Then: A Critique of Speculative NanoEthics”. NanoEthics 1(1): 31–46. doi:10.1007/s11569-007-0007-6.

  • Nordmann, Alfred. 2008. “Singular Simplicity”. IEEE Spectrum, June. http://spectrum.ieee.org/robotics/robotics-software/singular-simplicity.

  • Olson, Mancur. 1982. The Rise and Decline of Nations: Economic Growth, Stagflation, and Social Rigidities. New Haven, CT: Yale University Press.

  • Omohundro, Stephen M. 2007. “The Nature of Self-Improving Artificial Intelligence”. Paper presented at Singularity Summit 2007, San Francisco, CA, September 8–9. http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/.

  • Omohundro, Stephen M. 2008. “The Basic AI Drives”. In Wang, Pei, Ben Goertzel, and Stan Franklin, eds. Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS, 483–492.

  • Omohundro, Stephen M. 2012. “Rational Artificial Intelligence for the Greater Good”. In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Orseau, Laurent, and Mark Ring. 2011. “Self-Modification and Mortality in Artificial Agents”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 1–10.

  • Persson, Ingmar, and Julian Savulescu. 2008. “The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity”. Journal of Applied Philosophy 25(3): 162–177. doi:10.1111/j.1468-5930.2008.00410.x.

  • Persson, Ingmar, and Julian Savulescu. 2012. Unfit for the Future. Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199653645.001.0001.

  • Peterson, Nathaniel R., David B. Pisoni, and Richard T. Miyamoto. 2010. “Cochlear Implants and Spoken Language Processing Abilities: Review and Assessment of the Literature”. Restorative Neurology and Neuroscience 28(2): 237–250. doi:10.3233/RNN-2010-0535.

  • Plaut, David C. 2003. “Connectionist Modeling of Language: Examples and Implications”. In Mind, Brain, and Language: Multidisciplinary Perspectives, edited by Marie T. Banich and Molly Mack, 143–168. Mahwah, NJ: Lawrence Erlbaum.

  • Posner, Richard A. 2004. Catastrophe: Risk and Response. New York: Oxford University Press.

  • Potapov, Alexey, and Sergey Rodionov. 2012. “Universal Empathy and Ethical Bias for Artificial General Intelligence”. Paper presented at the Fifth Conference on Artificial General Intelligence (AGI-12), Oxford, December 8–11. Accessed June 27, 2013. http://aideus.com/research/doc/preprints/04_paper4_AGIImpacts12.pdf.

  • Powers, Thomas M. 2006. “Prospects for a Kantian Machine”. IEEE Intelligent Systems 21(4): 46–51. doi:10.1109/MIS.2006.77.

  • Pylyshyn, Zenon W., ed. 1987. The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex.

  • Pynadath, David V., and Milind Tambe. 2002. “Revisiting Asimov’s First Law: A Response to the Call to Arms”. In Intelligent Agents VIII: Agent Theories, Architectures, and Languages 8th International Workshop, ATAL 2001 Seattle, WA, USA, August 1–3, 2001 Revised Papers, edited by John-Jules Ch. Meyer and Milind Tambe, 307–320. Berlin: Springer. doi:10.1007/3-540-45448-9_22.

  • Ramamurthy, Uma, Bernard J. Baars, Sidney K. D’Mello, and Stan Franklin. 2006. “LIDA: A Working Model of Cognition”. In Proceedings of the Seventh International Conference on Cognitive Modeling, edited by Danilo Fum, Fabio Del Missier, and Andrea Stocco, 244–249. Trieste, Italy: Edizioni Goliardiche. http://ccrg.cs.memphis.edu/assets/papers/ICCM06-UR.pdf.

  • Ring, Mark, and Laurent Orseau. 2011. “Delusion, Survival, and Intelligent Agents”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 11–20.

  • Russell, Stuart J. 2015. Will They Make Us Better People? Edge.org. http://edge.org/response-detail/26157.

  • Russell, Stuart J., Daniel Dewey, and Max Tegmark. 2015. Research Priorities for Robust and Beneficial Artificial Intelligence. http://futureoflife.org/static/data/documents/research_priorities.pdf.

  • Sandberg, Anders. 2001. “Friendly Superintelligence”. Accessed December 31, 2012. http://www.aleph.se/Nada/Extro5/Friendly%20Superintelligence.htm.

  • Sandberg, Anders. 2012. “Models of a Singularity”. In Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.

  • Sandberg, Anders, and Nick Bostrom. 2008. Whole Brain Emulation: A Roadmap. Technical Report, 2008-3. Future of Humanity Institute, University of Oxford. http://www.fhi.ox.ac.uk/wpcontent/uploads/brain-emulation-roadmap-report1.pdf.

  • Schmidhuber, Jürgen. 2009. “Ultimate Cognition à la Gödel”. Cognitive Computation 1(2): 177–193. doi:10.1007/s12559-009-9014-y.

  • Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, CT: Yale University Press.

  • Shanahan, Murray. 2015. The Technological Singularity. MIT Press (forthcoming).

  • Shulman, Carl. 2009. “Arms Control and Intelligence Explosions”. Paper presented at the 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July 2–4.

  • Shulman, Carl. 2010a. Omohundro’s “Basic AI Drives” and Catastrophic Risks. The Singularity Institute, San Francisco, CA. http://intelligence.org/files/BasicAIDrives.pdf.

  • Shulman, Carl. 2010b. Whole Brain Emulation and the Evolution of Superorganisms. The Singularity Institute, San Francisco, CA. http://intelligence.org/files/WBE-Superorgs.pdf.

  • Snaider, Javier, Ryan Mccall, and Stan Franklin. 2011. “The LIDA Framework as a General Tool for AGI”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 133–142.

  • Soares, Nate, and Benja Fallenstein. 2014. Aligning Superintelligence with Human Interests: A Technical Research Agenda. Technical Report. Machine Intelligence Research Institute. http://intelligence.org/files/TechnicalAgenda.pdf.

  • Sobolewski, Matthias. 2012. “German Cabinet to Agree Tougher Rules on High-Frequency Trading”. Reuters, September 25. Accessed December 31, 2012. http://in.reuters.com/article/2012/09/25/germany-bourse-rules-idINL5E8KP8BK20120925.

  • Sotala, Kaj. 2012. “Advantages of Artificial Intelligences, Uploads, and Digital Minds”. International Journal of Machine Consciousness 4(1): 275–291. doi:10.1142/S1793843012400161.

  • Sotala, Kaj. 2015. Concept learning for safe autonomous AI. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence.

  • Sotala, Kaj, and Harri Valpola. 2012. “Coalescing Minds: Brain Uploading-Related Group Mind Scenarios”. International Journal of Machine Consciousness 4(1): 293–312. doi:10.1142/S1793843012400173.

  • Sotala, Kaj, and Roman V. Yampolskiy. 2013. Responses to catastrophic AGI risk: a survey. Technical report 2013-2. Berkeley, CA: Machine Intelligence Research Institute.

  • Sotala, Kaj, and Roman V. Yampolskiy. 2015. “Responses to Catastrophic AGI Risk: A Survey”. Physica Scripta 90(1): 018001.

  • Spears, Diana F. 2006. “Assuring the Behavior of Adaptive Agents”. In Agent Technology from a Formal Perspective, edited by Christopher Rouff, Michael Hinchey, James Rash, Walter Truszkowski, and Diana F. Gordon-Spears, 227–257. NASA Monographs in Systems and Software Engineering. London: Springer. doi:10.1007/1-84628-271-3_8.

  • Stahl, Bernd Carsten. 2002. “Can a Computer Adhere to the Categorical Imperative? A Contemplation of the Limits of Transcendental Ethics in IT”. Edited by Iva Smit and George E. Lasker, vol. 1, 13–18. Windsor, ON: International Institute for Advanced Studies in Systems Research/Cybernetics.

  • Staniford, Stuart, Vern Paxson, and Nicholas Weaver. 2002. “How to 0wn the Internet in Your Spare Time”. In Proceedings of the 11th USENIX Security Symposium, edited by Dan Boneh, 149–167. Berkeley, CA: USENIX. http://www.icir.org/vern/papers/cdc-usenix-sec02/.

  • Steunebrink, Bas R., and Jürgen Schmidhuber. 2011. “A Family of Gödel Machine Implementations”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 275–280.

  • Suber, Peter. 2002. “Saving Machines from Themselves: The Ethics of Deep Self-Modification”. Accessed December 31, 2012. http://www.earlham.edu/~peters/writing/selfmod.htm.

  • Sullins, John P. 2005. “Ethics and Artificial life: From Modeling to Moral Agents”. Ethics & Information Technology 7 (3): 139–148. doi:10.1007/s10676-006-0003-5.

  • Tarleton, Nick. 2010. Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics. The Singularity Institute, San Francisco, CA. http://intelligence.org/files/CEV-MachineEthics.pdf.

  • Tenenbaum, Joshua B., Thomas L. Griffiths, and Charles Kemp. 2006. “Theory-Based Bayesian Models of Inductive Learning and Reasoning”. In “Probabilistic Models of Cognition”. Special issue, Trends in Cognitive Sciences 10(7): 309–318. doi:10.1016/j.tics.2006.05.009.

  • Tenenbaum, Joshua B., Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. 2011. “How to Grow a Mind: Statistics, Structure, and Abstraction”. Science 331(6022): 1279–1285.

  • Thomas, Michael S. C., and James L. McClelland. 2008. “Connectionist Models of Cognition”. In The Cambridge Handbook of Computational Psychology, edited by Ron Sun, 23–58. Cambridge Handbooks in Psychology. New York: Cambridge University Press.

  • Trope, Yaacov, and Nira Liberman. 2010. “Construal-level Theory of Psychological Distance”. Psychological Review 117(2): 440–463. doi:10.1037/a0018963.

  • Turney, Peter. 1991. “Controlling Super-Intelligent Machines”. Canadian Artificial Intelligence, July 27, 3–4, 12, 35.

  • Tversky, Amos, and Daniel Kahneman. 1981. “The Framing of Decisions and the Psychology of Choice”. Science 211 (4481): 453–458. doi:10.1126/science.7455683.

  • Van Gelder, Timothy. 1995. “What Might Cognition Be, If Not Computation?” Journal of Philosophy 92(7): 345–381. http://www.jstor.org/stable/2941061.

  • Van Kleef, Gerben A., Astrid C. Homan, Catrin Finkenauer, Seval Gundemir, and Eftychia Stamkou. 2011. “Breaking the Rules to Rise to Power: How Norm Violators Gain Power in the Eyes of Others”. Social Psychological and Personality Science 2(5): 500–507. doi:10.1177/1948550611398416.

  • Van Kleef, Gerben A., Christopher Oveis, Ilmo van der Löwe, Aleksandr LuoKogan, Jennifer Goetz, and Dacher Keltner. 2008. “Power, Distress, and Compassion: Turning a Blind Eye to the Suffering of Others”. Psychological Science 19(12): 1315–1322. doi:10.1111/j.1467-9280.2008.02241.x.

  • Verdoux, Philippe. 2010. “Risk Mysterianism and Cognitive Boosters”. Journal of Futures Studies 15 (1): 1–20. Accessed February 2, 2013. http://www.jfs.tku.edu.tw/15-1/A01.pdf.

  • Verdoux, Philippe. 2011. “Emerging Technologies and the Future of Philosophy”. Metaphilosophy 42(5): 682–707. doi:10.1111/j.1467-9973.2011.01715.x.

  • Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era”. In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. NASA Conference Publication 10129. NASA Lewis Research Center. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855_1994022855.pdf.

  • Walker, Mark. 2008. “Human Extinction and Farsighted Universal Surveillance”. Working Paper, September. Accessed December 31, 2012. http://www.nmsu.edu/~philos/documents/sept-2008-smart-dust-final.doc.

  • Wallach, Wendell. 2010. “Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making”. In “Robot Ethics and Human Ethics,” edited by Anthony Beavers. Special issue, Ethics and Information Technology 12(3): 243–250. doi:10.1007/s10676-010-9232-8.

  • Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press. doi:10.1093/acprof:oso/9780195374049.001.0001.

  • Wallach, Wendell, and Colin Allen. 2012. “Framing Robot Arms Control”. Ethics and Information Technology. doi:10.1007/s10676-012-9303-0.

  • Wang, Pei. 2012. “Motivation Management in AGI Systems”. In Bach, Joscha, Ben Goertzel, and Matthew Iklé, eds. Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8–11, 2012. Proceedings. Lecture Notes in Artificial Intelligence 7716. New York: Springer, 352–361. doi:10.1007/978-3-642-35506-6.

  • Warwick, Kevin. 1998. In the Mind of the Machine: Breakthrough in Artificial Intelligence. London: Arrow.

  • Warwick, Kevin. 2003. “Cyborg Morals, Cyborg Values, Cyborg Ethics”. Ethics and Information Technology 5(3): 131–137. doi:10.1023/B:ETIN.0000006870.65865.cf.

  • Waser, Mark R. 2008. “Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence”. In Biologically Inspired Cognitive Architectures: Papers from the AAAI Fall Symposium, 195–200. Technical Report, FS-08-04. AAAI Press, Menlo Park, CA. http://www.aaai.org/Papers/Symposia/Fall/2008/FS-08-04/FS08-04-049.pdf.

  • Waser, Mark R. 2009. “A Safe Ethical System for Intelligent Machines”. In Biologically Inspired Cognitive Architectures: Papers from the AAAI Fall Symposium, edited by Alexei V. Samsonovich, 194–199. Technical Report, FS-09-01. AAAI Press, Menlo Park, CA. http://aaai.org/ocs/index.php/FSS/FSS09/paper/view/934.

  • Waser, Mark R. 2011. “Rational Universal Benevolence: Simpler, Safer, and Wiser than ‘Friendly AI’”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 153–162.

  • Weld, Daniel, and Oren Etzioni. 1994. “The First Law of Robotics (A Call to Arms)”. In Proceedings of the Twelfth National Conference on Artificial Intelligence, edited by Barbara Hayes-Roth and Richard E. Korf, 1042–1047. Menlo Park, CA: AAAI Press. http://www.aaai.org/Papers/AAAI/1994/AAAI94-160.pdf.

  • Weng, Yueh-Hsuan, Chien-Hsun Chen, and Chuen-Tsai Sun. 2008. “Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics?” In Service Robot Applications, edited by Yoshihiko Takahashi. InTech. doi:10.5772/6057.

  • Weng, Yueh-Hsuan, Chien-Hsun Chen, and Chuen-Tsai Sun. 2009. “Toward the Human–Robot Coexistence Society: On Safety Intelligence for Next Generation Robots”. International Journal of Social Robotics 1(4): 267–282. doi:10.1007/s12369-009-0019-1.

  • Whitby, Blay. 1996. Reflections on Artificial Intelligence: The Legal, Moral, and Ethical Dimensions. Exeter, UK: Intellect Books.

  • Whitby, Blay, and Kane Oliver. 2000. “How to Avoid a Robot Takeover: Political and Ethical Choices in the Design and Introduction of Intelligent Artifacts”. Paper presented at Symposium on Artificial Intelligence, Ethics and (Quasi-) Human Rights at AISB-00, University of Birmingham, England. http://www.sussex.ac.uk/Users/blayw/BlayAISB00.html.

  • Wilson, Grant. 2013. “Minimizing Global Catastrophic and Existential Risks from Emerging Technologies Through International Law”. Virginia Environmental Law Journal 31: 307.

  • Wood, David Murakami, and Kirstie Ball, eds. 2006. A Report on the Surveillance Society: For the Information Commissioner, by the Surveillance Studies Network. Wilmslow, UK: Office of the Information Commissioner, September. http://www.ico.org.uk/about_us/research/~/media/documents/library/Data_Protection/Practical_application/SURVEILLANCE_SOCIETY_SUMMARY_06.ashx.

  • Yampolskiy, Roman V. 2012. “Leakproofing the Singularity: Artificial Intelligence Confinement Problem”. Journal of Consciousness Studies 19(1–2): 194–214. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00014.

  • Yampolskiy, Roman V. 2013. “What to Do with the Singularity Paradox?” Studies in Applied Philosophy, Epistemology and Rational Ethics 5: 397–413. Berlin: Springer.

  • Yampolskiy, Roman V., and Joshua Fox. 2012. “Safety Engineering for Artificial General Intelligence”. Topoi. doi:10.1007/s11245-012-9128-9.

  • Yudkowsky, Eliezer. 2001. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute, San Francisco, CA, June 15. http://intelligence.org/files/CFAI.pdf.

  • Yudkowsky, Eliezer. 2004. Coherent Extrapolated Volition. The Singularity Institute, San Francisco, CA, May. http://intelligence.org/files/CEV.pdf.

  • Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk”. In Bostrom, Nick, and Milan M. Ćirković, eds. Global Catastrophic Risks. New York: Oxford University Press, 308–345.

  • Yudkowsky, Eliezer. 2011. Complex Value Systems are Required to Realize Valuable Futures. The Singularity Institute, San Francisco, CA. http://intelligence.org/files/ComplexValues.pdf.

  • Yudkowsky, Eliezer. 2012. “Reply to Holden on ‘Tool AI”’. Less Wrong (blog), June 12. http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/.

Acknowledgements

Special thanks to Luke Muehlhauser for extensive assistance throughout the writing process. We would also like to thank Abram Demski, Alexei Turchin, Alexey Potapov, Anders Sandberg, Andras Kornai, Anthony Berglas, Aron Vallinder, Ben Goertzel, Ben Noble, Ben Sterrett, Brian Rabkin, Bill Hibbard, Carl Shulman, Dana Scott, Daniel Dewey, David Pearce, Evelyn Mitchell, Evgenij Thorstensen, Frank White, gwern branwen, Harri Valpola, Jaan Tallinn, Jacob Steinhardt, James Babcock, James Miller, Joshua Fox, Louie Helm, Mark Gubrud, Mark Waser, Michael Anissimov, Michael Vassar, Miles Brundage, Moshe Looks, Randal Koene, Robin Hanson, Risto Saarelma, Steve Omohundro, Suzanne Lidström, Steven Kaas, Stuart Armstrong, Tim Freeman, Ted Goertzel, Toni Heinonen, Tony Barrett, Vincent Müller, Vladimir Nesov, Wei Dai, and two anonymous reviewers as well as several users of lesswrong.com for their helpful comments.

Author information

Corresponding author

Correspondence to Roman Yampolskiy.

Copyright information

© 2017 Springer-Verlag GmbH Germany

Cite this chapter

Sotala, K., Yampolskiy, R. (2017). Responses to the Journey to the Singularity. In: Callaghan, V., Miller, J., Yampolskiy, R., Armstrong, S. (eds) The Technological Singularity. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54033-6_3
