
The Slowdown Hypothesis

Singularity Hypotheses

Part of the book series: The Frontiers Collection ((FRONTCOLL))

Abstract

The so-called singularity hypothesis embraces the most ambitious goal of Artificial Intelligence: the possibility of constructing human-like intelligent systems. The intriguing addition is that once this goal is achieved, it would not be too difficult to surpass human intelligence. While we believe that none of the philosophical objections against strong AI are really compelling, we are skeptical about a singularity scenario associated with the achievement of human-like systems. Several reflections on the recent history of neuroscience and AI, in fact, seem to suggest that the trend is going in the opposite direction.



Author information

Corresponding author

Correspondence to Alessio Plebe.


Eliezer Yudkowsky on Plebe and Perconti’s “The Slowdown Hypothesis”

The hypothesis presented, of a curve of diminishing returns of optimization power in versus intelligence out, is incompatible with the historical case of natural selection, in which it did not take a hundred times as long to go from Australopithecus to humans as it did to go from the first brains to Australopithecus, but rather the reverse. Many people have postulated logarithmic returns or other such diminishing returns to intelligence. They are easy to postulate.

It is much harder to make them fit the observed facts of either the evolution of intelligence (for talk about diminishing returns to brain size, genome size, or optimization pressure on the brain) or the history of technology (for talk about diminishing returns to knowledge or intelligence). Specifically exponential theories of progress are probably wrong, of course; Moore's Law has already broken down. But the historical cases we have observed are of roughly constant input processes producing increasing (though not always exponential!) outputs. Constant evolutionary pressure has produced, not exponential, but increasing outputs from hominid intelligence. A fourfold increase in hominid brain size has not produced exponential returns, but to characterize the resulting returns as sublinear seems rather odd. In a nuclear pile, neutron multiplication factors are strictly linear (each neutron giving rise to, say, 1.0006 output neutrons on average), and the resulting neutrons sparking further fissions would produce an exponential meltdown if not for external braking processes such as cadmium rods. Recursively self-improving intelligence is a novel phenomenon in which AI intelligence in is a direct function of AI intelligence out, rather than the AI intelligence being produced by a constant external optimization pressure such as human programmers. For it to fail to go FOOM once a threshold level of intelligence is reached, we would need all these observed curves to exhibit a sudden sharp turnaround the moment they pass the level of human intelligence, and to produce extremely sharply diminishing curves of intelligence out versus optimization power in.
Simply put, nobody has ever devised a realistic model of optimization power in versus optimization power out which both accounts for the observed curves of hominid history and human technology and fails to exhibit an intelligence explosion once intelligences are designing new intelligences and a feedback loop runs from design intelligence to output intelligence. In fact, nobody has ever tried to develop such a model. All attempts to postulate the absence of an intelligence explosion have instead made up models of one of two kinds. The first kind completely ignores the new feedback loop and simply projects normal economic growth into the indefinite future, without considering that AIs creating AIs might be qualitatively different from a world of humans making external gadgets without tinkering with brain designs. The second kind ignores the observed parameters of evolutionary history and technological history in favor of plausible-sounding mathematical models constructed in isolation, models which would have vastly mispredicted the observed course of history over the last ten million or ten thousand years, predicting diminishing returns where increasing ones were in fact observed. This paper falls into the second class.
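The contrast drawn above, between a constant external optimization pressure and a feedback loop in which output intelligence becomes the next round's input, can be sketched numerically. The following is a minimal illustrative sketch, not a model from either the chapter or the commentary; the function names and all the numbers (including the gain per step and the multiplication factor, borrowed from the neutron-pile analogy) are invented for illustration.

```python
def constant_pressure(steps, gain_per_step=1.0):
    """Capability under a fixed external input (e.g. human programmers
    improving an external gadget): grows roughly linearly."""
    capability = 1.0
    for _ in range(steps):
        capability += gain_per_step  # same increment every round
    return capability

def self_improvement(steps, k=1.0006):
    """Capability when each round's output multiplies the next round's
    input, like a neutron multiplication factor k > 1: grows
    exponentially, however slightly k exceeds 1."""
    capability = 1.0
    for _ in range(steps):
        capability *= k  # output feeds back as input
    return capability
```

Even with k barely above 1, enough rounds produce runaway growth: after 10,000 rounds `self_improvement` exceeds 400-fold, and doubles again roughly every 1,150 further rounds, while `constant_pressure` has simply accumulated 10,000 unit increments. The qualitative difference lies in the feedback term, not in the size of k.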


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Plebe, A., Perconti, P. (2012). The Slowdown Hypothesis. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_17

  • DOI: https://doi.org/10.1007/978-3-642-32560-1_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1

  • eBook Packages: Engineering, Engineering (R0)
