Abstract
A number of research projects have recently taken up the challenge of formulating large-scale models of brain mechanisms at unprecedented levels of detail. These enterprises have sparked lively debates in the press and in the scientific and philosophical literature, some of them revolving around the question of whether incorporating so many details in a theoretical model, and in a computer simulation of it, is really needed for the model to be explanatory. Is there a “right” level of detail? In this article I analyse the claim, made by two leading neuroscientists, that the content of the why-question addressed and the amount of computational resources available constrain the choice of the most appropriate level of detail in brain modelling. Drawing on the recent philosophical literature on (neuro)scientific explanation, I distinguish between two kinds of details, called here mechanistic decomposition details and property details, and argue that the nature of the why-question provides only partial constraints on the choice of the most appropriate level of detail under the two interpretations of the term considered here.
Notes
- 1.
Mechanisms are often said to be more or less “simple” (or “complex”) in the cognitive science and neuroscience literature depending on the number of their components and on the number and nature of the connections among those components. The adjectives “simple” and “complex” are also often used to qualify behaviours, a complex behaviour being one which is relatively difficult to predict without computational instruments. Providing a precise definition of these terms is beyond the scope of this article: here they are used in the common-sense interpretation just sketched, merely to introduce the subject of the paper. In the following pages they will be abandoned, and mechanisms will be said to be more or less detailed according to a more precisely defined notion of “level of detail”.
- 2.
http://spectrum.ieee.org/tech-talk/semiconductors/devices/blue-brain-project-leader-angry-about-cat-brain (last visited on September 14, 2016).
- 3.
Several accounts of the formal semantics of why-questions can be found in the philosophical literature, most notably in (Bromberger 1966; Van Fraassen 1980; Hintikka and Halonen 1995). In what follows, I assume that the distinction between same-level and inter-level questions made here is compatible with all these accounts. Examples of why-questions are provided below in the text.
- 4.
This argument would need to be refined on the basis of a formal account of the notion of “mechanistic decomposition level”, which is beyond the scope of this paper. Note that why-questions can be classified as same-level or inter-level only relative to a particular mechanistic decomposition hierarchy: no why-question is “intrinsically” same-level or inter-level.
References
Ananthanarayanan, R., S.K. Esser, H.D. Simon, and D.S. Modha. 2009. The cat is out of the bag: Cortical simulations with 10⁹ neurons, 10¹³ synapses. In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, 1–12. https://doi.org/10.1145/1654059.1654124.
Braitenberg, V. 1986. Vehicles. Experiments in synthetic psychology. Cambridge, MA: The MIT Press.
Bromberger, S. 1966. Why-questions. In Mind and Cosmos: Essays in contemporary science and philosophy, ed. R. Colodny, 68–111. Pittsburgh: University of Pittsburgh Press.
Brooks, R.A. 1991. New approaches to robotics. Science 253 (5025): 1227–1232. https://doi.org/10.1126/science.253.5025.1227.
Cordeschi, R. 2002. The discovery of the artificial. Behavior, mind and machines before and beyond cybernetics. Dordrecht: Springer. https://doi.org/10.1007/978-94-015-9870-5.
Craver, C.F. 2002. Interlevel experiments and multilevel mechanisms in the neuroscience of memory. Philosophy of Science 69 (S3): S83–S97.
Craver, C. 2007. Explaining the brain: Mechanisms and the mosaic unity of neuroscience. New York: Clarendon Press.
Cummins, R. 1975. Functional analysis. Journal of Philosophy 72 (20): 741–765.
Datteri, E., and F. Laudisa. 2014. Box-and-arrow explanations need not be more abstract than neuroscientific mechanism descriptions. Frontiers in Psychology 5: 1–10. https://doi.org/10.3389/fpsyg.2014.00464.
———. 2016. Large-scale simulations of brain mechanisms: Beyond the synthetic method. Paradigmi 3: 23–46. https://doi.org/10.3280/PARA2015-003003.
Eliasmith, C., and O. Trujillo. 2014. The use and abuse of large-scale brain models. Current Opinion in Neurobiology 25: 1–6. https://doi.org/10.1016/j.conb.2013.09.009.
Eliasmith, C., T.C. Stewart, X. Choo, T. Bekolay, T. DeWolf, C. Tang, and D. Rasmussen. 2012. A large-scale model of the functioning brain. Science 338 (6111): 1202–1205. https://doi.org/10.1126/science.1225266.
Glennan, S. 2002. Rethinking mechanistic explanation. Philosophy of Science 69 (S3): S342–S353. https://doi.org/10.1086/341857.
Grey Walter, W. 1950. An imitation of life. Scientific American 182 (5): 42–45.
Grillner, S., N. Ip, C. Koch, W. Koroshetz, H. Okano, M. Polachek, and M. Poo. 2016. Worldwide initiatives to advance brain research. Nature Neuroscience 19 (9): 1118–1122. https://doi.org/10.1038/nn.4371.
Hintikka, J., and I. Halonen. 1995. Semantics and pragmatics for why-questions. Journal of Philosophy 92 (12): 636–657.
Komer, B., and C. Eliasmith. 2016. A unified theoretical approach for biological cognition and learning. Current Opinion in Behavioral Sciences 11: 14–20. https://doi.org/10.1016/j.cobeha.2016.03.006.
Levy, A., and W. Bechtel. 2013. Abstraction and the organization of mechanisms. Philosophy of Science 80: 241–261. https://doi.org/10.1086/670300.
Markram, H. 2006. The blue brain project. Nature Reviews. Neuroscience 7 (2): 153–160. https://doi.org/10.1038/nrn1848.
Markram, H., K. Meier, T. Lippert, S. Grillner, R. Frackowiak, S. Dehaene, A. Knoll, H. Sompolinsky, K. Verstreken, J. DeFelipe, S. Grant, J.P. Changeux, and A. Sariam. 2011. Introducing the human brain project. Procedia Computer Science 7: 39–42. https://doi.org/10.1016/j.procs.2011.12.015.
Miłkowski, M. 2015. Explanatory completeness and idealization in large brain simulations: A mechanistic perspective. Synthese 193: 1457–1478. https://doi.org/10.1007/s11229-015-0731-3.
Pfeifer, R., and J. Bongard. 2006. How the body shapes the way we think. A new view of intelligence. Cambridge, MA: The MIT Press.
Pfeifer, R., and C. Scheier. 1999. Understanding Intelligence. Cambridge, MA: The MIT Press.
Piccinini, G., and C. Craver. 2011. Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese 183: 283–311.
Rosenblueth, A., and N. Wiener. 1945. The role of models in science. Philosophy of Science 12 (4): 316–321. https://doi.org/10.1086/286874.
Simon, H.A. 1996. The sciences of the artificial. Cambridge, MA: The MIT Press.
Tamburrini, G., and E. Datteri. 2005. Machine experiments and theoretical modelling: From cybernetic methodology to neuro-robotics. Minds and Machines 15 (3–4): 335–358. https://doi.org/10.1007/s11023-005-2924-x.
Van Fraassen, B. 1980. The scientific image. Oxford: Clarendon Press.
Woodward, J. 2002. What is a mechanism? A counterfactual account. Philosophy of Science 69 (S3): S366–S377.
© 2019 Springer Nature Switzerland AG
Datteri, E. (2019). Large-Scale Simulations of the Brain: Is There a “Right” Level of Detail?. In: Berkich, D., d'Alfonso, M. (eds) On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Philosophical Studies Series, vol 134. Springer, Cham. https://doi.org/10.1007/978-3-030-01800-9_11
Print ISBN: 978-3-030-01799-6
Online ISBN: 978-3-030-01800-9