One of the best-known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data, and it has complicated the investigation of the role of symbols and language in cognition. Recent experimental findings in artificial deep learning research have radically changed this situation. Two kinds of artificial deep learning networks, namely the convolutional neural network and the generative adversarial network, have been found to build internal states that humans interpret as complex visual categories, without any specific hints or any grammatical processing. This emergent ability suggests that those categories do not depend on human knowledge or the syntactic structure of language, although they do rely on visual context. This supports a mild form of empiricism without assuming the truth of computational functionalism. Some consequences are drawn for the debate about amodal and grounded representations in the human brain. Furthermore, new avenues for research in cognitive science are opened.
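The dissection-style analyses cited below (Bau et al. 2017, 2018) score how well an individual unit's thresholded activation map overlaps a human-annotated concept region, using intersection-over-union. The following is a minimal, self-contained sketch of that scoring idea only, using a hand-built edge "unit" on a toy image rather than a trained network; all names, values, and masks here are illustrative assumptions, not the cited authors' implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a vertical edge: dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hypothetical "unit" whose kernel responds to vertical edges (Sobel-like).
vertical_edge_kernel = np.array([[-1.0, 0.0, 1.0],
                                 [-2.0, 0.0, 2.0],
                                 [-1.0, 0.0, 1.0]])

# Threshold the unit's activation map into a binary detection mask,
# analogously to the thresholding step in network dissection.
activation = conv2d(image, vertical_edge_kernel)
binary_mask = activation > activation.max() / 2

# Hypothetical ground-truth annotation of where the edge concept lies.
concept_mask = np.zeros_like(binary_mask)
concept_mask[:, 1:3] = True

# Intersection-over-union scores how well the unit "detects" the concept.
iou = (np.logical_and(binary_mask, concept_mask).sum()
       / np.logical_or(binary_mask, concept_mask).sum())
print(round(iou, 2))  # prints 1.0: the unit perfectly detects the toy concept
```

In the real analyses, the same overlap score is computed for every unit of a trained network against a large dataset of annotated concepts; units whose score exceeds a threshold are reported as emergent detectors for that concept.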
An online demonstration is available at http://ganpaint.io/.
GANs have been called “the coolest idea in deep learning in the last 20 years” (Metz 2017) by Yann LeCun, recipient of the Turing Award, the computer-science equivalent of the Nobel Prize.
There are artificial deep neural networks that do process sequential linguistic information, such as deep recurrent neural networks (DRNNs), including long short-term memory (LSTM) units, but they are not considered in this work.
Barsalou, L. (2016). On staying grounded and avoiding quixotic dead ends. Psychonomic Bulletin & Review, 23(4), 1122–1142.
Bau, D., Strobelt, H., Peebles, W., Wulff, J., Zhou, B., Zhu, J., et al. (2019). Semantic photo manipulation with a generative image prior. ACM Transactions on Graphics, 38(4), 59.
Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. (2017). Network dissection: Quantifying interpretability of deep visual representations. arXiv:1704.05796.
Bau, D., Zhu, J. Y., Strobelt, H., Zhou, B., Tenenbaum, J. B., Freeman, W. T., & Torralba, A. (2018). GAN dissection: Visualizing and understanding generative adversarial networks. arXiv:1811.10597.
Bechtel, W. (1993). The case for connectionism. Philosophical Studies, 71(2), 119–154.
Benitez, J. M., Castro, J. L., & Requena, I. (1997). Are artificial neural networks black boxes? IEEE Transactions on Neural Networks, 8(5), 1156–1164.
Berkeley, I. S. N. (2019). The curious case of connectionism. Open Philosophy, 2(1), 190–205.
Brainerd, C. J., & Reyna, V. F. (2002). Fuzzy-trace theory and false memory. Current Directions in Psychological Science, 11(5), 164–169.
Buckner, C. (2015). A property cluster theory of cognition. Philosophical Psychology, 28(3), 307–336.
Buckner, C. (2018). Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195(12), 5339–5372.
Buckner, C. (2019). Deep learning: A philosophical introduction. Philosophy Compass. https://doi.org/10.1111/phc3.12625.
Buckner, C., & Garson, J. (2019). Connectionism and post-connectionist models. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind (pp. 76–90). New York: Routledge.
Butz, M. V., & Kutter, E. F. (2017). How the mind comes into being. Oxford: Oxford University Press.
Cantwell Smith, B. (2019). The promise of artificial intelligence. Cambridge: The MIT Press.
Carabantes, M. (2019). Black-box artificial intelligence: An epistemological and critical analysis. AI & Society. https://doi.org/10.1007/s00146-019-00888-w.
Chin-Parker, S., & Ross, B. H. (2002). The effect of category learning on sensitivity to within-category correlations. Memory & Cognition, 30(3), 353–362.
Chollet, F. (2018). Deep learning with Python. Shelter Island, NY: Manning.
Clark, A. (1993). Associative engines: Connectionism, concepts, and representational change. Cambridge, MA: The MIT Press.
Creel, K. A. (2020). Transparency in complex computational systems. Philosophy of Science. https://doi.org/10.1086/709729.
Dawson, M. R. W. (2004). Minds and machines: Connectionism and psychological modeling. Malden, MA: Blackwell Publishing.
Dove, G. (2009). Beyond perceptual symbols: A call for representational pluralism. Cognition, 110(3), 412–431.
Dove, G. (2016). Three symbol ungrounding problems: Abstract concepts and the future of embodied cognition. Psychonomic Bulletin & Review, 23(4), 1109–1121.
Elber-Dorozko, L., & Shagrir, O. (2019). Computation and levels in the cognitive and neural sciences. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind (pp. 205–222). New York: Routledge.
Foster, D. (2019). Generative deep learning. Sebastopol, CA: O’Reilly.
Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge, MA: The MIT Press.
Gärdenfors, P. (2014). The geometry of meaning: Semantics based on conceptual spaces. Cambridge, MA: The MIT Press.
Gomes, L. (2014). Machine-learning maestro Michael Jordan on the delusions of Big Data and other huge engineering efforts. In IEEE spectrum, October 20, 2014.
Gonzalez-Garcia, A., Modolo, D., & Ferrari, V. (2018). Do semantic parts emerge in convolutional neural networks? International Journal of Computer Vision, 126(5), 476–494.
Gulli, A., & Pal, S. (2017). Deep learning with Keras. Birmingham: Packt Publishing Ltd.
He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE international conference on computer vision (ICCV) (pp. 1026–1034). IEEE Computer Society.
Horgan, T., & Tienson, J. (1996). Connectionism and the philosophy of psychology. Cambridge, MA: The MIT Press.
Kaipainen, M., & Hautamäki, A. (2015). A perspectivist approach to conceptual spaces. In F. Zenker & P. Gärdenfors (Eds.), Applications of conceptual spaces: The case for geometric knowledge representation (pp. 245–258). Cham: Springer.
Ketkar, N. (2017). Deep learning with Python. New York: Apress.
Kiefer, M., & Pulvermüller, F. (2012). Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. Cortex, 48(7), 805–825.
Langr, J., & Bok, V. (2019). GANs in action. Shelter Island, NY: Manning.
Leshinskaya, A., & Caramazza, A. (2016). For a cognitive neuroscience of concepts: Moving beyond the grounding issue. Psychonomic Bulletin & Review, 23(4), 991–1001.
Machery, E. (2005). Concepts are not a natural kind. Philosophy of Science, 72(3), 444–467.
Machery, E. (2007). Concept empiricism: A methodological critique. Cognition, 104(1), 19–46.
Machery, E. (2016). The amodal brain and the offloading hypothesis. Psychonomic Bulletin & Review, 23(4), 1090–1095.
Mahon, B., & Caramazza, A. (2009). Concepts and categories: A cognitive neuropsychological perspective. Annual Review of Psychology, 60, 27–51.
Marcus, G. (2018). Innateness, AlphaZero, and artificial intelligence. arXiv:1801.05667.
Mayor, J., Gomez, P., Chang, F., & Lupyan, G. (2014). Connectionism coming of age: Legacy and future challenges. Frontiers in Psychology, 5, 187.
Metz, C. (2017). Google’s dueling neural networks spar to get smarter, no humans required. https://www.wired.com/2017/04/googles-dueling-neural-networks-spar-get-smarter-no-humans-required/.
Neudeck, P., & Wittchen, H. U. (2012). Introduction: Rethinking the model-refining the method. In P. Neudeck & H. U. Wittchen (Eds.), Exposure therapy (pp. 1–8). New York, NY: Springer.
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113–126.
Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. New York: Basic Books.
Piccinini, G., & Bahar, S. (2013). Neural computation and the computational theory of cognition. Cognitive Science, 37(3), 453–488.
Piccinini, G., & Scarantino, A. (2011). Information processing, computation, and cognition. Journal of Biological Physics, 37(1), 1–38.
Qin, Z., Yu, F., Liu, C., & Chen, X. (2018). How convolutional neural networks see the world—A survey of convolutional neural network visualization methods. Mathematical Foundations of Computing, 1(2), 149–180.
Ross, B. (2001). Natural concepts, psychology of. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 10380–10384). Oxford: Pergamon. https://doi.org/10.1016/B0-08-043076-7/01489-3.
Salisbury, J., & Schneider, S. (2019). Concepts, symbols and computation. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind (pp. 310–322). New York: Routledge.
Shum, J., Hermes, D., Foster, B. L., Dastjerdi, M., Rangarajan, V., Winawer, J., et al. (2013). A brain area for visual numerals. Journal of Neuroscience, 33(16), 6709–6715.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550, 354–359.
Stephan, A. (2006). The dual role of ‘emergence’ in the philosophy of mind and in cognitive science. Synthese, 151(3), 485.
Tu, J. (1996). Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. Journal of Clinical Epidemiology, 49(11), 1225–1231.
Vondrick, C., Pirsiavash, H., & Torralba, A. (2016). Generating videos with scene dynamics. arXiv:1609.02612.
Wajnerman Paz, A. (2018). An efficient coding approach to the debate on grounded cognition. Synthese, 195(12), 5245–5269.
Zhang, Q. S., & Zhu, S. C. (2018). Visual interpretability for deep learning: A survey. Frontiers of Information Technology & Electronic Engineering, 19(1), 27–39.
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2014). Object detectors emerge in deep scene CNNs. arXiv:1412.6856.
The author is deeply indebted to Dr. Rosa María Ruiz-Domínguez (Universidad de Málaga, Málaga, Spain), for her insightful comments about the connections of this work with psychology and cognitive science. He is also grateful to David Teira (Universidad Nacional de Educación a Distancia, Madrid, Spain) and Emanuele Ratti (University of Notre Dame, Notre Dame, Indiana, USA) for their valuable comments. Finally, he would like to thank the anonymous reviewers for their constructive suggestions, which have greatly improved the original manuscript.
López-Rubio, E. Throwing light on black boxes: emergence of visual categories from deep learning. Synthese (2020). https://doi.org/10.1007/s11229-020-02700-5
Keywords
- Deep learning
- Visual categories
- Machine learning
- Cognitive science