
Human Uniqueness

Chapter in The Human Being, the World and God

Abstract

I have already mentioned that the human species differs from other animal species because of its advanced neocortex, which enables humans to create language, art, myth and culture, and to make plans and form a self-image as well as a worldview. I also mentioned differences in DNA between the species. Humans and chimpanzees share 98.8% of their DNA. Nevertheless, the species are very different in several ways. This is because two identical stretches of DNA can work very differently. Humans and chimpanzees use their genes in different ways, and even when the same genes are expressed in the same brain area in both species, they are expressed in different amounts. Hence, already on a biological, neurological level, humans can be said to be unique, if by unique we mean that they have features no other animal species has.


Notes

  1. Empedocles, see Audi 1999: 261–262.

  2. The Allee effect is a biological theory stating that there is a positive correlation between population density and the per capita population growth rate in very small populations. Simply put, in very large populations the reproduction and survival rates of individuals decrease as population density increases, owing to intra-specific competition; in very small populations, by contrast, it is lower population density that slows the growth rate of the population.
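     A common textbook way of formalizing this, added here only as an illustrative sketch (the symbols r, K and A below are standard in population ecology and are not taken from the chapter), modifies logistic growth with an Allee threshold A:

     $$ \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)\left(\frac{N}{A} - 1\right), \qquad 0 < A < K, $$

     so that per capita growth increases with density while the population is small, and the population declines whenever N falls below the threshold A.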

  3. Biological altruism is found in the sterile workers of the social insect colonies (ants, wasps, bees and termites). These workers devote their whole lives to the queen, constructing and protecting the nest, gathering food, etc. According to Samir Okasha, such behavior is the most altruistic (though biological), because the workers, since they do not leave any offspring of their own, have zero personal fitness. They devote their whole lives to increasing the reproductive possibilities of the queen (Okasha 2008).

  4. It has to be mentioned that, while they were trained to distinguish high from low sounds, which means that stimuli were involved in training them, the escape paddle itself was not under any stimulus control.

  5. For those who are not familiar with analytic philosophy: an argument is valid if it is deductively valid, that is, if the premises necessarily imply the conclusion. A valid argument, however, does not need to have true premises. An argument is sound if it is valid, noncircular and contains only true premises. If an argument is sound, the conclusion is not only necessarily implied by the premises but also true.
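     As a minimal illustration of the distinction (my own example, not from the chapter), both arguments below share a deductively valid form, but only the second is sound, because only its premises are all true:

     $$ \textbf{A (valid, unsound):}\quad \frac{\text{All fish are mammals} \qquad \text{Dolphins are fish}}{\text{Dolphins are mammals}} $$

     $$ \textbf{B (valid, sound):}\quad \frac{\text{All mammals are animals} \qquad \text{Dolphins are mammals}}{\text{Dolphins are animals}} $$

     Note that argument A even has a true conclusion; soundness concerns the truth of the premises together with validity, not merely the truth of the conclusion.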

  6. The term co-evolution was coined by Paul Ehrlich and Peter Raven in 1964 in order to describe the evolutionary relationship between butterflies and plants. Caterpillars feed on plants, and the plants in turn evolve chemical defenses to mitigate the damage of insect attack, which in turn leads to the evolution of detoxification mechanisms in caterpillars. Since then, its meaning has been extended to any case in which two distinct evolutionary systems interact in interesting ways (Richerson and Boyd 2006: 276).

  7. See also Runehov 2008.

  8. It is not my intention to dig deeper into the philosophical debate concerning intelligence. The point I want to make is that whether AI can be regarded as having intelligence depends on how intelligence is understood.

  9. On androids, see http://roboticsandtechnology.blogspot.se/p/humanoid-robot.html. Accessed 30 Sept 2015.

References

  • Allee, Walter Clyde. 1931. Co-operation among animals. The American Journal of Sociology 37(3): 386–398.

  • Arkin, R.C. 2005. Moving up the food chain: Motivation and emotion in behaviour-based robots. In Who needs emotions? The brain meets the robot, eds. J. Fellous and M. Arbib. Oxford: Oxford University Press. http://www.cc.gatech.edu/ai/robot-lab/online-publications/moral-final2.pdf

  • Audi, Robert (ed.). 1999. The Cambridge dictionary of philosophy, 2nd ed. Cambridge/New York/Melbourne/Madrid: Cambridge University Press.

  • Bowlby, John. 1951. Maternal care and mental health. Bulletin of the World Health Organization. Geneva: WHO.

  • Browne, Derek. 2004. Do dolphins know their own minds? Biology and Philosophy 19: 633–653.

  • Byrne, Richard, and Andrew Whiten (eds.). 1988. Machiavellian intelligence: Social expertise and the evolution of intellect in monkeys, apes and humans. Oxford: Oxford University Press.

  • Dennett, Daniel. 1987. Intentional systems in cognitive ethology. In The intentional stance. Cambridge: MIT Press.

  • Dretske, Fred I. 1999. Machines, plants and animals: The origin of agency. Erkenntnis 51: 19–31.

  • Espinas, Alfred Victor. 1878. Des sociétés animales. Paris: Germer Baillière.

  • Green, Joel B. 2008. Body, soul, and human life. Grand Rapids: Baker Academic.

  • Ishiguro, Hiroshi. 2005. Android science: Toward a new cross-interdisciplinary framework. Cognitive Science Society 25–26 (July): 1–6.

  • MacFarlane, Alistair. 2013. Information, knowledge & intelligence. Philosophy Now: A Magazine of Ideas. https://philosophynow.org/issues/98/Information_Knowledge_and_Intelligence. Accessed 1 Feb 2016.

  • McCarthy, John. 1979. Ascribing mental qualities to machines. In Philosophical perspectives in artificial intelligence, ed. M. Ringle. Atlantic Highlands, New Jersey: Humanities Press.

  • McCarthy, John, and Patrick J. Hayes. 1969. Some philosophical problems from the standpoint of artificial intelligence. http://www-formal.stanford.edu/jmc/mcchay69.pdf, 1–51. Accessed 1 Feb 2016.

  • Okasha, Samir. 2008. Biological altruism. Stanford Encyclopedia of Philosophy. http://stanford.library.usyd.edu.au/archives/spr2008/entries/altruism-biological/. Accessed 26 Jan 2016.

  • Pryor, Karen, and Ingrid Kang Shallenberger. 1998. Social structure in spotted dolphins (Stenella attenuata) in the tuna purse seine fishery in the Eastern Tropical Pacific. In Dolphin societies: Discoveries and puzzles, eds. Karen Pryor and Kenneth S. Norris. Berkeley/Los Angeles: University of California Press.

  • Puddefoot, John. 1996. God and the mind machine: Computers, artificial intelligence and the human soul. London: SPCK.

  • Reiss, Diana. 2011. The dolphin in the mirror: Exploring dolphin minds and saving dolphin lives. Boston/New York: Houghton Mifflin Harcourt.

  • Reiss, Diana, and Lori Marino. 2001. Mirror self-recognition in the bottlenose dolphin: A case of cognitive convergence. PNAS 98(10): 5937–5942.

  • Richerson, Peter J., and Robert Boyd. 2006. Not by genes alone: How culture transformed human evolution. Chicago/London: The University of Chicago Press.

  • Ristau, Carolyn A. (ed.). 1991. Cognitive ethology: The minds of other animals. Volume compiled in honor of Donald R. Griffin. Hillsdale: Lawrence Erlbaum.

  • Rubin, Charles T. 2003. Artificial intelligence and human nature. The New Atlantis: A Journal of Technology & Society 1: 88–100.

  • Runehov, Anne L.C. 2012a. The uniqueness of human social ontology. Pensamiento: Ciencia, Filosofía y Religión 67(254), serie especial no. 5: 708–721. Madrid: Universidad Pontificia Comillas.

  • Searle, John R. 1980. Minds, brains and programs. Behavioral and Brain Sciences 3: 417–424.

  • Searle, John R. 1983. Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.

  • Searle, John R. 1990. Is the brain's mind a computer program? Scientific American 262(1): 26–31.

  • Searle, John R. 2008. Philosophy in a new century: Selected essays. Cambridge: Cambridge University Press.

  • Searle, John R. 2010. Making the social world: The structure of human civilization. Oxford/New York: Oxford University Press.

  • Smith, J.D., J. Schull, J. Strote, K. McGee, R. Egnor, and L. Erb. 1995. The uncertain response in the bottlenosed dolphin (Tursiops truncatus). Journal of Experimental Psychology: General 124: 391–408.

  • Tomasello, Michael, M. Carpenter, J. Call, T. Behne, and H. Moll. 2005. Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences 28: 675–735.

  • Tummolini, Luca, Cristiano Castelfranchi, and Hannes Rakoczy. 2006. Pretend play and the development of collective intentionality. Cognitive Systems Research 7: 113–127.

  • Tuomela, Raimo. 2013. Social ontology: Collective intentionality and group agents. Oxford/New York: Oxford University Press.

  • Turing, Alan M. 1990. Computing machinery and intelligence. In The philosophy of artificial intelligence, ed. Margaret Boden, 40–66. Oxford: Oxford University Press.

  • Wheeler, William Morton. 1928. Emergent evolution and the development of societies. New York: W.W. Norton & Company.

  • Whiten, Andrew. 2000. Chimpanzee cognition and the question of mental re-representation. In Metarepresentations: A multidisciplinary perspective, ed. Dan Sperber. Oxford: Oxford University Press.

  • Winograd, Terry. 1991. Thinking machines: Can there be? Are we? In The boundaries of humanity: Humans, animals, machines, eds. J. Sheehan and M. Sosna, 198–223. Berkeley: University of California Press.


Conclusion Part I

The first part of this book concerned what human being is about. Obviously, what is meant by the thorny concept ‘self’ needed to be considered, as did the question of what we mean by a human being. In order to answer this question, I investigated whether and in what sense human beings are unique, and what the sine qua non of human being would be.

I argued for the view that the mind and the brain are two sides of the same coin called human being, that there is mutual causation that influences each side of the coin in a real sense, and that there is some kind of overlap between the mental and the physical. I suggested two elements of being, namely ens (a being) and esse (being), because human beings are not only advanced neocortices and DNA (ens); they are mothers and fathers, colleagues and friends; they love and hate, and strive for better life conditions. They paint, write poetry, create music, build monuments and advanced houses, build societies and institutions. And many are religious. They are also ‘esse’.

Humans experience having some kind of a self, some kind of an ‘I’ that ‘feels’ like a deeper or elevated part of the mind and body. I argued that the self is a complex phenomenon that refers to our experiences, our relations with and understanding of ourselves, the world and, for many people, God or ultimate reality, our feelings and emotions, our attitudes and behavior, our thinking, dreaming, and our uniqueness. Therefore we expect the underlying and correlating neural activity to be complex as well, if it is to sustain the self in all its expressions. This was confirmed by neuroscientific studies of the self, which describe it as complex and embodied within the entire hierarchical system, across multiple hierarchical neural levels. I suggested that what we call the self is a phenomenon of strong emergence. I argued that there is but one process which, being of a specific complexity, has the quality of a self. I argued that mental phenomena do play a causal role (e.g. piano players enhancing their brains). I also argued for the view that mental states, like physical states, have their legitimate place in space and time; a mental state is real and is itself a multiple-level hierarchical property.

By looking at neuroscientific studies, I suggested an emergent self (ES), containing the objective neural self (ONS) and the subjective neural self (SNS), between which there is mutual causation: ES(ONS ↔ SNS). By looking at clinical and experimental neuroscientific studies, I could establish a third parameter of the emergent self, which I called the Subjective Transcendent Self (STS). The three parameters of the self being in place, I postulated a (strong) emergent three-fold self (ES) that comprises an Objective Neural Self (ONS), a Subjective Neural Self (SNS) and a Subjective Transcendent Self (STS). The relationship between the three elements is as follows:

$$ \mathrm{ES}\big(\mathrm{ONS} \leftrightarrow (\mathrm{SNS} \cap \mathrm{STS})\big); \qquad \mathrm{STS} > (\mathrm{ONS} \cap \mathrm{SNS}). $$

The function of the ONS is to neurologically sustain the subjective selves. The function of the SNS, then, is to express the neural self. Finally, the task of the STS is to be the essential observing subjective self, transcending the former two. The STS, then, is seen as the part of the self that always was and always is itself, reducible to neither the neural self nor the subjective neural self. By way of mutual causation, the three elements of the self trigger an emergent process of the whole self (ES).

In search of the sine qua non of human being (esse), I suggested that the sine qua non of human being has to be to experience. Human beings cannot not experience. By human experiences I meant all experiences a human being is able to have. Since a human being is related to the world, I designated a human experience as an experience of things and events whose existence ultimately is constituted of properties and relations, actions and interactions of whatever the physical and psychological worlds treat of. Experiences are what constitute our reality. Experiences are always absolute; they are unique and individual. Experiences are real, something I tried to show by making a distinction between the terms concept, conception and conceiving. That experiences are real does not mean that the reality of our experiences, i.e. how the world appears to us, is how the world really is. The reality of the experiences, so it seems, is not (always) as we conceive of it. Experiences have consequences: whatever behavior comes from whatever experience, it will bring about new experiences for us and for others, shaping new behaviors that in turn shape new experiences. Human beings, and other animals too, cannot exist without experiences. One difference between human beings and other animals is that humans have the ability to ‘analyze’ their experiences and understand why they experience. They understand that others also experience, and they compare their experiences with those of others. Obviously, this raises ontological and epistemological questions.

I argued that all experiences are subjectively ontologically real. Experiences cannot always be epistemologically justified but can be non-epistemologically justified in the sense of being ex-past justified. I argued that the human species differs from other animal species because of its advanced neocortex and differences in DNA compared to other species of the animal kingdom.

Concerning the uniqueness of human esse, I compared human and non-human sociology. The criteria I suggested for being a social animal in the true sense are, firstly, to possess a self-identity as well as a group-identity and a self-identity within a group; secondly, to understand intentionality as well as collective intentionality, something which requires possessing a Theory of Mind; and thirdly, that the members of a group be committed to the joint action.

I suggested three types of animals with respect to self- and group identity. The first group consists of animals that are part of a social group but are part of it as individuals. They may have their specific places within the group. It was suggested that these animals have a sense of ‘I’ that has a specific task within a ‘we’. These animals were said to be more self-centered than group-centered. Most mammals belong to this group. The second group consists of animals having a group-identity but little or no self-identity. These animals are group-centered rather than self-centered. Ants belong to this group of animals. The third group consists of animals that have both self- and group identity. They are able to think in an I-we-mode as well as in a we-mode that includes the I-mode. Some mammals (the great apes and dolphins) and humans belong to this third group.

Concerning intentionality, it was shown that there is at least some evidence that some non-human animals possess self and group identity and that they may understand intentionality. In order to possess collective intentionality, it was argued, the following criteria are necessary: mutual responsiveness, a form of coordination between the members, sensitivity to each other’s behavior, understanding of each other’s action as a specific intentional action to which one is able to respond, and commitment to the joint action. Several experiments showed that some non-human animals fulfill the criteria for possessing self and group identity, intentionality and collective intentionality. Still, I argued that there is something unique to human social being. Humans are not only social animals; they are also cultural or institutional animals. Humans not only live in societies, they create their societies; they create culture and institutions. Only human social ontology includes deontic powers, which have their function not on the basis of physical appearance but on the basis of collective acceptance. Human beings are creators.

One such creation I investigated was the android, and how ‘human’ androids are or could become. I proposed that even though androids possess intelligence and social abilities, they do not possess emotional intelligence. While a human ens can be copied, human esse cannot entirely be copied, at least not yet. Androids are entirely externally oriented, while humans are both externally and internally oriented.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Runehov, A.L.C. (2016). Human Uniqueness. In: The Human Being, the World and God. Springer, Cham. https://doi.org/10.1007/978-3-319-44392-8_3
