Introduction

Scientific and philosophical studies of consciousness have repeatedly put forward the idea that consciousness is a form of information processing (Aleksander and Gamez 2011; Baars 1988; Earl 2014; Fingelkurts and Fingelkurts 2017; Jonkisz 2015, 2016; Tononi 2008, 2012; Tononi and Koch 2015). Focusing mainly on human consciousness, I elaborate on this idea, trying partly to integrate the explanations of consciousness offered by some of these scholars, and partly to overcome what I consider their shortcomings.

I will argue that consciousness is a special way of processing information. It is special, firstly, because it allows not only for the transmission of information but also, and above all, for the production of information. Secondly, because the information it produces is meaningful for the person who consciously experiences it: that person knowsFootnote 1 what it means for him.Footnote 2 Thirdly, because the information it produces is “individuated” (Jonkisz 2015), in the sense that it has “that” meaning only for the person experiencing it, and not for other people: for example, I know what it means for me to experience “fear,” but another person cannot directly know what it means for me to experience “fear” (and vice versa).

This view of consciousness allows me to put forward a biologically inspired proposal of what the basic cognitive processes underlying conscious experience are, namely the self, attention and working memory.

My argument that consciousness is a special way of processing information is based on various kinds of considerations, which I mainly provide in “Which theory of consciousness?” and “The information provided by conscious experience” sections (for the sake of simplicity, I will abbreviate “the information provided by conscious experience” to CI).

“Which theory of consciousness?” section shows that, among the theories proposing that consciousness is information, only those that take into consideration the individuated way in which information is processed by a person’s brain can offer an account of consciousness that is fully consistent with the constraints posed by the working and needs of biological systems, and with what we currently phenomenally experience. “Which theory of information?” section provides a complementary overview of which theory of information can adequately deal with CI.

“The information provided by conscious experience” section outlines the essential features of CI, that is, the features that CI must necessarily possess to be defined as such, and describes what it implies for a person to process this kind of information. These are the essential features of CI that I have identified: (1) the content of CI coincides with its form, that is, the message delivered by CI is the phenomenal quality of conscious experiences; (2) CI is individuated: only the person who experiences it can know it; (3) CI always presupposes the existence of some, albeit elementary and minimal, form of self.

These features allow one to distinguish CI from the information provided by other kinds of mental states. If one of these features is altered but still present, CI is altered but not suppressed: this happens for example with anosognosia, when the autobiographical self is impaired but not the core self (Damasio 1999). If one of these features is lacking, there is no CI: this is the case of unconscious information, the content of which is not delivered via phenomenal qualities.

I have mainly drawn on Husserl’s (2002) phenomenological work for the identification of the essential features of CI, because it offers a privileged vantage point from which to capture them. Even though there might be no agreement about how many and what such features are, there is no doubt that their identification is a prerequisite for the subsequent identification of the cognitive systems and processes responsible for the production of CI. One must first know what a phenomenon looks like before one can find the organs and processes that produce it. The phenomenological analysis, however, is not sufficient for the identification of these organs and processes, which can only be accomplished by means of empirical disciplines such as neuropsychology and neurobiology.

In “The basis for the production of CI” section, I show that the self is one of the basic cognitive systems underlying conscious experience. The self, which develops on the individual’s biological, naturally selected, and culturally acquired values, is the machinery that helps maintain and expand the well-being of the individual in its entirety. It is primarily expressed via the central and peripheral nervous systems, which map the individual’s body, his environment, and his interactions with the environment. The self is the principal means by which the complexity inherent to the composite structure of an organism is reduced into the “single voice” of a unique individual. The variations of the state of the self are the raw material used for the construction of CI.

In order to be adequately dealt with, the multiplicity of the variations of the state of the self must be processed by a mechanism that selects and emphasizes those variations that are most relevant in the given situation, and excludes the non-relevant ones from being further processed. This mechanism is attention. After having argued in “Attention” section that attention operates in a periodic manner, and that the periodicity of attention is the product of brain oscillations, the section “The form of conscious experience is determined by the activity that attention performs to detect the variations of the self” describes how attention determines the main features of conscious experiences: periodicity, the egocentric spatial organization of conscious experience, and phenomenal quality.

While attention ensures the selection and shaping of basic pieces of information of conscious experience, another mechanism is needed to combine and assemble them: working memory (WM). “Complex forms of conscious experience” section describes how WM enables more complex forms of consciousness, such as the stream of consciousness, and the various modes of givenness of conscious experience.

Before I begin my article proper, let me offer some considerations on the problem of the function of consciousness. In this article I maintain that consciousness plays a role in an individual’s behavior: it informs the individual about what is going on and about how what is going on relates to him, it helps the individual build a knowledge of himself as a subject and of the environment he lives in, and so on. Contrary to what I maintain, there are scholars who do not think that consciousness makes a difference to the individual’s behavior. For example, Huxley (1874) maintains that consciousness is just an epiphenomenon, Rosenthal (2008, p. 839) argues that “the consciousness of thoughts, desires, and volitions adds little if any benefit for rational thinking, intentional action, executive function, or complex reasoning,” and Dennett claims that consciousness does not play any causal role because it is a fantasy, a story told by a pandemonium of multiple, competing unconscious specialists, which makes the system illusorily believe that it has thoughts, intentions, beliefs, a self, conscious contents, etc. (Dennett 1991).

One can certainly be led to accept these ideas if one just considers the bulk of empirical evidence and daily observations showing that human beings process much information unconsciously. We have all experienced the fact that we are able to find a solution to a difficult problem only after having literally “slept on it”; the best way for us to retrieve a word from memory when experiencing a tip-of-the-tongue state is to stop consciously searching for it and let our unconscious do the work; many forms of behavior can be initiated without conscious decision (Bargh 1990, 1997, 2006; Bargh and Chartrand 1999); perception can occur unconsciously (Debner and Jacoby 1994; Merikle et al. 2001); unconsciously perceived information remains in memory for a considerable period of time (Merikle and Daneman 1996)Footnote 3; phenomena such as ventriloquism, binocular rivalry and the McGurk effect reveal that diverse kinds of information, including data from different modalities, can be integrated by unconscious processes and sensory conflicts resolved unconsciously (Morsella 2005); complex and rational decision processes can occur unconsciously (Dijksterhuis et al. 2006; Dijksterhuis and Nordgren 2006; Zhong et al. 2008); a freely voluntary act is not initiated by the subject’s conscious free will, but by his brain’s unconscious processes (Haggard 1999; Haggard and Eimer 1999; Haggard et al. 1999; Libet 2004).

However, there is also a great deal of empirical evidence showing that consciousness does play a role (Cheesman and Merikle 1986; Clark and Squire 1998; Fu et al. 2008; Groeger 1984, 1988; Knight et al. 2006; Kunst-Wilson and Zajonc 1980; Marcel 1980; Merikle and Cheesman 1987; Merikle and Joordens 1997; Mudrik et al. 2014; Sackur and Dehaene 2009). For example, in Pavlovian conditioning studies, Clark and Squire (1998) show that trace conditioning requires an awareness of the conditioned stimulus–unconditioned stimulus relationship for conditional response acquisition, whereas awareness does not appear to be necessary for simple delay conditioning. According to Clark and Squire, the greater complexity involved in trace conditioning, compared with a simpler form of conditioning such as delay conditioning, would require consciousness to represent and remember the temporal conditioned stimulus–unconditioned stimulus relationship.

Moreover, even the interpretation of the empirical data supporting the view that conscious processes play little or no role in human behavior compared to unconscious ones is not as straightforward as it might initially seem. For example, Rey et al.’s (2009) experiment clarifies the claim made by Dijksterhuis et al. (2006) about the supremacy of unconscious over conscious thought at solving complex decisions. By using an experimental design similar to the one used by Dijksterhuis et al. (2006) but with an additional control condition (the “immediate condition”) in which subjects made their choice immediately, without any period of thought (conscious or unconscious), Rey et al. (2009) showed that decisions made by subjects in the immediate condition were just as good as those made in the unconscious condition, hence challenging Dijksterhuis et al.’s (2006) interpretation. The same finding was replicated by Waroquier et al. (2003). Additionally, they found that while too much conscious deliberation can actually degrade high-quality first impressions, conscious thought enhances the quality of decisions in the absence of such prior first impressions.Footnote 4

The view that consciousness has no effects on behavior is not only contradicted by contrary empirical evidence, but also conflicts with the consideration that we have evolved with pleasant feelings toward what is good for us and unpleasant feelings toward what harms us. Why are pleasant states not associated with avoidant behaviors, or unpleasant states with approach behaviors? Why does tissue damage not happen to feel good, or drinking when thirsty not happen to feel bad? Is this merely a coincidence, or is it rather the sign of a specific correlation? Likewise, the view that consciousness has no effects would also have to explain why conscious experience must somehow correlate with reality, that is, why it cannot be completely fantastical (Earl 2014; James 1890/1983; Morsella 2005).

A final remark on Dennett’s (1991) hypothesis that consciousness is a story told by a pandemonium of multiple, mindless, unconscious agencies, which are able to “interpret” the behavior of a system and make the system illusorily believe that it has thoughts, intentions, beliefs, a self, conscious contents, etc. In my view, Dennett’s hypothesis is subject to some major criticisms. How can mindless agencies, which are designed to dumbly operate on a very strict deterministic principle, “interpret” behavior as highly complex, unpredictable and ever-changing as human behavior? Human beings continuously create and invent new strategies, ideas, goals, etc., which can only be properly understood by whoever is able to continuously evolve and progress as human beings do. By definition, dumb agencies that were designed to respond in a certain way to given signals cannot adequately interpret new signals, unless they are allowed to evolve and transform themselves into something that, like human beings, is able to adopt new goals and strategies.

Moreover, how can my sub-personal, mindless agencies’ interpretation resemble my own interpretation? The capacity to interpret behaviors, objects and events in the way that I do requires assuming my specific observation level (Negrotti 1997). Interpreting, understanding and evaluating are activities that are always done by someone having certain experiences, competencies, and values. Something is important, dangerous or insignificant for me as a person-in-my-wholeness, that is, a being having a specific body, history, values, etc. Another person, an animal or a mindless agency would interpret the same object or event in a completely different way, according to their own observation level. The way that I interpret things is specific to me as a (unique) person-as-a-whole, and cannot be reduced to any part of me.

Finally, I do not see how the Dennettian mindless agencies can account for the facts that our interpretation of things can change over time (what is significant for me now can become insignificant later), and that we can interpret the same thing in various ways (a person’s action can be interpreted in itself, or as a part of a more articulated sequence, or as a means to an end, etc.). Dumb “switches” designed to respond in a given way to a given signal will always respond to the same signal in the same way.

Which theory of consciousness?

Some of the first theories of consciousness put forward by cognitive psychologists were based on the mind-as-a-computer metaphor. According to these theories, the mind is a computer that processes information coming from external or internal sources in order for the system to provide appropriate behavioral responses. The information flows from one module of the mind to the next, until it reaches the last module of the chain: consciousness. Following the computer metaphor of the mind, this last module has been variously termed the “operating system” (Johnson-Laird 1988), the “central processor” (Umiltà 1988), or the “supervisory system” (Shallice 1988).
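To make the pipeline picture concrete, the sketch below renders the metaphor as literally as possible: a fixed chain of modules passing symbolic representations along until a final “supervisory” module selects a response. It is a deliberately naive illustration of the metaphor discussed here, not a reconstruction of any of the cited models; the module names, salience values and selection rule are invented for the example.

```python
# A deliberately naive rendering of the mind-as-a-computer metaphor: information
# flows through a fixed chain of modules until it reaches a final "supervisory"
# module that selects a response. Module names, salience values and the
# selection rule are invented for illustration only.

def sensory_registration(stimulus, salience):
    """Encode a raw stimulus as a symbolic representation."""
    return {"symbol": stimulus, "salience": salience}

def attention_filter(representation, threshold=0.5):
    """Pass on only sufficiently salient representations."""
    return representation if representation["salience"] >= threshold else None

def working_memory(representation, store):
    """Keep surviving representations available for later modules."""
    if representation is not None:
        store.append(representation)
    return store

def supervisory_system(store):
    """The last module of the chain: chooses a response from what reached it."""
    if not store:
        return "no response"
    most_salient = max(store, key=lambda r: r["salience"])
    return f"respond to {most_salient['symbol']}"

if __name__ == "__main__":
    store = []
    for stimulus, salience in [("faint hum", 0.2), ("red light", 0.7), ("loud noise", 0.9)]:
        store = working_memory(attention_filter(sensory_registration(stimulus, salience)), store)
    print(supervisory_system(store))   # -> "respond to loud noise"
```

The point of the sketch is only that, in this picture, each module operates on ready-made symbolic inputs handed to it by the previous module; this is precisely the feature of the metaphor criticized in what follows.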

The mind-as-a-computer approach has certainly yielded many positive results in psychological research on the mind. For example, it can tell us how long it takes for information to become conscious (Cleeremans and Sarrazin 2007; Libet 2004), what different levels of processing are involved in conscious versus unconscious processes (Dehaene 2009; Kouider and Dehaene 2007), and what different consequences processing information consciously versus unconsciously has on memory, learning, and so on. However, it is not the most appropriate approach for studying conscious experience, because it can neither provide an adequate level of analysis of the phenomenal aspect of consciousness nor account for how conscious experience allows a person to develop a sense of being an independent individual.

The mind-as-a-computer approach cannot provide an adequate level of analysis of the phenomenal aspect of consciousness simply because the latter is outside the scope of its investigation. The mind-as-a-computer approach analyzes the processes involved in transforming and elaborating information, the time needed to process information, how information is transformed, transmitted and disseminated, and so on, but not why these processes give rise to phenomenal experience.

The mind-as-a-computer approach cannot account for how a person develops the sense of being an independent individual because its main concern is to analyze the piece of information processed by the person, or how this piece of information is transformed, rather than what it implies for a person to consciously experience the information he is processing.Footnote 5

Adopting Negrotti’s terminology (1997, 1999), we can say that the observation level of the mind-as-a-computer approach is that of the information processed, or that of the processes involved in processing information, not that of the person processing the information. As such, the mind-as-a-computer approach cannot account for how a person develops and transforms through consciously processing information, but only for how some parts of a person’s organism—sense organs, attention, memory, central processor, and so on—process information. As various researchers (Cisek 1999; Edelman 1989; Freeman 1999; Searle 1980, 1984, 1992) have highlighted, most of the problems raised by the mind-as-a-computer approach are due to the fact that this approach considers information as made up of ready-made symbols representing the external world, whose meanings derive not so much from the personal history of the person, the importance they have for the person, or his relations with other entities, as from the researcher’s research goals.

Therefore, whoever wants to study consciousness has to change perspective and no longer consider information, as well as the person processing it, as ready-made entities. On the contrary, one needs to investigate how a person develops, changes and transforms by processing information, why and how something becomes information for a person, and how something acquires a meaning for a person.

An important contribution in this perspective was offered by Baars (1988). Baars uses the word information in a psychologically adapted version of the conventional, Shannonian sense of a reduction of uncertainty in a set of choices defined within a stable context. Consciousness is informative in the conventional sense because it reduces the uncertainty introduced by novel and unpredictable stimuli. The uncertainty is reduced by means of adaptive and learning processes, which consciousness triggers and facilitates by globally broadcasting—via a Global Workspace—the stimulus message to the whole system of (unconscious) specialized processors. The role of consciousness in reducing uncertainty is evidenced by the phenomenon of redundancy effects: after a new event has been adapted to and learned, whether via repeated processing or practice, the event fades from consciousness. That is, consciousness completes its function once it has reduced the uncertainty introduced by a new event. Baars also observes that this tendency to reduce conscious access by adaptation is counterbalanced in human beings by an opposing natural tendency to increase conscious access by actively searching for informative stimulation.
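The broadcasting and redundancy ideas can be illustrated with a minimal computational sketch. The toy below is only a loose reading of the Global Workspace picture, not Baars’ model: the class names and the habituation rule (an event stops being broadcast after a fixed number of exposures) are assumptions introduced purely for illustration.

```python
# Toy sketch loosely inspired by the Global Workspace picture: a novel event is
# broadcast to all unconscious specialized processors; after repeated exposure
# it habituates and is no longer broadcast (the "redundancy effect").
# The habituation rule and names are illustrative assumptions, not Baars' model.

class Processor:
    """An unconscious specialized processor that receives global broadcasts."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, message):
        self.received.append(message)

class GlobalWorkspace:
    def __init__(self, processors, habituation_threshold=3):
        self.processors = processors
        self.exposure_counts = {}
        self.habituation_threshold = habituation_threshold

    def present(self, event):
        """Broadcast an event only while it is still novel (informative)."""
        count = self.exposure_counts.get(event, 0)
        self.exposure_counts[event] = count + 1
        if count < self.habituation_threshold:
            for p in self.processors:
                p.receive(event)
            return f"'{event}' broadcast globally (conscious)"
        return f"'{event}' habituated, no longer broadcast (fades from consciousness)"

if __name__ == "__main__":
    gw = GlobalWorkspace([Processor("memory"), Processor("motor"), Processor("language")])
    for _ in range(5):
        print(gw.present("new sound"))
```

Running the sketch prints three “broadcast” lines followed by two “habituated” lines, mirroring the redundancy effect described above: once the event has been adapted to, it no longer reaches the workspace.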

Baars’ usage of the term information differs slightly from the conventional one in that it is adapted to the psychological contexts, which are more complex and less stable than the one-dimensional message context of classical information theory. Baars defines a psychological context as a system (or groups of specialized processors) that constrains conscious content without itself being conscious. Psychological contexts are not totally stable, but rather continuously change by adapting to informative input whenever possible. Adaptation to conscious input implies the creation of new (unconscious) contexts, which then shape and constrain later conscious experiences: “In a sense, context consists of those things to which the nervous system has already adapted; it is the ground against which new information is defined” (Baars 1988, p. 197). Every event can then be said to be experienced with respect to prior conscious events. In this view, adaptation and learning are processes that develop unconscious contexts that cause us to experience the same event in new and different ways. Therefore, conscious experience continuously modifies the person and the way he processes information, in the sense that the person can never experience the same object twice in the same way.

Baars’ model is certainly highly valuable in explaining how the person develops and changes by processing information, as well as in explaining a number of cognitive processes such as the person’s access to information, voluntary control, and reportability. However, Baars’ model has the major drawback of not directly addressing the question of why conscious experience has the peculiar phenomenal aspect and quality it has: if conscious experience did not have the phenomenal aspect it has, would it still be able to perform its function (that is, to reduce uncertainty)? As Chalmers (1996) observes, the best that Baars’ theory can do is to state that the information processed within the Global Workspace is experienced because it is globally accessible. But the question of why global accessibility should give rise to conscious experience remains unanswered. Because it does not directly address the problem of the phenomenal aspect of consciousness, but rather addresses derivative characteristics of conscious states (such as being “largely widespread and broadcast”), Baars’ model can explain the latter, but not the former.

In this respect, the Integrated Information Theory of consciousness (IIT) put forward by Tononi (2008, 2012) and Tononi and Koch (2015) certainly has the advantage, over various other theories of consciousness, of directly tackling the phenomenological aspects of consciousness. IIT starts by phenomenologically identifying the essential properties of consciousness (or “axioms”), then derives a set of postulates that specify the requirements that must be satisfied by any physical system to account for such properties, and finally develops a detailed mathematical framework in which the properties are defined precisely and made operational. IIT identifies five main essential properties of consciousness (Oizumi et al. 2014; Tononi and Koch 2015): (1) intrinsic existence: as Descartes realized, we are absolutely sure that each conscious experience exists and is real; (2) composition: consciousness is structured, composed of many phenomenological distinctions (within the same experience, we may distinguish various features and objects); (3) information: each conscious experience is informative in that it differs from other possible experiences (an experience of darkness is what it is because, among other things, it is not filled with light, there are no objects, etc.); (4) integration: consciousness is unified, each conscious experience is irreducible to non-interdependent subsets of phenomenal distinctions (we experience a whole visual scene, not the left side of the visual field independent of the right side); (5) exclusion: each conscious experience excludes all others, at any given time there is only one experience having its full content.

IIT’s postulates parallel the phenomenological properties of consciousness and help directly link consciousness to information. The postulates are used to define (among other things) information, in Batesonian terms, as a difference that makes a difference to a system (from its intrinsic perspective, not relative to an external observer), and integrated information (Φ) as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. IIT’s main claim is that consciousness is integrated information. More specifically: “(1) the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements; (2) the quality of experience is specified by the set of informational relationships generated within that complex” (Tononi 2008, p. 216).
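The contrast between information generated by a whole and information generated by its parts can be illustrated with a toy calculation in the spirit of IIT’s simple examples. The sketch below compares a two-node system whose elements copy each other’s previous state with one whose elements evolve independently, using a crude “whole minus sum of parts” measure over a uniform prior; it is not IIT’s actual Φ algorithm, which requires (among other refinements) a search over the minimum information partition.

```python
# Toy "whole minus sum of parts" information measure for two binary nodes.
# Illustrative only: not the actual Phi algorithm of IIT.
from itertools import product
from math import log2
from collections import defaultdict

def mutual_information(pairs):
    """I(X;Y) in bits, computed from a list of equiprobable (x, y) pairs."""
    n = len(pairs)
    px, py, pxy = defaultdict(float), defaultdict(float), defaultdict(float)
    for x, y in pairs:
        px[x] += 1 / n
        py[y] += 1 / n
        pxy[(x, y)] += 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

def analyse(update, label):
    past_states = list(product([0, 1], repeat=2))        # uniform prior over the past
    whole = [(past, update(past)) for past in past_states]
    i_whole = mutual_information(whole)
    # Each "part" sees only its own past and present value.
    i_parts = sum(
        mutual_information([(past[i], update(past)[i]) for past in past_states])
        for i in range(2)
    )
    print(f"{label}: whole = {i_whole:.1f} bits, "
          f"sum of parts = {i_parts:.1f} bits, surplus = {i_whole - i_parts:.1f} bits")

# Coupled system: each node copies the *other* node's previous state.
analyse(lambda s: (s[1], s[0]), "coupled (elements copy each other)")
# Independent system: each node copies *its own* previous state.
analyse(lambda s: (s[0], s[1]), "independent (elements copy themselves)")
```

Running the sketch gives a surplus of 2 bits for the coupled pair and 0 bits for the independent pair, mirroring the intuition that only the former generates information “above and beyond” its parts.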

Despite its advantages, IIT has raised several concerns (Cerullo 2015; Jonkisz 2015; Searle 2013). One of the main difficulties with IIT derives from its identification of consciousness with integrated information. In fact, taken to extremes, this claim holds that any system that has integrated states of information is conscious. This leads to some counterintuitive consequences, such as the attribution of consciousness to simple artifacts, such as photodiodes: “Strictly speaking, then, the IIT implies that even a binary photodiode is not completely unconscious, but rather enjoys exactly 1 bit of consciousness” (Tononi 2008, p. 236).

As Mudrik et al. (2014) observe, in some cases this claim of IIT is untenable. Mudrik et al. identify four integrative processes that occur without consciousness: short-range spatiotemporal integration, low-level semantic integration, single sensory (vs. multisensory) integration, and previously learned (vs. new) integration. This shows that not all informational states are conscious, and that information integration, even if it turns out to be necessary for consciousness, is not sufficient for it. Additional conditions must therefore be invoked in order to account for consciousness.

Jonkisz (2015) identifies these conditions by assuming a biological perspective. This perspective allows Jonkisz to better circumscribe the concept of information within the context of consciousness studies. In fact, the biological perspective makes one realize that, when dealing with consciousness, the only kind of information that matters is the kind that is functionally relevant for a biological system. And this kind of information is formed:

in the unique and particular interactions occurring between the system (e.g., its sensory organs located on its body and in the relevant area in the brain) and its environment (e.g., photons, sound waves, chemicals) or just between parts of the system itself, taken in isolations (e.g., its memory subsystems, insofar as these are able to induce states internally) (Jonkisz 2015, p. 9).

As Jonkisz (2015) observes, this kind of information is evolutionarily embedded, socially altered, and subjectively grounded. In fact, biological systems evolve in interaction with their environment and with other creatures. Consequently, their bodies are attuned to the specific informational resources available in the ecological niche of their ancestors. Furthermore, the way in which information is interpreted by biological systems may also be socially altered. Finally, the information processed by biological systems is always individually grounded or private, in the sense of being determined by individual-related factors. In conclusion, the kind of information functionally relevant for a given biological system is always “individuated,” with respect to just the system. In addition to this condition, Jonkisz specifies that information must be pragmatically useful, that is, it must help coordinate the system’s action at a given moment in time. According to Jonkisz, overall these conditions place empirically justified constraints on IIT, and help avoid the problematic and counterintuitive consequences of a radical extension of consciousness to processes and systems hitherto considered non-conscious or unconscious.

IIT has also raised some other concerns related to its choice to limit the understanding of consciousness to its phenomenal essential properties. IIT considers the phenomenal essential properties of consciousness, so to say, in themselves, without any connection to the possible cognitive functions they may have for the system (such as planning and initiation of behavior). This choice has led some critics to define IIT as a theory of “protoconsciousness” or “non-cognitive consciousness,” rather than a theory of “cognitive-consciousness” (Cerullo 2015). According to this criticism, IIT tackles a kind of consciousness that is very general and differs from the one tackled by cognitive neuroscience, psychology and neurology. While the latter kind is supposed to have evolved in association with the other cognitive functions of the system (such as memory and the executive function) in order to assist the system in controlling and guiding its own behavior, the former kind does not necessarily imply a functional role in guiding the system’s behavior, and lacks the cognitive properties usually associated with such a functional role. Indeed, IIT does not intend to explain why, to what purpose, a system or complex of elements should generate integrated information (and ultimately consciousness); rather, IIT intends to explain how the generation of integrated information by a system leads to consciousness. This renders IIT useless in accounting for the possible functions of consciousness, as well as for other cognitive functions of the system (memory, attention, etc.) associated with consciousness. In turn, this makes it difficult for IIT to account for the existence of the specialized brain circuits underpinning such cognitive functions, and for important functional dissociations between brain regions, such as the different role played by frontal and parietal cortex in guiding attention (Buschman and Kastner 2015).Footnote 6 It is true that IIT acknowledges that “integrated information requires networks that conjoin functional specialization (…) with functional integration” (Tononi 2010, p. 312), but IIT does not explain why precisely those specialized circuits, and not others, exist.

Adopting a biological perspective helps solve all these difficulties. This is because, from a biological perspective, consciousness (and any other cognitive function, as well as the underpinning neurobiological processes) did not originate for nothing and from nothing: rather, it evolved from processes and brain circuits that already existed, and it was selected for its capability to support the system in controlling its behavior. As William James (1890/1983, p. 147) observed, consciousness evolved to function as a selective mechanism able to control a nervous system that had grown “too complex to regulate itself.” In fact, consciousness provides a means to represent (in the sense of standing for) the whole system in a condensed (albeit partial) and unified way: that is, it supplies the system with the sense of being a unique, single entity, which evolutionarily culminates in the appearance of self-consciousness. This allows the system to “observe” itself and the world surrounding it from the vantage point of a single entity (as opposed to a set of separated, disconnected parts), which in turn implies the possibility for the system to control itself from that single perspective.

It could be argued that these biological and evolutionary considerations are not decisive in explaining why consciousness is what it is, and why it has the function it has. After all, a different evolutionary history could have led to a different form of consciousness. However, the form of consciousness we are dealing with in this article, that is, human consciousness, undeniably has some relevant evolutionary advantages (for example, in terms of levels of autonomy) over other forms of consciousness, such as non-human animal consciousness. As such, biological and evolutionary considerations do help to explain consciousness and its functions.

In this view, various suggestive proposals about the possible biological and evolutionary foundations of consciousness have been put forward. For example, according to Damasio (2010), the conscious mind emerges within the history of life regulation (homeostasis), which begins in unicellular living creatures, progresses in individuals whose behavior is managed by simple brains, and continues its march in individuals whose brains generate both behavior and mind. From there on, an organized self process could be added to the mind, thereby providing the beginning of elaborate conscious minds (Damasio 2010, pp. 25–26).

The core principle of life regulation can be traced back to single cells, which “possess a decisive, unshakable determination to stay alive for as long as the genes inside their microscopic nucleus commanded them to do so” (Damasio 2010, p. 35). This brainless, mindless will to stay alive was transferred from single cells to the conscious mind by means of the working of nervous cells or neurons, which functionally differ from other kinds of cells in that they produce electrochemical signals capable of changing the state of those other cells. At the same time, the working of neurons transformed the “collective voice” of “the aggregate of the inchoative wills of all the cells of the body” into the “single voice” of a unique, single entity. Among other things, this transformation allowed for the move from simple life regulation, focused on the survival of the organism, to progressively more deliberate regulation, which offered the system the possibility of maintaining and expanding well-being (as opposed to merely seeking survival) in virtually any conceivable environment.

Which theory of information?

Not all theories of information can adequately deal with the information provided by conscious experience (CI). The distinctive characteristic of CI renders many theories of information unsuitable for this task. For example, a classical theory of information such as Shannon and Weaver’s (1949) cannot adequately account for CI. In fact, Shannon and Weaver’s mathematical theory of information (MTI) does not deal with the meaning of messages (semantics) but—using Mingers and Standing’s (2014) semiotic classification—only with their possible structure (syntax) and transmission (see also Floridi 2015). Shannon and Weaver’s MTI is syntactical because it is concerned with the rules governing symbols, not with the meaning of the symbols. As Mingers and Standing (2014, p. 6) observe: “MTI is syntactical because it is based on the number of possible messages or codes within a system, but says nothing about the actual meaning or content of the message. It is like measuring the size of a container but knowing nothing of what is inside.”
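The “container without contents” point can be made concrete with a small calculation: under Shannon’s measure, two character strings with identical symbol statistics carry exactly the same amount of information, whatever they may or may not mean to a reader. The snippet below is a standard per-symbol entropy calculation, used here only to illustrate the purely syntactical character of MTI.

```python
# Shannon entropy measures the "size of the container": it depends only on the
# statistics of the symbols, not on what (if anything) the message means.
from collections import Counter
from math import log2

def entropy_per_symbol(message):
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat"
scrambled = "eht tac tas no eht tam"   # same symbols, each word reversed

print(entropy_per_symbol(meaningful))  # identical values: MTI cannot
print(entropy_per_symbol(scrambled))   # tell the two messages apart
```

Both calls print the same entropy value, even though only the first string is meaningful to an English reader.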

CI cannot be satisfactorily dealt with even by theories of information that maintain that information is fundamentally objective, existing independently of the person who produces and interprets it, such as those of Dretske (1981), Floridi (2005, 2015) and Mingers and Standing (2014). In fact, the existence of CI requires the existence of a person who produces it: using Searle’s (2013) words, we could say that information only exists relative to a conscious observer.Footnote 7

This does not mean that the theories of information that cannot account for CI are wrong or of no value in general, but only that they cannot be used to do more than they were designed to do.

In my view, one of the most suitable theories of information to deal with CI is Hofkirchner’s (2013, 2014), because it shows how (self-organizing) systems produce information. According to Hofkirchner’s unified theory of information (UTI), human beings are self-organizing systems, that is, systems that are able to configure their own internal structure in response to perturbations from the environment in a manner that is anti-entropic. Information emerges from the activity that the system performs when relating to an object. More precisely, information is produced when “self-organizing systems relate to some external perturbation by the spontaneous buildup of order they execute when exposed to this perturbation” (Hofkirchner 2013, p. 9). Hofkirchner equates this with Bateson’s (1972) famous definition of information as a “difference which makes a difference.” Bateson’s “making a difference” is the buildup of the system’s self-organized order; Bateson’s “difference that makes a difference” is a perturbation in the inner or outer environment of the system that triggers the buildup; Bateson’s “difference that is made” is made to the system because the perturbation serves a function for the system’s self-organization. Importantly, Hofkirchner (2011) highlights that one can speak of information production only when novelty emerges. In this sense, self-organizing systems produce information because they transform the input into an output in a non-deterministic and non-mechanical way. On the contrary, computers (including probabilistic machines) and other systems that compute and work according to strict deterministic rules, which by definition do not yield novelties, cannot produce information.

It should be noted that according to Hofkirchner, self-organization is a necessary but not sufficient condition for consciousness. There can be self-organizing systems, such as biotic and social ones, that have no consciousness. In his view, what makes a self-organizing system conscious are the social relations it has with the other self-organizing systems of the social structure in which it lives.Footnote 8 I think that this is not so much a necessary condition as a consequence of a more basic requirement, that is, the capacity of a system to use the produced information to build up a “virtual” order. Contrary to the kind of order that is built up by a non-conscious self-organized system, the “virtual” order that is built up by a conscious self-organized system (1) can be of various kinds (such as time, space, series, schemas), (2) can be used for functions and in experiential dimensions different from the ones that originally occasioned the difference to the system, and (3) does not need to be permanently active. The information produced by a non-conscious self-organizing system such as a social one allows the system to build up only order of a social kind, which can be used by the system only for social purposes, and which is permanently active. On the contrary, the information produced by a conscious self-organizing system such as a human being allows him to build up various kinds of orders (for example, a theory rather than a sequence of physical movements), which he can use for various purposes (the theory can be used in various fields), and which he can retrieve and use only when needed (he thinks about and uses the theory only when he needs it: when he does not need it, he can think about other things).

A final consideration concerns the relationship between information and meaning as it is defined by many theories of information. When dealing with CI, this relationship must be revisited. Theories of information usually distinguish between information and meaning, because information is not the same as meaning (Mingers 1996; Mingers and Standing 2014). For example, Dretske (1981) argues that information, which is objective and analogue, produces meaning through a process of digitalization. Information is objective because it is independent of the receiver: it is transmitted whether or not it is received or understood by anyone. Information is analogue because it consists of continuous rather than discrete events (such as heat and light). When information is received by a receiver, the amount of information available to the receiver depends on the receiver’s prior state of knowledge: at every stage, the receiver’s knowledge determines what particular aspects of the available analogue information are digitalized. Digitalization implies that only a limited amount of the available analogue information is converted into, and carried by, meaning (a linguistic description of a picture carries only some of the information in digital form: much information in the picture is not conveyed). Consequently, a message may carry information but have no meaning for a particular person who does not understand the language because he is unable to digitalize it. Conversely, a message may have meaning but carry no information if it is not true. Finally, while information must always be true, the meaning or belief generated from information may be false.
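Dretske’s analogue-to-digital picture can be illustrated with a toy quantization: many distinct analogue values collapse onto a single discrete category, so the digitalized description carries only part of the analogue information. The thresholds and category labels below are invented for the example and are not drawn from Dretske.

```python
# Toy illustration of Dretske-style "digitalization": a continuous (analogue)
# reading is collapsed into a discrete category, so most of the analogue
# information is not carried by the resulting meaning. Thresholds are arbitrary.

def digitalize(temperature_c):
    if temperature_c < 10:
        return "cold"
    if temperature_c < 25:
        return "mild"
    return "hot"

readings = [9.8, 17.2, 23.7, 24.9, 31.4]
for r in readings:
    print(f"analogue value {r:>5} degC  ->  digital meaning: {digitalize(r)}")
# 17.2 and 24.9 both become "mild": the difference between them is information
# present in the analogue signal but lost in the digitalized description.
```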

The peculiarity of CI renders the distinction between information and meaning analytically useless in the field of consciousness studies (which, however, does not lessen the validity of the distinction in other fields of study). Contrary to Dretske’s arguments, CI is always meaningful for the person, in the sense that the person understands it and knows what it means for him. It may happen that a given conscious experience lacks sufficient clarity for the person: however, this is precisely (part of) the meaning that the conscious experience has for the person, that is, being a conscious experience lacking sufficient clarity.Footnote 9 It may also happen that the person does not fully understand all the implications of a given conscious experience: however, the implications exceed and are consequent upon the first experienced meaning of the given conscious experience. Moreover, contrary to Dretske’s observation that a message may have meaning but carry no information if it is not true, it should be noted that CI always communicates information, even when it is not true. What renders dreams and hallucinations special conscious experiences is precisely this: they (partly) communicate false information. This is also the first consciously experienced meaning they have for the person. Finally, every CI, whether it is true or false, may be misinterpreted and generate false beliefs. However, the interpretation of a given CI, and the beliefs it can generate, are always derivative of the first consciously experienced meaning.

Another scholar who distinguishes between information and meaning is Luhmann (1995). Luhmann argues that information is subjective and meaning objective, or at least intersubjective. Information is the surprisal value of a meaning complex for the receiver’s structure of expectation. Consequently, the same message may generate different information for different people, and a repeated message is still meaningful but not informative. In my view, one should distinguish between the primary, public, intersubjective meaning of a sign or linguistic message, that is, the conventional meaning which any competent speaker of the language should understand, and all the other private, individual, subjective meanings that can be derived from the primary meaning (Mingers 1995, 1996). Luhmann’s observation that the same message may generate different information for different people certainly holds when referring to the information that people can derive from the primary meaning. However, it cannot hold for the primary meaning itself. In fact, in order to understand the message, a person must first consciously experience the primary meaning of the message, that is, gain the information the primary meaning carries. Only subsequently can the person gain additional, derivative information. Finally, it should be noted that, contrary to what Luhmann’s theory implies, a repeated message loses not only its informative capacity but also its consciously experienced meaning: as the semantic satiation effect shows, when a word or phrase is repeated over and over again, it soon loses its meaning (Baars 1988).
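Luhmann’s notion of information as surprisal can be given a simple numerical reading: the less a message is expected, given the receiver’s structure of expectation, the more information it carries; a fully expected (for example, repeated) message carries almost none, even though its conventional meaning is unchanged. The probabilities below are invented purely for illustration.

```python
# Surprisal of a message given the receiver's expectations: -log2(p).
# A repeated, fully expected message has probability close to 1 and thus
# carries almost no information, although its conventional meaning is unchanged.
from math import log2

def surprisal_bits(probability):
    return -log2(probability)

expectations = {
    "the train is on time": 0.95,    # what the receiver already expects
    "the train is cancelled": 0.05,  # unexpected, hence informative
}
for message, p in expectations.items():
    print(f"{message!r}: {surprisal_bits(p):.2f} bits")
```

With these invented probabilities, the expected message carries about 0.07 bits and the unexpected one about 4.32 bits, although both are equally meaningful to a competent speaker.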

Consequently, in this article I will use the terms information and meaning quite interchangeably.

The information provided by conscious experience (CI)

Conscious experience provides several pieces of information to the person who is having it. Firstly, it tells the person that his experience has a content, and what the content is: when we feel pain, we realize that something is going wrong in our body, that our tissues are damaged, and so on. Secondly, it tells the person that the experience he is having differs from other kinds of experiences that he has already had or could potentially have: when we are remembering the excellent food we ate last night, we know that we are not actually eating it now. Thirdly, it tells the person that the experience he is having is his own, that is, it belongs to him and to no one else: when we feel thirsty, we know that our thirst will be quenched not when, say, our friend drinks but only when we drink. In short, conscious experience informs the person about the objectFootnote 10 of his experience, about how he is differently affected by perceiving, remembering or imagining the same object, and, more generally, about what the relationship is between himself and the object of his experience.

From the point of view of the content, at least part of CI does not seem to differ much (if it were not for its complexity) from the information that other information systems, such as computers, can theoretically provide. However, there is an important aspect in which CI differs from the information provided by other systems. While the latter can have various forms, in the sense that the same content can be encoded in various codes or languages, the former can only have one form, precisely the form we experience in and through consciousness. That is, the content of CI must be encoded in the form (or code) of conscious experience. In fact, in order for us to know a given CI, we must consciously experience it: we know that we have pain, and we know what pain is, because we feel pain; we know that what is occurring to us now is only occurring inside us, and not outside in the world, because we are remembering it; and so on. More simply put, the message delivered by CI is the phenomenal quality of conscious experiences: the content of CI coincides with its form.

What does this feature of CI imply? Does it imply any advantage over the information provided by other information systems? Does it have any evolutionary advantage?

The fact that CI must be consciously experienced in order to be known has the main implication that only the person who experiences it can know it. I know what I experience, and you know what you experience, but I cannot directly know what you experience, and vice versa. I can only indirectly come to know or infer what you experience (and vice versa) by means of some communication system, such as language, which in turn itself works on the basis of what we consciously experience when using it (from the sounds of the words that are spoken, to their meaning and the meaning of sentences, and so on). This characteristic of conscious experience primarily responds to the needs of the person as an individual. Every one of us is an ontogenetically determined individual, who uses information on the basis of his actual physiological state, acquired behavioral habits, and subjective needs, goals and plans. Every one of us evolves in interaction with his environment and with other human beings and animals, and as such has a private and subjective history, which shapes his behavior in an individual way. Every one of us is also socially determined: despite belonging to the same species, human beings are grouped into different groups that speak different languages, practice different religions, have different political systems, and so on. Moreover, we are phylogenetically determined: we are the result of the evolution of our species, which differs from other animal species. We are sensitive only to the kind of information (acoustic frequencies, chemicals, wavelength of light) provided by the ecological niche in which our ancestors evolved. In short, every one of us is a unique individual who results from a complex combination of multiple (ontogenetic, phylogenetic, social, etc.) conditions.

As individuals, every one of us has his own way of extracting, storing and interpreting information, which is determined by his evolutionary antecedents, by his interactions with other creatures, and by his unique individual history and actual situation (Jonkisz 2015, p. 9). The fact that, in order to know the information provided by a given conscious state, the person must experience it is precisely a consequence of the fact that we are individuals. Conscious experience is the primary means we have to process information in a way that can meet our individual, personal needs. It allows all of us to process information, but in an individual way, that is, according to the personal, individual (evolutionarily, socially and subjectively defined) history of each of us. In this sense, CI is always individual or, to use Jonkisz’s (2015) expression, individuated, that is, functionally relevant for, and relative to, only the person who experiences it.

The fact that CI is individuated implies a potentially infinite variability in the conscious experiences that different individuals can have of the same object (artists are particularly skilled at showing how an object can be variously perceived by different people, see Gombrich 1960), and that the same individual can have of the same object at different times.

This variability of conscious experiences among different individuals and within the same individual shows that the information that a person consciously experiences about a certain object is not solely produced by the object, but is also produced by the person himself. In fact, if this information were produced only by the object, all of us would have the same conscious experience of the object.Footnote 11 This indicates that CI is never ready-made information, processed independently of the person. On the contrary, CI also depends on the person: it is the result of the interaction between the person and the object.

The person contributes variously to the construction of CI. At one extreme, his contribution can be considered indirect and involuntary. This is the case, for example, of the sensible qualities of objects. CI can only convey those sensible qualities that the person’s sense organs allow the person to perceive, such as the redness of a tomato. This kind of contribution is clearly indirect and involuntary, in the sense that it is determined by organs that were shaped by adaptation to the ecological niche in which human beings evolved. At the other extreme, the person’s contribution to the construction of CI is direct and voluntary, such as when the person imagines something that does not exist. Between these two extremes ranges a great variety of combinations, most of which are shaped and occasioned by the specific needs, goals and plans of the person. For example, hunger makes the person actively look for food, thus selecting and bringing to consciousness certain pieces of information rather than others.

The construction of CI by the person is facilitated by the transitoriness of conscious experience. Every thought, idea, or perception soon fades away and gives way to a new conscious experience. Among other things, this allows for creating ever-new conscious experiences of an object that differ from the previous ones, viewing the same object from different perspectives, and stopping an ongoing conscious experience and initiating a new one, thus constructing new CI.

By contributing (directly and indirectly) to the construction of his own CI, the person contributes to building and shaping his knowledge of objects. The person can know only what his conscious experience allows him to know. It is through and on the basis of his conscious experience that the person comes to know objects. The person’s knowledge of objects has the form and the content that his conscious experience delivers. Such knowledge originates from and develops thanks to the person’s continuous activity of exploring, and interacting with, objects. This activity leads the person to understand how objects relate to himself, to define them on the basis of his own needs and goals, and to recognize them for the uses he can make of them. An object becomes an object and acquires a form and meaning for the person only as long as, in some way, he can interact with it and relate it to himself (Cisek 1999). Consciousness allows the person to directly experience the various relations existing between himself and the object, and to have an immediate, personal understanding of the object.Footnote 12

Conversely, by consciously experiencing what relation exists between himself and the object, the person is able to become explicitly, reflectively self-aware, that is, to acquire and construct a knowledge of himself as a subject-distinct-from-the-object (in short, a subject). The knowledge of being a subject is not just given, but must be learnt and achieved: it emerges from the continuous process of differentiation between the person and the object (Cleeremans 2008; Rochat 2003). As Rochat observes, self-awareness is a dynamic process, emerging chronologically in development “like onions, layers after layers, in a cumulative consolidation” (Rochat 2003, p. 730).

According to Vosgerau and Newen (2007), the distinction between the subject and the object (which they call “self-world” distinction) requires some kind of division of input sensation into self-related and world-related information. This division is achieved through the development of a common coding (or a “table” grouping the different codes containing the same content as the basis of a common coding) that groups the systematic co-variation of a certain afference (e.g., the proprioceptive information of my limb’s movement) with certain efferences (the corresponding motor commands). Such systematic co-variations can be found for example between motor commands and tendon receptor responses. Tendon receptors fire only when the muscle is contracted; in contrast, in passive movement, when no contraction of the muscle is involved, the tendon receptors do not respond. Since muscle contraction always involves an efference of the system (action), there is a systematic co-variation between efferences and tendon receptor afferences. The efferences can then be grouped with their caused reafferences. Sensation can hence be divided into two classes: the class that is caused by the system itself and the class that is caused by the world. Another example of systematic co-variation that allows for self-world distinction is represented by self-touch: when touching ourselves, there is a systematic covariance between two different haptic afferences (the sensation derived from touching and the sensation derived from being touched) which does not occur when we touch an object.

Vosgerau and Newen observe that, in order for a system to develop such self-world distinctions, all it needs is to start to move somehow,Footnote 13 and to have the ability to detect and store systematic co-variations, thereby creating a table grouping the efferences with the appropriate reafferences. Vosgerau and Newen (2007, p. 30) explain this last ability as a “system-inherent feature of neuronal networks that simply register systematic contingencies.” However, I think that this explanation is insufficient: given the incredibly vast amount of signals and frequencies present in our brain, there is a high probability for any signal to be associated with any other signal. In order to detect and store co-variations and, consequently, develop a common coding, a specific principle governing the association process is needed. As proposed by Hommel et al. (2001), one such principle could be instantiated by a representational scheme that allows the system to autonomously and flexibly plan its goal-directed actions (which, in turn, allows the system to move from a very primitive form of behavior, in which the system only responds automatically to given stimulus conditions, to a more sophisticated one, in which it generates and plans its goal-directed actions).
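A rough sketch can illustrate the kind of bookkeeping described above: recording which afferent channels systematically co-vary with the system’s own efferences, storing these pairings in a table, and classifying channels as self-related or world-related accordingly. The data, the channel names and the co-variation criterion are invented for illustration and do not reproduce Vosgerau and Newen’s or Hommel et al.’s actual models.

```python
# Sketch of a "common coding" table: afferent channels that systematically
# co-vary with the system's own efferences are grouped as self-related;
# the rest are grouped as world-related. Data and criterion are invented.
from collections import defaultdict

# Each observation: (efference_active, {afferent_channel: active})
observations = [
    (True,  {"tendon_receptor": True,  "retina_flash": False}),
    (True,  {"tendon_receptor": True,  "retina_flash": True}),
    (False, {"tendon_receptor": False, "retina_flash": True}),
    (False, {"tendon_receptor": False, "retina_flash": False}),
    (True,  {"tendon_receptor": True,  "retina_flash": False}),
    (False, {"tendon_receptor": False, "retina_flash": True}),
]

def build_covariation_table(observations):
    """Count how often each afferent channel agrees with the efference signal."""
    agree, total = defaultdict(int), defaultdict(int)
    for efference, afferences in observations:
        for channel, active in afferences.items():
            total[channel] += 1
            if active == efference:
                agree[channel] += 1
    return {ch: agree[ch] / total[ch] for ch in total}

def classify(table, threshold=0.9):
    """Channels that reliably track the efferences are grouped as self-related."""
    return {ch: ("self-related" if rate >= threshold else "world-related")
            for ch, rate in table.items()}

table = build_covariation_table(observations)
print(classify(table))   # tendon_receptor -> self-related, retina_flash -> world-related
```

With these invented data, the tendon-receptor channel is classified as self-related and the visual channel as world-related, which is the kind of self-world partition of sensation the text describes.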

The process that leads the person to construct a full knowledge of himself requires the person to become aware of certain facts, the most noteworthy of which is that he is a being who can set his own aims, objectives and plans. This self-awareness makes him understand that his behavior is not fully determined from the outside, by environmental stimuli; that he can control himself and be independent of the control and influence of other social-cognitive agents; and that he can independently build his own knowledge of objects and of himself, that is, resist possible wrong information or suggestions.

Here an important qualification is in order. Even when the person is not explicitly (or reflectively) self-aware, either because he has not yet fully developed this capacity or because he is temporarily fully focused on a given event, he is pre-reflectively self-aware of his experience. As explained by Gallagher and Zahavi (2008; but see also Legrand 2007), pre-reflective self-awareness is tacit or intrinsic, non-observational (it does not imply an introspective observation of oneself), and non-objectifying (it does not turn one’s experience into a perceived or observed object). On the contrary, reflective self-awareness is explicit, observational and objectifying: it posits the self as an object, and as such it introduces a self-division or self-distanciation between the reflecting and the reflected-on experience. While pre-reflective self-awareness is the common, constitutive structural feature of any conscious state, and as such exists independently of reflective self-awareness, reflective self-awareness always presupposes pre-reflective self-awareness.

The peculiarity of reflective self-awareness compared to pre-reflective self-awareness (and other simpler forms of consciousness) can also be delineated in terms of temporal processing levels. Reflective self-awareness requires a temporal processing level lasting more than about 3 s, which Wittmann (2011) calls “mental presence.” This is the temporal horizon that is needed for the occurrence of the conscious experience of a narrative subject acting in its environment, remembering the past and planning the future: that is, a subject fully aware of himself as an individual differentiated from the objects surrounding him. Mental presence is made possible by various mechanisms and memory functions, which allow mental representations to be maintained in an active state for a certain period of time. Below the temporal processing level of about 3 s, reflective self-awareness is not possible: other forms of consciousness are instead possible, such as what Wittmann (2011) identifies as “experienced moments,” which group moments in the range of milliseconds (“functional moments”) on a time scale of up to 3 s, and provide a logistical basis for the conscious experience of nowness, that is, what is occurring now as immediate experience.

That every conscious mental state always involves pre-reflective self-awareness is evidenced by a number of observations. Firstly, as Husserl remarked: “Each thing that appears has eo ipso an orienting relation to the Body, and this refers not only to what actually appears but to each thing that is supposed to be able to appear. If I am imagining a centaur I cannot help but imagine it as in a certain orientation and in a particular relation to my sense organs” (Husserl 1989, §18a).

Secondly, it is always possible for us to return to an experience we had and remember it as our experience, even if it was not given thematically as our experience when we originally lived it through (if I am deeply engaged in reading a story, and someone interrupts my reading by asking what I am doing, I will reply that I am reading, despite the fact that my attention was on the story and not on myself: that is, in order to answer, I do not need to infer or observe who was reading). This would not be possible if the experience were completely anonymous, that is, lacking the property of intrinsically belonging to us (Gallagher and Zahavi 2008, p. 54).

Thirdly, all our conscious experiences are given immediately as ours. We do not first have a conscious experience and only later the feeling or inference that it was ours. The quality of it being ours, of being ourselves and not someone else who underwent it, is intrinsic to all our conscious experiences.

Fourthly, all conscious experiences of any object or event are always and unavoidably accompanied by a specific mode of givenness that is dependent not so much on the object or event as on the person. The same object can present itself in a variety of manners: it can be perceived, remembered, imagined, etc. These various manners are not at all an external feature of the object, but rather a feature added by the person. Without the person’s specific activity, objects and events would always look the same: they would never appear under different modes of givenness.

In view of this qualification, I will use the substantive “self” to refer to the system or machinery that underlies and makes possible (together with the working of attention) what Gallagher and Zahavi (2008) call “pre-reflective self-consciousness (or awareness).” This system comprises the person’s organism and mind, and therefore it includes not only the person’s physical dimension but also his psychological and mental dimension. Most importantly, in humans, the self is expressed via the central and peripheral nervous systems, in the sense that a person’s organism and mind, and his interactions with the environment are represented and mapped in networks of neurons, mainly located in the brain.

The self constitutes the invariant (albeit evolving) locus in the ever-changing flux of phenomena: an “invariant dimension” to which the multitude of changing experiences constantly refer (Gallagher and Zahavi 2008, p. 204). It is the basis out of which all conscious experiences emerge,Footnote 14 including the conscious experience of being a subject (as defined above, that is, as distinct-from-the-object). As such, the conscious experience of being a subject is to be considered like any other conscious experience of anything else: it stands on the same level as the others when compared to the self; that is, all conscious experiences derive from, and are made possible by, the self, which stands on a separate, more basic level. As already observed by some philosophers (see for example Avenarius 1891, §143; Mach 1890), every conscious experience of being a subject, such as that expressed in language by the pronouns “I” and “me,” has no special or higher ontological status compared to other conscious experiences. Actually, the former is a (dynamic and changing) product of the activity of the self, much the same as the latter. In this sense, it can be maintained that there can be consciousness without a subject, but not consciousness without a self.

It could be argued that some brain disorders (such as schizophrenia, alien hand syndrome and anosognosia) and altered states of consciousness (such as mystical experiences and drug-induced states) show that consciousness can occur without any experience of a phenomenal self. However, this kind of evidence suffers from well-known methodological problems (Legrand 2006; Marcel 2003). For example, Marcel (2003, p. 56) observes: “we are faced with the constant problem of what to infer from pathology, neurological or other: whether a psychological dissociation reveals a basic separation that is hidden by normal integrated functioning or whether it reflects an abnormal mode or some compensatory attempt to deal with this dysfunction.” These methodological problems originate primarily from conceiving the self as a monolith, and disappear once various levels of self are distinguished. For example, by distinguishing between the protoself, the core self and the autobiographical self,Footnote 15 Damasio (1999, 2010) shows that impairments of the autobiographical self allow the protoself and the core self to remain intact (whereas impairments of the protoself or the core self cause the autobiographical self to collapse). In this view, disorders of consciousness (such as schizophrenia, asomatognosia and anosognosia) that are usually reported as evidence of a dissociation between consciousness and the whole self, in reality appear to refer to a dissociation between consciousness and the autobiographical self. On the contrary, impairments of the core self or the protoself (akinetic mutism, absence seizures, epileptic automatism, coma, deep anesthesia) lead to a general disruption of consciousness. In sum, there can be consciousness without the autobiographical self, but never consciousness without some, albeit minimal, form of self (the core self or the protoself). In the same vein, Gallagher and Zahavi (2008) show that, when dealing with pathologies of the self, it is necessary to distinguish between a sense of ownership and a sense of agency. Phenomena like thought insertion and delusion of control, which seem to support the claim that some conscious states completely lack the sense of self (in the sense that what is experienced is not attributed to oneself), actually show a lack of sense of self-agency and a mis-attribution of agency to someone else, but not a lack of sense of ownership:

Subjects who experience thought insertion or delusion of control (…) are not confused about where the alien movements or thoughts occur—the sites of such movements or thoughts are their own bodies and minds. Some sense of ownership is still retained, and that is the basis of their complaint (…) The inserted thoughts or alien movements (…) cannot lack the quality of mineness completely, since the afflicted subject is quite aware that it is he himself rather than somebody else who is experiencing these alien thoughts and movements (Gallagher and Zahavi 2008, p. 210).

The main difference between systems that construct their CI and systems that cannot construct their CI

The person’s ability to construct his own CI, and consequently his knowledge of objects and of himself, provides him with a fundamental advantage over systems that cannot construct their CI: autonomy. While the working of the latter fully depends on an external agent or programmer, who defines what information is for the system and how the system processes it, the working of the former primarily depends on the person himself, who defines what information is for himself. To better understand this, let us compare the way information is dealt with by human beings with the way it is dealt with by other information systems, such as computers.

Computers work on the basis of what programmers have (consciously) decided to program, and of the programmers’ plans and goals. Computers just syntactically manipulate what they have been instructed to manipulate. Nothing more, nothing less. They do not care whether what they manipulate has a meaning or not, and what meaning it has. They do not care whether the symbols they manipulate have changed their original meaning. They are built on the basis of, and in order to manipulate, the kind of information that programmers provide (for this reason, the same information can be manipulated by different computers, and can be encoded in various forms). They are ultimately controlled by the programmer.

Compared to computers, which merely manipulate information according to strictly deterministic rules, we human beings produce the information that we manipulate, because we transform inputs into outputs in a non-deterministic and non-mechanical way (Hofkirchner 2011): we are able to act on, conceive of, and deal with the same object or event in different ways, and to act on, conceive of, and deal with different objects and events in the same way (Marchetti 2010). We produce information on the basis of our personal needs, intentions, plans, and the ends that we have set for ourselves (for this reason, every one of us produces, processes and experiences his own CI, and the CI we produce can only have the form of our personal, individual conscious experience). On the basis of that information, we continuously and ultimately control ourselves. We are autonomous agents.

Unlike computers, we care whether what we manipulate has a meaning or not. We semantically manipulate information: we manipulate symbols primarily on the basis of their meaning. More than that: we create and assign meanings, make something into a symbol, and make symbols change their original meaning. We behave and deal with objects differently according to the meaning we assign to them. And every one of us behaves and deals with the same object differently from the way the others do.

We can summarize the difference between human beings and computers by saying that while the former can choose between several alternatives (which are continuously and newly redefined and figured out based on the person’s present intentions, needs, goals, etc.), the latter have no possibility to choose, because they have only one possibility (Hofkirchner 2013): the one determined by the programmer.

The basis for the production of CI: the variations of the state of the self

How do we produce CI? More generally, what is it that makes it possible for us to define what information is? How can we create meanings?

The information we build is primarily based on our biological, naturally selected, and culturally acquired values. Our values provide the reference point, the baseline for defining what information is for us. Something is information and we assign it a meaning because it answers our needs, helps us manage our plans, and ultimately helps maintain and shape us. As Damasio (2010, p. 49) observes:

objects and processes we confront in our daily lives acquire their assigned values by reference to (the) primitive of naturally selected organism value. The values that humans attribute to objects and activities would bear some relation, no matter how direct or remote, to the two following conditions: first, the general maintenance of living tissue within the homeostatic range suitable to its current context; second, the particular regulation required for the process to operate within the sector of the homeostatic range associated with well-being relative to the current context.

More generally, in Zlatev’s (2002, p. 258) words, we can define meaning in the following way: “Meaning (M) is the relation between an organism (O) and its physical and cultural environment (E), determined by the value (V) of E for O” (see also Brizio and Tirassa 2016). The meaning that water has for us, for example, is mainly linked to the fact that water quenches thirst, which, in turn, ensures that our living tissues are maintained within a homeostatic range that is necessary for their normal functioning. We also attribute meaning to objects on the basis of other kinds of values. In the Christian tradition, for example, water has the religious meaning associated with the sacrament of Baptism. The meaning we assign to something can vary according to the context: water can assume a completely different meaning when we are drowning.Footnote 16

Data and pieces of information are built on the ground of our values via the primary vehicle of our self. The structure and working of our self developed on this ground; they are functional to the preservation of our values. Our self, which embeds and materializes our values, constitutes a stable frame of reference against which objects and events (including our activity and its results) can be mapped, processed and assessed. Something becomes a datum of CI because of the effect it has on, or implies for, the state of our self. In this sense, CI data are not objective entities existing independently of the person. On the contrary, something becomes a datum of CI because the effect it has on the self defines it as such.Footnote 17

The effect that something can have on the self can be of various kinds. Most frequently, objects and events induce some kind of (more or less temporary) change in the state of the self. This fact is well captured by Damasio, when he acknowledges that: “We become conscious when the organism’s representation devices exhibit a specific kind of wordless knowledge—the knowledge that the organism’s own state has been changed by an object—and when such knowledge occurs along with the salient representation of an object” (Damasio 1999, p. 25). However, this is not always the case. Sometimes, the events or objects of the world can leave us indifferent or untouched, and bring about no change in us. This is why our languages have verbs and nouns that allow us to express the conscious experience of a lack of change, and to say, for example, that we noted no differences, or that nothing happened. For the sake of simplicity, I will use the term “variation” to refer generally to both cases where the self actually undergoes a change and cases where the self does not undergo any change.

Moreover, it must be noted that variations in the state of the self can be induced not only by “external,” physical objects and events (as Damasio’s description sometimes seems to imply) but also by objects as I have more generally defined them in this work, that is, including imagined objects, ideas, emotions, dreams, and so on. Actually, variations in the state of our self can occur spontaneously, regardless of whether we are engaged by external objects and events, such as when the blood sugar level in our body drops. Furthermore, variations can not only be triggered by external stimuli, such as when a sudden flash of light surprises us: they can also be voluntarily initiated, such as when we purposefully look for something.

Variations in the state of the self can occur at various levels and scales. For example, when interacting with a physical object, our organism undergoes changes at the level of the specialized sensory system involved (such as the photoreceptors, and the various cerebral areas for vision). Moreover, some changes can occur because of the motor adjustments (of organs not strictly belonging to the involved specialized sensory system) that are necessary to obtain the perception. Additionally, changes can be induced by the emotional reaction generated by the perception of the object (Damasio 1994, 1999, 2010).

The variations of the state of the self can have various durations, ranging from very short intervals of the order of milliseconds to long intervals of the order of several seconds. Sometimes, these variations can induce specific behavioral reactions intended to reestablish the homeostatic range associated with well-being (such as when they originate from disruptions of the homeostatic state), but they can also require no specific corrective activity by the person.

These variations may trigger adaptation and learning processes that can lead to more or less permanent changes of the self. As psychologists have shown, whatever we process, whether consciously or unconsciously, usually contributes to modifying us, by making us learn something new, change our behavior, customs and plans, adapt to new circumstances, and so on.Footnote 18 These modifications affect various brain areas and structures, and various mental processes. Once implemented, the modifications alter the way our brain processes information: for example, repeated processing of a stimulus leads to habituation, and repeated practice to automatization of the practiced skill (Baars 1988). Consequently, taking it to the extreme, it can be said that we can never experience the same object twice in the same way, because the relationship between us and the object undergoes continuous transformations. In short, conscious and unconscious processes modify us and the way we process information, and therefore contribute to newly shaping information at every successive processing step. It goes without saying that the most relevant modifications are those which lead to the development of reflective self-awareness, which fundamentally enhances our autonomy by allowing us to set our own objectives and directly control ourselves.

The variations of the state of the self (whether endogenously or exogenously generated) provide the first raw material for the construction of the information we manipulate. The import that a datum has for us depends on the extent of the variation our values undergo. The content of the datum depends on the structures and levels of the self that are involved in the variation. Variations can directly affect the most stable parts of the organism, which Damasio (2010) identifies with the internal milieu and the viscera.Footnote 19 In this case, variations engender “primordial feelings” (Damasio 2010), such as pain, pleasure, hunger, thirst, cold, heat. Most frequently, variations involve parts of the organism such as the head, trunk and limbs, which are less stable from a developmental point of view (the musculoskeletal system of a toddler is not the same as that of an adult), but which nonetheless provide the fundamental schema of our body. Variations almost continuously affect the externally directed sensory organs (eyesight, hearing, smell, taste, touch), which play a crucial role in providing the organism with some fundamental qualitative aspects of conscious experience and a standpoint relative to the outside world. The combination of all these more or less stable parts of the self (internal milieu, viscera, musculoskeletal system, sensory organs, etc.) constitutes an “island of stability within a sea of motion. It preserves a relative coherence of functional state within a surround of dynamic processes whose variations are quite pronounced” (Damasio 2010, p. 200). This island is a sufficiently stable platform and source of continuity of the self, which allows for the detection and registration of the variations that the self undergoes.

The variations of the state of the self are, however, not sufficient to generate CI. A multiplicity of values governs our self, countless variations affect us at any given time, we are constantly overwhelmed by an incredible amount and variety of internal and external stimuli, the body is continuously performing different actions and changing shape accordingly, and so on. All this requires an additional mechanism that selects and emphasizes the data that are most relevant in the given situation and for our current plans and goals, and excludes the non-relevant data from being further processed. This mechanism is attention.

Attention

Attention allows us to deal with the vast amount of information with which we are continuously confronted, by selecting and focusing on those aspects that most count for our goals, that are most physically salient for us, or that most meet our selection history (what we have learnt in the past: Awh et al. 2012). This selection process can be achieved in various ways: via an exogenous, involuntary, bottom-up or an endogenous, voluntary, top-down processing (Carrasco 2011; Chica et al. 2013); via a stimulus-driven or goal-driven processing (Corbetta and Shulman 2002; Corbetta et al. 2008)Footnote 20; via internally or externally addressing the focus of attention (Chun et al. 2011); by applying attention with variable levels of intensity (La Berge 1983); by narrowly focusing attention or widely distributing it (Alvarez 2011; Chong and Evans 2011; Demeyere and Humphreys 2007; Treisman 2006); by sustaining or maintaining attention for a variable, albeit limited, amount of time (La Berge 1995), even when it is distributed over separate objects (Eimer and Grubert 2014); and so on.

Psychologists have long studied attention as a mechanism capable of coping with the limits of our sensory, perceptual and memory systems in managing the flow of information with which we are constantly confronted. By allowing for the selection of the information, attention reduces the input to a manageable amount: it isolates and amplifies pieces of information, which can be variously combined to yield theoretically infinite chains of constructs. As observed by VanRullen et al. (2007), attention might have evolved from a more basic sampling process that ubiquitously characterizes sensory systems (saccades in vision, sniffs in olfaction, whisker movements in rat somatosensation, and even electrolocation in electric fish) as a more economical means of covertly sampling endogenous representations.

Moreover, as argued by Duncan (2013, p. 36), attention proves to be an effective tool in dealing with the complex problems posed by the environment: in fact, it allows for the segmentation of the flow of information into “attentional episodes,” each episode admitting into consideration only the contents of momentary, focused subproblems. More specifically, discrete attentional processing has various advantages from a purely computational point of view. According to Buschman and Miller (2010), restricting computations to discrete windows of time would: (a) ensure that informative spikes occur with the temporal precision that is both necessary for integration by downstream neurons and for spike-timing dependent plasticity; (b) act to stabilize and organize the neural network and its computations: periods of inhibition may act to “reset” the network to a base state, effectively limiting the number of states that neurons could obtain; (c) allow for easier coordination of processing within and between brain regions, by providing a specific moment at which information must be available for computation in a specific region, and at which the outcome of the computation is available.

Finally, the attentional selection process has the side effect of creating new experiential dimensions on top of the ones from which they originate. By selecting and combining otherwise unrelated elements, we can imagine and simulate new events, scenarios and conditions that we would have never consciously experienced if we had not been endowed with selective and constructive capacities. As observed by Baumeister and Masicampo (2010, p. 958), the full power of human consciousness consists in using the mental capacity for constructing sequential thoughts to conduct simulations during wakefulness, without relying on sensory input.

Although the working of attention can be theoretically conceived as an uninterrupted, continuous process, which rapidly switches between different targets, evidence from various research methods and techniques seems to favor the hypothesis that attention operates in a periodic, pulse-like manner.

By analyzing the correlation between detection performance for attended and unattended stimuli and the phase of ongoing EEG oscillations, Buschman and Miller (2010) showed that detection performance for attended stimuli fluctuated over time along with the phase of spontaneous oscillations in the theta (≈ 7 Hz) frequency band just before stimulus onset. This fluctuation was absent for unattended stimuli. Although this kind of evidence still raises some methodological concerns, in that the changes in the initial phase of the perceptual cycle explain only a small percentage of the variability in perception (VanRullen 2016), further support for the hypothesis of the periodicity of attention comes from behavioral and psychophysical measurements (Dugué et al. 2015; Fiebelkorn et al. 2013; Landau and Fries 2012; Song et al. 2014; VanRullen et al. 2007). For example, VanRullen et al. (2007) found that attention, even when focused on a single target location, samples information periodically like a blinking spotlight, and that it cannot be allocated at any given time but only at specific phases of an oscillatory cycle.
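The following toy simulation is only meant to illustrate the hypothesis at stake; its parameters (a 7 Hz rhythm, an arbitrary depth of modulation) are my own assumptions and do not reproduce the cited experiments. It shows how detection of an attended stimulus could fluctuate with the phase of an ongoing oscillation at stimulus onset, while detection of an unattended stimulus would not.

# A toy simulation (assumed parameters, not an analysis of the cited data) of
# phase-dependent detection: attended stimuli are detected more often at some
# phases of a ~7 Hz oscillation, unattended stimuli show no such dependence.
import numpy as np

rng = np.random.default_rng(1)
theta_freq = 7.0                           # Hz
onsets = rng.uniform(0, 10, size=5000)     # stimulus onset times (s)
phase = (2 * np.pi * theta_freq * onsets) % (2 * np.pi)

p_attended   = 0.5 + 0.15 * np.cos(phase)  # detection probability depends on phase
p_unattended = np.full_like(phase, 0.3)    # no phase dependence

detected_att = rng.random(onsets.size) < p_attended
detected_un  = rng.random(onsets.size) < p_unattended

# Bin hit rates by phase to reveal (or not) the periodic modulation.
bins = np.linspace(0, 2 * np.pi, 9)
idx = np.digitize(phase, bins) - 1
for label, detected in [("attended", detected_att), ("unattended", detected_un)]:
    rates = [detected[idx == b].mean() for b in range(8)]
    print(label, np.round(rates, 2))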

The periodic nature of attention is also evidenced by studies in which oscillatory brain responses are entrained by periodic stimulation (Jones et al. 2002), and by the special role that the alpha phase plays in the attentional blink phenomenon (Hanslmayr et al. 2011). Furthermore, it is also noticeable at wider temporal scales: spontaneous eyeblinks, which occur 15–20 times per minute on average, are closely correlated with attentional processing in that they tend to occur at breakpoints of attention, such as the end of a sentence while reading, a pause by the speaker while listening to a speech, and implicit breakpoints while viewing videos. This close correlation has led Nakano et al. (2013, p. 702) to hypothesize that “eyeblinks are actively involved in the process of attentional disengagement during cognitive behavior by momentarily activating the default-mode network while deactivating the dorsal attention network.”

In sum, this pattern of results clearly shows, on the one hand, that attention exerts its effect in a periodic fashion, and, on the other hand, that the periodicity of attention is the product of brain oscillations.

The form of conscious experience is determined by the activity that attention performs to detect the variations of the self

In “The basis for the production of CI” section, we have seen that something becomes a datum of CI because of the variation it induces in the state of the self. It is the variation in the state of the self that defines something as a datum of CI. Given the multiplicity of values governing the self, and the vast amount of internal and external stimuli we continuously face, the variations of the state of the self must be filtered by a selective mechanism such as attention. Attention then provides a kind of “template” that shapes all our conscious experiences; that is, our conscious experiences assume the form determined by the activity that attention performs to detect the variations of the self. Let us consider some of the most relevant features of conscious experience determined by this activity: periodicity, egocentric spatial organization, and phenomenal quality.

Periodicity

The most apparent effect that attention exerts on conscious experience is visible in the periodicity of conscious experience: just as attention operates in a periodic fashion, so too conscious experience is formed by a succession of discrete states, each one being unique and different from the others.

As William James (1890/1983) observed:

The number of things we may attend to is altogether indefinite, depending on the power of the individual intellect, on the form of the apprehension, and on what the things are. When apprehended conceptually as a connected system, their number may be very large. But however numerous the things, they can only be known in a single pulse of consciousness for which they form one complex ‘object’ (p. 383).

James further highlights this feature when considering the perception of time:

In the experience of watching empty time flow (…) we tell it off in pulses (…) The discreteness is, however, merely due to the fact that our successive acts of recognition or apperception of what it is are discrete (p. 585).

James’ observation is backed up by the existence of perceptual phenomena such as apparent simultaneity (Hirsh and Sherrick 1961; Szymaszek et al. 2009), the continuous wagon-wheel illusion (Simpson et al. 2005; VanRullen and Koch 2003; VanRullen et al. 2006a, b), and perceived causality (Shallice 1964). Apparent simultaneity, for example, shows that there is a certain minimal interstimulus interval (variable across senses, and estimated to be around 20–50 ms) below which two successive events are perceived as simultaneous.
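As a back-of-the-envelope sketch of apparent simultaneity, assuming for illustration a 30 ms threshold within the 20–50 ms range mentioned above:

# A simple illustration (threshold assumed at 30 ms for the example): two events
# whose onsets fall within the minimal interstimulus interval are perceived as
# one simultaneous percept rather than as two successive events.
def perceived_as_simultaneous(t1_ms, t2_ms, threshold_ms=30):
    return abs(t1_ms - t2_ms) < threshold_ms

print(perceived_as_simultaneous(100, 120))  # True: 20 ms apart, fused
print(perceived_as_simultaneous(100, 160))  # False: 60 ms apart, perceived as successive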

Although some authors have argued against the theory of discontinuous perceptual cycles (Allport 1968; Kline and Eagleman 2008), a growing body of neurophysiological investigation confirms that our conscious experience of the surrounding world as a seamless flow of information is actually the result of the combination and assembly of distinct processing epochs, which are produced by a periodic processing whose physiological basis is provided by electrical neural oscillations (Baumgarten et al. 2015; Blais et al. 2013; Busch et al. 2009; Doesburg et al. 2009; Fingelkurts and Fingelkurts 2006, 2014; Fingelkurts et al. 2010; Kranczioch et al. 2007; Mathewson et al. 2009; Neuling et al. 2012; Romei et al. 2010; Van Dijk et al. 2008; Varela et al. 1981; Wutz and Melcher 2014).

It is still unclear which parameter of neural oscillations—amplitude, phase consistency, or phase coupling—predicts periodicity in perception better than others (Hanslmayr et al. 2011). In this regard, a promising model (accounting for cognitive control functions in general) is put forward by Sadaghiani and Kleinschmidt (2016). According to this model, three large-scale brain networks differentially and hierarchically modulate α-oscillations: the cingulo-opercular/insular (CO) network upregulates global α-oscillation power, which underpins tonic alertness; the dorsal attention (DAN) network reduces focal α-oscillation power, which underpins selective attention; the frontoparietal (FP) network modulates long-range α-band phase synchrony, which underpins phasic adaptive control. The amplitude of widespread α-oscillations (under CO control) hierarchically affects both selective α-power reductions and long-range α-phase-locking.

Likewise, it still remains unclear why different frequencies correlate with different periodic perceptual phenomena. Various hypotheses could be supported. For example, one can think that the sampling frequency could vary as a function of the kind of stimulation, synchronizing with stimulation, or that it could evolve without synchronizing with the external world, as a passive ongoing, random oscillation (Blais et al. 2013).

Despite all these open questions, however, the bulk of current studies clearly shows that our conscious perceptual experience has a periodic nature and that this periodic nature is the product of brain oscillations.

Although it has not yet been definitively ascertained that the periodicity of conscious experience is directly determined by the periodicity of attention, in my view the former can quite reliably be inferred to be caused by the latter, given the bulk of empirical evidence showing that attention determines conscious perception. This is shown by psychological studies of visual perception (Carrasco et al. 2004, 2008; Carrasco 2011; Liu et al. 2009; Treisman 2006), perception of time (Brown 1985; Coull et al. 2004; Hicks et al. 1976, 1977; Mattes and Ulrich 1998; Shore et al. 2001), inattentional blindness (Mack and Rock 1998) and change blindness (Rensink et al. 1997).

It should be noted that currently there is not yet a shared view on the nature of the relationship between attention and consciousness (for a review, see Tsuchiya and van Boxtel 2013). The positions range from those who maintain that attention and consciousness are distinct phenomena that need not occur together (Bachman 2011; Koch and Tsuchiya 2006; Lamme 2003; van Boxtel et al. 2010) to those who maintain that the two are inextricably linked (De Brigard and Prinz 2010; Mack and Rock 1998; Posner 1994). However, as highlighted by various scholars (De Brigard and Prinz 2010; Koivisto et al. 2009; Kouider et al. 2010; Marchetti 2012a; Srinivasan 2008), the view that there can be consciousness without some form of attention originates primarily from the failure to notice the varieties of forms and levels of attention (Alvarez 2011; Chun et al. 2011; Demeyere and Humphreys 2007; La Berge 1995; Lavie 1995; Nakayama and Mackeben 1989; Pashler 1998; Treisman 2006) and consciousness (Bartolomeo 2008; Edelman 1989; Iwasaki 1993; Northoff 2013; Tulving 1985; Vandekerckhove and Panksepp 2009). Not all forms of attention produce the same kind of consciousness, and vice versa not all forms of consciousness are produced by the same kind of attention. There are cases of consciousness in the absence of a certain form of top-down attention, but in the presence of some other form of top-down attention. There are cases of consciousness in the absence of top-down attention but in the presence of some other form of attention, such as bottom-up attention. There can be low-level or preliminary attention without consciousness. But there are never cases of consciousness in complete absence of some form of attention (Marchetti 2012a).Footnote 21 In this view, mental contents remain unconscious as long as they are not processed by attention, or if the level of attention processing them is below a certain threshold.

The egocentric spatial organization of conscious experience

As we have seen in the previous sections, the existence of consciousness is closely linked to the existence of a self. This is due to the biological function of consciousness. Consciousness provides a means for an organism composed of multiple and complexly interconnected parts to be represented in a unified and condensed way, as a single and unique entity, as an individual (Damasio 2010; James 1890). Among other things, this helps the organism to avoid responding in conflicting ways to external and internal stimuli, prioritize its activities according to the most relevant homeostatic demand, devise plans and actions that best fit its existence as a whole (rather than favoring some of its parts to the detriment of the others), and coordinate its behavior accordingly: in a word, to maintain and expand the well-being of the organism in its entirety.

The process of reduction of the complexity inherent to the composite structure of an organism into the “single voice” of a unique individual was phylogenetically achieved via multiple steps. Among these steps, one of the most fundamental was the creation of representational patterns (such as topographic maps or transient neural patterns) capable of mapping the organism’s activity, the external world, and the interactions of the organism with the external world (Damasio 2010). The ultimate step of this process of reduction is realized by attention.

Attention is a unique means of realizing this “single voice.” Not only, as we have seen, does it allow—thanks to its periodic nature—for the formation of “pulses of consciousness,” which make one experience single, unitary contents per unit of time, but it also allows for the realization of a single “point of view” from which objects and events are experienced. Whatever we perceive, experience, feel, and think, is always perceived, experienced, felt and thought from a unique perspective or angle.

This is made possible by the fact that attention originates and is deployed from a single point, which is located inside our body. All the objects that we consciously experience are arrayed around this single point, and can be localized relative to it. It is precisely this point that allows us to say that objects are “near” or “far from” us, “left” or “right” of us, “around,” “in front of” or “behind” us, etc. According to Merker (2013b, p. 9), this point “is located at the proximal-most end of any line of sight or equivalent line of attentional focus,” and is characterized by the fact that it “is excluded from the contents of consciousness by the same geometric necessity that prevents an eye from viewing itself, though it is the instrument for viewing all else” (Merker 2013b, p. 10).

All our conscious experiences are egocentrically organized around this point from which attention is deployed. This point constitutes the center of a reference system that defines the space in which all the objects of conscious experience are located. Whenever it moves, “it is always in the same centered position with regard to the other coordinates of the system. The center of phenomenal space anchors the rest of the coordinate system” (Revonsuo 2006, p. 168). The distance of an object from that point, and its direction relative to it, allows us to uniquely localize the object in space.
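The following minimal sketch (a two-dimensional simplification, with all names being illustrative assumptions of mine) renders this idea of an egocentric reference system: an object is localized by its distance from the point from which attention is deployed and by its direction relative to it.

# A minimal 2-D sketch of an egocentric reference system: objects are localized
# by distance and bearing relative to the single point from which attention is
# deployed. Names, the 2-D simplification and the heading convention are assumptions.
import math

def egocentric(obj_xy, origin_xy, heading_rad):
    """Return (distance, bearing in degrees) of an object relative to the attentional origin."""
    dx, dy = obj_xy[0] - origin_xy[0], obj_xy[1] - origin_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - heading_rad   # angle relative to where we are "facing"
    return distance, math.degrees(bearing)

origin = (0.0, 0.0)        # the point inside the body from which attention is deployed
cup = (1.0, 1.0)           # a hypothetical object in the surroundings
dist, bearing = egocentric(cup, origin, heading_rad=0.0)
print(f"{dist:.2f} m away, {bearing:.0f} degrees to the left" if bearing > 0
      else f"{dist:.2f} m away, {abs(bearing):.0f} degrees to the right")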

Moreover, because of the typical asymmetry involved by attentional focusing (attention is always directed “toward something”), this point partitions the world into the asymmetric space of what monitors and what is monitored (Merker 2013a, b). We see the objects of the world from our perspective; we are the “here” with respect to which the objects are “there”; the objects that are nearer to us occlude the view of the objects that are farther from us, etc.

Attentional focusing gives all our conscious experiences a spatial dimension: “all phenomenal events take place within what is experienced as being a single spatial volume” (Revonsuo 2006, p. 168). Even the most abstract conscious experiences are spatially localized: our emotions are located somewhere in our body; we feel our memories as originating and located in ourselves; we have ideas and thoughts in our mind, and another person’s thoughts and ideas are located far away from us, that is, in his mind.Footnote 22

In sum, every act of focusing of attention comprises a point of origin, which is located in our body, and a direction toward which it points. Consequently, whatever is focused on by attention appears in our consciousness as possessing a spatial quality that is defined by our act of focusing attention: it is oriented in a certain way with respect to us (turning toward me rather than away from me); it is located to the right rather than to the left of me; we perceive it as having a face or a front and a back; etc. That is, it inherits a perspective that it did not originally possess in itself, but that is fully determined by us, by our self. This perspective accompanies the object like a shadow (even though the consistency of this shadow can vary with time and according to the plans and goals of the person): it is the hallmark of our omnipresent self, of the “pre-reflective self-consciousness” (Gallagher and Zahavi 2008) that characterizes all our conscious experiences.

Finally, it should be noted that this egocentric organization of conscious experience enhances the coordination between perception and action (Hommel et al. 2001; Land 2012; Merker 2013b) by simplifying “the conversion of locational differences in phenomenal space to directional displacements in our most ubiquitous category of behavioral output, namely the targeting movements of spatial orienting behavior” (Merker 2013b, p. 3).Footnote 23

The phenomenal quality of conscious experience

Attention is also the main contributor to the phenomenal quality or what-it-is-like of conscious experience, that is, the fact that what it is like for us to experience red is very different from what it is like to experience yellow.

As I have argued (Marchetti 2010), the phenomenal quality of conscious experience is brought about by attentional activity through the modulation of the energy state of the neural substrate that underpins attention itself. More specifically, I propose that:

(a) Attentional activity can be performed thanks to the neural energy that a certain neural substrate (let us call it the organ of attention) provides;

(b) When attentional activity is performed to detect the variation of the state of the self, the latter proportionally modulates the energy state of the organ of attention;

(c) It is precisely the modulation of the energy state of the organ of attention that generates the phenomenal aspect of consciousness. The quantitative aspect of conscious experience is defined by the amount of variation of the energy level of the organ of attention caused by the modulation; the qualitative aspect of conscious experience is defined by the area of the self where the variation occurs and the part of the organ of attention focusing on it.

My argumentation is based on the following observations and considerations. In order for us to perform any attentional activity (whether voluntarily or involuntarily), we need to use our neural energy. This is primarily evidenced by the constraints imposed on our attentional activity by the limited amount of available neural energy: there is a limit to the possibility of sharing attention (when one task demands more resources, there will be less capacity left over for the other tasks), as well as to the possibility of increasing mental processing capacity by increasing mental effort and arousal; and an extensive use of attention, as demanded by complex and time-consuming tasks, requires some time for the consumed energy to be recovered. The fact that attentional activity is underpinned by a neural energy pool is also evidenced by the flexibility with which attention can be deployed: attention can be flexibly (albeit up to a certain extent) allocated from moment to moment according to the person’s needs, goals and motivations; and the amount of attentional capacity can vary according to motivation and arousal.Footnote 24

Whenever attentional activity is performed, a modulation of the state and working of the neural energy pool underpinning attention occurs. This is primarily noticeable in those cases where the modulation is brought to its extremes, such as when a person’s attentional (and consequently also physical and in general mental) activity is made to dramatically slow down or even stop. This happens, for example, when we feel a sudden pain that absorbs all our attention, to the extent of blocking it (Haikonen 2003), or must shut our eyes because of an intense light. In such cases, in order to reestablish the normal state, we must either divert our attention toward something else, or try to remove the cause of the pain that fully draws our attention.

The conscious phenomenal experiences of pain and the blinding light consist precisely in the complete absorption or blocking of the attentional (as well as physical and mental) activity resulting from the modulation of the state of the neural energy: it is this modulation that brings about the phenomenal quality of the conscious experience related to the attentional activity we have performed.

The transformation of the variations of the self into modulations of the energy state of the organ of attention, allows for the translation of the various kinds of variations of the self (chemical, electrical, mechanical, etc.) into the common language or code of consciousness. It is this common language that makes it possible to subsume the various dimensions of life under a common conscious experiential dimension, thus making them comparable and differentiable.

This transformation constitutes what Bateson defines as “a difference which makes a difference,” that is, information, and more specifically the information that is conveyed by conscious experience. The “difference” that makes the difference, is the variation of the self detected by attention; the “difference that is made,” is the modulation of the energy state of the organ of attention consequent upon the detection of the variations of the self; most importantly, the “making a difference” is virtual, in the sense that the order that can be built up on the basis of such modulations (1) can be of various kinds (space, time, schemas, series, etc.); (2) can be used for purposes and in experiential dimensions different from the ones that originally occasioned the difference; (3) does not need to be permanently active (Marchetti 2012b, 2016).
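To make this mapping more concrete, here is a deliberately simplistic sketch, my own illustrative rendering rather than an implementation of the proposal: a variation of the state of the self, characterized by the area where it occurs and by its magnitude, modulates the energy state of a hypothetical organ of attention; the size of the modulation yields the quantitative aspect of the resulting conscious datum, and the affected area its qualitative aspect. All names and numbers are arbitrary assumptions.

# A toy rendering of the proposal above (illustrative assumptions throughout):
# a variation of the self modulates the energy state of a hypothetical "organ
# of attention"; the size of the modulation gives the quantitative aspect of
# the conscious datum, the affected area of the self gives its qualitative aspect.
from dataclasses import dataclass

@dataclass
class Variation:
    area_of_self: str      # e.g. "viscera", "retina", "musculoskeletal"
    magnitude: float       # how strongly the self's state is perturbed (0..1)

def conscious_datum(variation, baseline_energy=1.0, gain=0.8):
    modulation = gain * variation.magnitude       # the "difference that is made"
    quantitative = modulation / baseline_energy   # intensity of the experience
    qualitative = variation.area_of_self          # which kind of experience it is
    return qualitative, quantitative

print(conscious_datum(Variation("viscera", 0.9)))   # an intense primordial feeling
print(conscious_datum(Variation("retina", 0.2)))    # a faint visual experience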

This virtual “making a difference,” together with the periodic nature of attentional processing, enables the segmentation of the continuous and undifferentiated stream of bodily and environmental information into elemental data. These data can be used as basic mental (whether perceptual or purely abstract) atoms or building blocks for a constructive process in which (by means of some supplementary organ, such as working memory, as we will see in the next section) they are variously and recurrently assembled, combined and related.Footnote 25

This constructive process, by allowing for the possibility of creating new experiential dimensions (such as art and ethics) and using symbolic means (Baumeister and Masicampo 2010; Benedetti 2011), paves the way to very fundamental enhancements of our autonomy.

Complex forms of conscious experience

Attention alone is not sufficient for the more complex forms of conscious experience to occur. To be sure, attention ensures the selection and shaping of the basic pieces of information of conscious experience. However, another mechanism is needed in order to combine and assemble them. This mechanism is working memory.

Working memory (WM)

Generally speaking, WM helps to maintain information in a heightened state of activity in the absence of the corresponding (sensory, motor, cognitive, emotional, etc.) input over a short period, in order to manipulate and access it during ongoing cognition and action, and update it in memory. Moreover, it helps to correctly discriminate between relevant and irrelevant information with regard to the task to be performed, by preventing the interference of automatic tendencies and routines (Unsworth and Engle 2007). In this sense, as Broadway and Engle (2011, p. 1) point out, WM is “not directly about remembering per se, but instead reflects a more general ability to control attention and exert top-down control over cognition.”

More specifically, WM allows, among other things, for the sequentially ordered combination of elements. The role of WM in flexibly and freely combining content elements into new structures is explicitly theorized by Oberauer (2009). According to Oberauer, one of the main functions of WM is to build and maintain new structural representations by establishing and holding temporary bindings between contents (objects, events, words) and contexts (such as positions in a generic cognitive spatial or coordinate system, or argument variables in structure templates). Oberauer identifies this function or “component” of WM with the “region of direct access” for the declarative part of WM and with the “bridge” for the procedural part of WM.
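A minimal data-structure sketch, loosely inspired by this account (the class, its capacity limit and its displacement rule are illustrative assumptions of mine, not Oberauer’s formal model), shows how new structures can be built simply by establishing and updating temporary bindings between contents and contexts:

# A minimal sketch of WM as a set of temporary content-context bindings.
# Capacity limit and displacement rule are illustrative assumptions.
class WorkingMemory:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.bindings = {}                       # context (e.g. position) -> content

    def bind(self, context, content):
        if context not in self.bindings and len(self.bindings) >= self.capacity:
            oldest = next(iter(self.bindings))   # displace the oldest binding
            del self.bindings[oldest]
        self.bindings[context] = content

    def retrieve(self, context):
        return self.bindings.get(context)

wm = WorkingMemory()
for position, word in enumerate(["the", "cat", "sat", "down"]):
    wm.bind(position, word)                      # bind each word to a serial position
print([wm.retrieve(i) for i in range(4)])        # ['the', 'cat', 'sat', 'down']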

Neurophysiological studies have started to elucidate this system. Experimental findings using the OA (operational architectonics) methodology in EEG analysis clearly point to the fact that the encoding, maintenance and retrieval of phenomenal mental objects by WM are critically dependent on dynamic millisecond-range synchronization of multiple operations performed by local neuronal assemblies that operate on different temporal (oscillations) scales nested within the same operational hierarchy (Fingelkurts et al. 2010; Monto 2012). In particular, medium life span OMs (operational modulesFootnote 26) of brain activity (that “cover” certain cortical areas) seem necessary to achieve successful memorization (Fingelkurts et al. 1998, 2003). Indeed, although memory encoding, retention and retrieval often share common regions of the cortex, the operational synchrony of these areas is always unique and presented as a mosaic of nested OMs for each stage of the short-term memory task (Fingelkurts et al. 1998, 2003). When there are too few or too many OMs and their lifespan is either too short or too long, then such conditions lead to cessation of efficient memorization.

Roux and Uhlhaas (2014) propose that cross-frequency interactions between theta-band oscillations (4–7 Hz) and gamma-band oscillations (30–200 Hz)—such as the coordination of cycles of gamma-band oscillations by an underlying theta rhythm—underpin the organization of sequentially ordered WM items (see also Lisman and Jensen 2013).
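The following toy rendering of such a theta-gamma coding scheme (the frequencies and the item-to-cycle mapping are illustrative assumptions) shows how the serial order of WM items could be carried by the phase, within a slower theta cycle, of the gamma cycle to which each item is assigned:

# A toy rendering of theta-gamma coding: items are assigned to successive gamma
# cycles nested within one theta cycle, so serial order is carried by theta phase.
# Frequencies and the mapping are illustrative assumptions.
theta_freq = 6.0     # Hz, one theta cycle lasts about 167 ms
gamma_freq = 40.0    # Hz, one gamma cycle lasts 25 ms

def schedule_items(items, theta_freq=theta_freq, gamma_freq=gamma_freq):
    """Assign each item the time (and theta phase) of its gamma cycle within one theta cycle."""
    theta_period = 1.0 / theta_freq
    gamma_period = 1.0 / gamma_freq
    schedule = []
    for i, item in enumerate(items):
        t = i * gamma_period                     # i-th gamma cycle within the theta cycle
        phase = 360.0 * (t / theta_period)       # theta phase, in degrees
        schedule.append((item, round(t * 1000), round(phase)))
    return schedule

for item, t_ms, phase_deg in schedule_items(["A", "B", "C", "D"]):
    print(f"{item}: {t_ms} ms into the theta cycle, theta phase {phase_deg} deg")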

By supporting temporary bindings between virtually any content, WM enables the assembly, combination and compositionality of various “pulses of consciousness” (Marchetti 2014). This allows for the creation of two of the most relevant features of consciousness: the stream of consciousness, and the various modes of givenness of conscious experience.

The stream of consciousness

Despite the periodicity of conscious experience, our conscious mental life seems to flow continuously like a stream in which “the transition between the thought of one object and the thought of another is no more a break in the thought than a joint in a bamboo is a break in the wood” (James 1890/1983, pp. 233–234). This fact led William James to adopt the metaphor of “the stream of consciousness.” How is it possible to reconcile and explain the apparent contradiction present in the metaphor of the stream of consciousness as something flowing uninterruptedly, but which is, nevertheless, composed of single “pulses” of consciousness? How can we explain the quality of the conscious experience as something continuous and coherent and, at the same time, made of states of mind each of which is inevitably unique, different from the others, and characterized by its own qualities?

I think that in order to provide a proper answer to these questions, we must first distinguish the various time scales at which the feeling of continuity of conscious experience can occur. After all, it is one thing to experience the feeling of being a person with a long and well-established past history and a future ahead of himself; it is quite another to experience the short-lived feeling of continuity encompassing a specific, definite action, such as filling a glass of water and drinking it. Different time scales imply different feelings of continuity.

For example, studies on the phenomenology of temporal perception of events (Pöppel 1997, 2004; Pöppel and Bao 2014; Wittmann 2011) show that one can identify at least three different time scales at which successive events can be combined to form distinct subjective experiences, each possessing its own specific qualitative characteristics. According to Wittmann (2011), on a first, basic level there is the “functional moment,” an elementary temporal building block of perception in the range of milliseconds, which has no perceivable duration because it processes individual events as co-temporal. On a second level, successive functional moments are grouped on a time scale of up to around 3 s, yielding the “experienced moment,” where events are perceived as occurring in an extended now. Within the experienced moment, successive events are strongly and orderly bound together: when listening to a metronome at moderate speed, we do not hear so much a train of individual beats, as perceptual gestalts having an accent on every nth beat, such as “1-2, 1-2” or “1-2-3, 1-2-3.” If, on the contrary, the metronome is too fast, we experience a fast train of beats that does not contain any temporally ordered structure of distinct events. Likewise, if the metronome is too slow, we perceive only individual beats which are not related to each other. A third level of integration exceeding about 3 s leads to “mental presence,” a temporal platform of multiple seconds in which an individual is aware of himself as a person having his own identity extending over time, acting in his environment, remembering the past and planning the future. That is, “mental presence” provides the basis for the emergence of a self-aware subject.

Most important for the current discussion are the levels of the “experienced moment” and “mental presence,” because they define the main kinds of temporal windows within which conscious activities can occur.

The feeling of continuity of conscious experience occurring in the operating range of up to 3 s, which Wittmann (2011) defines as the “experienced moment” and Pöppel and Bao (2014) as the “subjective present,” is primarily made possible by the activity of WM. In fact, by means of WM, single pulses of consciousness can be combined so as to produce ordered, albeit limited in time, sequences of conscious experiences.Footnote 27 As we have seen, neurophysiological studies (Fingelkurts et al. 1998, 2003, 2010; Roux and Uhlhaas 2014) have suggested that the organization of sequentially ordered WM items can be explained by the nested, synchronized activity of populations of neurons oscillating at different frequencies, which are coupled and interact with each other. In this view, it can be hypothesized that the feeling of continuity we experience in the “experienced moment” results from the orderly and sequential combination of single pulses of consciousness (which are underpinned by brain oscillations of a certain frequency band, such as the gamma band) performed by means of brain oscillations of a lower frequency band (such as the theta band, or lower frequency ranges, such as infra-slow fluctuations < 0.1 Hz).

A similar proposal can be found in Northoff (2013), even though not specifically related to the activity of WM. In Northoff’s view, the brain’s intrinsic activity exhibits a certain degree of temporal continuity, which is constituted by low–high frequency entrainment via phase-locking or phase-synchronization between slow frequency oscillation and faster frequency oscillations. It would be precisely such a temporal continuity provided by the brain’s intrinsic activity that makes it possible to phenomenally experience the temporal continuity of consciousness.

It is interesting to note that according to some authors, WM is not a specific cognitive system, isolated and distinct from the other cognitive systems. On the contrary, WM emerges when attention is internally oriented toward the neural systems (such as motor and sensory ones) that were originally involved in the processing of the event or object to be remembered or reprocessed. In so doing, attention would help reinitiate and maintain the activation of these neural systems, thus allowing for the reprocessing of the event or object (Lückmann et al. 2014; Postle 2006). That is, the activity of WM could be ultimately traced back to the working of attention. In this light, and considering the conscious phenomena (such as temporal continuity) that the entrainment of low and high frequency brain oscillations could underpin (Northoff 2013; Roux and Uhlhaas 2014), attention appears to be a composite, but structured, process, which would allow for the concurrent processing of a single item and its embedding into a sequence of, or combinations with, some other items.

As to the feeling of continuity characterizing the longer temporal intervals that Wittmann (2011) defines “mental presence,” WM is not sufficient to produce it, and some other mechanisms and memory systems are needed. Mental presence provides the person with a temporal platform that allows him to possess an identity extending over time, embracing all that he did in the past and will do in the future. In order to exist, such a platform requires that the fundamental temporal dimensions of past and future are made available, which, as we will see in the next section, depends on a constructive process articulated around a temporal coordinate system having the “present” as its reference point.

Modes of givenness of conscious experience

As phenomenologists argue: "we are never conscious of an object simpliciter, but always of the object as appearing in a certain way: as judged, seen, described, feared, remembered, smelled, anticipated, tasted, etc." (Gallagher and Zahavi 2008, p. 119). Objects always appear to us in different modes of givenness: it is one thing to imagine a lemon, quite another to perceive it, and still another to expect to see it.

The different modes of givenness of an object, which are constitutive of the very way the object appears to us, are not due to the external features of the object, but rather to us, to our activity. If we consider, for example, our ability to remember personal past events, various kinds of observations show that it results from an active process of construction that we perform (Marchetti 2014; Rosenfield 1988; Suddendorf et al. 2009). Similarly, the ability to simulate specific personal episodes that may potentially occur in the future (defined as episodic future thought: Szpunar 2010) involves a process of active construction of events that have not yet occurred, as when future thought depends on a (novel) recombination of episodic details (whether of perceptual or imaginal origin) into a hypothetical event.

As suggested by phenomenological analysis (Thompson 2008), the subjective experience of remembering an event derives from “adding” the temporal dimension of past to the event. In remembering, one lives experiences as having occurred in the “past” and not as occurring now. In a similar way, the subjective experience of imagining a future event derives from “adding” the temporal dimension of future to the event.

According to my analysis (Marchetti 2014), the operation of "adding" a (past or future) temporal dimension to an event is performed thanks to WM, which makes it possible to bind the event to a position in a temporal coordinate system (Oberauer 2009) that has the "experienced moment" (Wittmann 2011; or "subjective present" in Pöppel and Bao 2014) as its reference point. Once a temporal event is placed in such a coordinate system, it can be related to this reference point, and consequently assumes either a past or a future property.
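
A minimal sketch of how such a binding operation could be formalized is given below. It is only an illustrative toy, not an implementation of WM, and the class and function names are hypothetical: events are bound to positions in a coordinate system whose origin is the experienced moment, and their temporal mode of givenness (past, present or future) is derived from that position.

```python
from dataclasses import dataclass

# Toy formalization of binding events to positions in a temporal coordinate
# system whose origin is the "experienced moment": negative positions are
# read as past, positive ones as future. Names and values are hypothetical.

@dataclass
class BoundEvent:
    content: str      # the remembered or imagined event
    position: int     # slot in the coordinate system (0 = experienced moment)

def temporal_mode(event: BoundEvent) -> str:
    """Derive the temporal mode of givenness from the event's position
    relative to the reference point (the experienced moment)."""
    if event.position < 0:
        return "remembered (given as past)"
    if event.position > 0:
        return "anticipated (given as future)"
    return "perceived (given as present)"

episodes = [
    BoundEvent("yesterday's walk on the beach", position=-2),
    BoundEvent("the lemon on the table", position=0),
    BoundEvent("next week's trip", position=+3),
]

for e in episodes:
    print(f"{e.content!r} -> {temporal_mode(e)}")
```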

This “adding” process requires some intermediate steps, such as the mental construction of a temporal coordinate system and the most primitive experience of time, that is, duration, which in turn are realized thanks to attention and WM (Marchetti 2009).

Empirical evidence supporting this analysis is still partial and indirect, and dedicated investigations are needed to validate it. Hill and Emery's work (2013) confirms the role played by WM in mental time travel, specifically when imagining future events. The close link between attention and the conscious experience of reliving past events was reviewed by De Brigard (2012).

Conclusion

Consciousness provides a unique means for us to master, and autonomously act in, our environment. It allows us to cope with the environment in a flexible and autonomous way, according to our needs and self-generated goals and intentions, rather than to respond automatically and blindly to environmental stimuli. This is made possible by the fact that it processes information in a unique and distinctive way: (1) it produces information, rather than purely transmitting it; (2) the information it produces is meaningful for us, who consciously experience it; and (3) the information it produces is always individuated, in the sense that it has a meaning only for the person experiencing it.

We are continuously transformed by our experiences (Baars 1988), and the needs of our organism change with time; likewise, our environment is always changing. Therefore, we need a mechanism that updates us in real time about the effects that objects can have on us, where an object is now located relative to us, whether we can cope with it, and so on. This mechanism is consciousness. At every step, consciousness brings forth the results of the (new) interactions between us and the object, producing the information that we need in order to understand how we can behave.

The information provided by consciousness (CI) is meaningful for us because it is produced on the basis of our biological, naturally selected, and culturally acquired values. Something becomes information and acquires a meaning for us because it is related to our needs, plans, and goals (Zlatev 2002).

CI is always individuated because it is evolutionarily embedded, socially altered, and subjectively grounded (Jonkisz 2015): that is, it is formed in the unique and particular interactions occurring between the person and his environment.

CI allows for the creation of order of various kinds—such as space, time, series, successions, sequences and schemas—out of the chaotic, uninterrupted and unordered stream of stimuli an organism is continuously faced with (Hofkirchner 2013; Marchetti 2012b, 2016). The order generated by consciousness allows for the gradual, parallel emergence of the person, of objects, and of the various relations (spatial, temporal, causal, etc.) between the person and objects.

The production of individuated, meaningful information by consciousness occurs thanks to three major components: the self, attention and working memory.

The self comprises the person's organism and mind. It is primarily expressed via the central and peripheral nervous systems, which map the person's body, his environment, and the person's interactions with the environment. It develops around, and is centered on, those values of the person (among which the biological ones are the most fundamental) that help maintain and expand the well-being of the person in its entirety. It is the primary means by which the complexity inherent in the composite structure of an organism is reduced to the "single voice" of a unique individual. Finally, it provides a reference system that, albeit evolving, is sufficiently stable (Damasio 2010) to define the variations that are relevant for an organism.

It is precisely the variations of the state of the self that supply the data for CI. The import that a datum has for us depends on the extent of the variation our values undergo; the content of the datum depends on the structures and levels of the self involved in the variation.

Attention is the tool that allows for the detection, filtering and isolation of those variations of the self that are most relevant in the given situation and for the person's current needs, plans and goals. It originates and is deployed from a single point located inside our body, which represents the center of the self. All our conscious experiences are egocentrically organized around this center. Consequently, whatever attention focuses on appears in our consciousness as possessing a spatial quality defined through this center and the direction toward which attention is focused.
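
The following toy sketch illustrates, under purely illustrative assumptions (the listed variations, weights and spatial tags are invented for the example), the idea that attention selects the variation of the self that is most relevant to the person's current needs and goals, and that whatever it selects is tagged egocentrically with respect to the center of the self.

```python
# Toy sketch of attention as a selector of self-state variations: each variation
# is weighted by its relevance to the person's current needs/goals, and attention
# isolates the most relevant one. All variables and weights are illustrative.

# Hypothetical variations of the state of the self (magnitude of change)
variations = {
    "thirst":           0.7,
    "sound_behind_me":  0.4,
    "itch_on_left_arm": 0.2,
}

# Current needs/goals assign relevance weights to each kind of variation.
goal_weights = {
    "thirst":           1.0,   # currently looking for something to drink
    "sound_behind_me":  0.6,
    "itch_on_left_arm": 0.1,
}

relevance = {k: variations[k] * goal_weights[k] for k in variations}
selected = max(relevance, key=relevance.get)

# Whatever attention selects is organized egocentrically: it is tagged with a
# direction relative to the center of the self (toy spatial tags).
egocentric_direction = {
    "thirst":           "interoceptive (inside the body)",
    "sound_behind_me":  "behind the body's center",
    "itch_on_left_arm": "left, peripersonal",
}

print(f"attention selects: {selected} "
      f"({egocentric_direction[selected]}, relevance {relevance[selected]:.2f})")
```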

According to my analysis, attentional activity also determines two other highly relevant features of conscious experience: periodicity and phenomenal quality.

Neurophysiological investigation shows that the apparently seamless flow of conscious experience results from the combination and assembly of distinct processing epochs. Likewise, as shown by VanRullen et al. (2007), attention, even when focused on a single target, samples information periodically, like a blinking spotlight. This observation, combined with the considerations that both attention and consciousness are underpinned by brain oscillations and that attention is in general a prerequisite for consciousness (Marchetti 2012a), makes it highly probable that the periodicity of conscious experience is caused by the periodicity of attention.
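
A minimal sketch of the "blinking spotlight" idea: a continuously changing stimulus is sampled only at discrete attentional moments, so what reaches further processing is a sequence of epochs rather than a continuous record. The sampling rate used here is merely illustrative, as are all other parameters.

```python
import numpy as np

# Toy sketch of attention as a "blinking spotlight": a continuous input signal
# is sampled only at discrete attentional moments, yielding distinct processing
# epochs instead of a continuous stream. Parameters are illustrative only.

fs = 1000                               # input resolution (Hz)
t = np.arange(0, 1.0, 1 / fs)           # one second of continuous stimulation
stimulus = np.sin(2 * np.pi * 1.5 * t)  # slowly changing external stimulus

attention_rate = 7                      # attentional samples per second (illustrative)
sample_times = np.arange(0, 1.0, 1 / attention_rate)
sample_indices = (sample_times * fs).astype(int)

# Each attentional sample yields one discrete processing epoch.
epochs = stimulus[sample_indices]

for k, (ts, value) in enumerate(zip(sample_times, epochs)):
    print(f"epoch {k}: sampled at t = {ts:.3f} s, stimulus value = {value:+.2f}")
```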

The phenomenal quality of conscious experience is brought about by attentional activity through the modulation of the energy state of the neural substrate that underpins attention itself. More specifically, attentional activity is performed thanks to the neural energy provided by this neural substrate (which I call the organ of attention). When attentional activity is deployed to detect a variation of the state of the self, the latter proportionally modulates the energy state of the organ of attention. It is this modulation that generates the phenomenal aspect of consciousness. The quantitative aspect of conscious experience is determined by the amount of variation of the energy level of the organ of attention caused by such a modulation; the qualitative aspect, by the area of the self affected by the variation and the part of the organ of attention focusing on it.
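
This proposal can be rendered schematically as follows. The sketch is a conceptual toy, not an implementation: the names (such as SelfVariation and modulate_organ_of_attention) and all numbers are hypothetical, and it is meant only to show how the quantitative and qualitative aspects would depend on different factors, the former on the amount of energy modulation, the latter on the area of the self involved.

```python
from dataclasses import dataclass

# Toy rendering of the idea that phenomenal quality arises from the modulation
# of the energy state of the "organ of attention": the amount of modulation
# yields the quantitative aspect (intensity), while the area of the self
# involved fixes the qualitative aspect. Names and numbers are hypothetical.

@dataclass
class SelfVariation:
    area: str          # area of the self affected (e.g., "body surface")
    magnitude: float   # size of the variation of the self's state

@dataclass
class Experience:
    intensity: float   # quantitative aspect of the conscious experience
    quality: str       # qualitative aspect of the conscious experience

def modulate_organ_of_attention(variation: SelfVariation,
                                baseline_energy: float = 1.0) -> Experience:
    # The variation of the self proportionally modulates the organ's energy level.
    energy_change = variation.magnitude * baseline_energy
    # The part of the organ focusing on that area of the self fixes the quality.
    quality = f"as focused on the {variation.area}"
    return Experience(intensity=energy_change, quality=quality)

exp = modulate_organ_of_attention(SelfVariation(area="body surface", magnitude=0.8))
print(f"intensity = {exp.intensity:.2f}, quality = {exp.quality}")
```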

The self and attention, despite being necessary for the production of CI, are not sufficient. In order to occur, complex forms of conscious experience, such as the various modes of givenness of conscious experience and the stream of consciousness, need another mechanism: working memory (WM). WM allows for the combination and assembly of the basic pieces of information selected by attention.

Even though this article has provided a description of the three basic components (the self, attention and WM) responsible for the production of CI, some other questions must still be addressed and investigated. One of these concerns the problem of whether these components are sufficient for the production of CI. It could be claimed that they are not, because there can be cases in which their combined activity does not produce any CI. Consider the case of currently available self-driving cars. One could observe that such a car has some, albeit rudimentary, kind of self (it can represent itself as being located in a particular place and time, surrounded by objects that it distinguishes from its own self), attention (it can scan the environment, select important information, such as a road sign, and allocate processing according to its current goal), and WM (it can temporarily hold information for further processing and manipulation), but no phenomenal consciousness. However, what this case shows, I argue, is not so much that the three components are insufficient to produce CI, as that a partial implementation of their essential functions or "essential performances" (Negrotti 1997, 1999) does not suffice to produce CI. As we have seen, attention performs various functions: it allows for the detection, filtering and isolation of the variations of the self, it determines the egocentric spatial organization and periodicity of conscious experience, and it is responsible for its phenomenal quality. In self-driving cars, most of these functions are implemented, but not the capacity to produce phenomenal quality. Indeed, self-driving cars can only work on the basis of information defined by an external programmer: they cannot define for themselves what counts as information and use it to work, which is precisely what the phenomenal quality of conscious experience allows a person to do.

This explains why self-driving cars lack CI (as well as, among other properties, the capacity to autonomously set their own aims, which in turn depends on the availability of CI). Therefore, the answer to the question of whether the three components are sufficient for the production of CI is yes, provided that all their "essential performances" are implemented and they perform all their essential functions. As the Theory of the Artificial (Negrotti 1997, 1999) shows, this is not always the case, above all when more than one "essential performance" must be reproduced in an artificial device.