On the Irreducibility of Consciousness and Its Relevance to Free Will

Abstract

Integrated information theory of consciousness (IIT) starts from phenomenological axioms and argues that an experience is an integrated information structure. IIT holds that a system of connected elements—for example a network of neurons, some firing and some not—intrinsically and necessarily generates information, because its mechanisms and present state constrain possible past and future states. This intrinsic, causal kind of information—called cause-effect information (CEI)—measures “differences that make a difference” from the intrinsic perspective of the system. Moreover, a subset of elements generates information only to the extent that the cause and effect repertoires it specifies cannot be reduced to the product of the repertoires specified by independent components (integrated information, ϕ). Finally, only maxima of integrated information (max ϕ) matter. A maximally irreducible cause-effect repertoire constitutes a concept. A complex is a set of elements specifying a maximally irreducible constellation of concepts (maxΦ), giving rise to a maximally integrated conceptual information structure or quale. Under certain conditions, such as the presence of noise and irreversibility, a maximum of integrated information may be associated with a “macro” spatiotemporal grain (say, neurons over hundreds of milliseconds) rather than with a “micro” grain (say, subatomic particles over microseconds). IIT accounts, in a parsimonious manner, for many seemingly disparate empirical observations about consciousness, and makes theoretical predictions concerning the necessary and sufficient conditions for the presence and quality of consciousness in newborns, brain-damaged patients, animals, and machines.

Moreover, IIT has direct relevance for issues related to free will. According to IIT, when a choice is made consciously, in addition to satisfying the requirements of autonomy, understanding, self-control, and alternative possibilities, the choice is maximally irreducible. This is because the choice cannot be attributed to anything less than the entire complex that brings it about, nor is anything more than the complex required, as the complex provides the maximally irreducible set of cause-effects. If maximal integrated information is generated by a complex at a macroscale in space or time (groups of neurons, hundreds of milliseconds), the requirement for indeterminism is also satisfied: a conscious choice, while maximally and irreducibly causal, is also necessarily underdetermined and thus unpredictable. In this view, indeterminism is not to be thought of as a sprinkle of randomness that instills some arbitrariness into a preordained cascade of mechanisms, decreasing their causal powers. Rather, indeterminism provides a backdrop of ultimate unpredictability against which information integration acts to impose autonomy, understanding, self-control, and alternative possibilities. Thus, according to IIT, a choice is the freer, the more it is determined intrinsically, meaning that it can only be accounted for by considering a large set of concepts, beliefs, memories, and wishes, all acting within a maximally irreducible complex. That is to say, a choice is the freer, the more it is conscious.

Notes

  1.

    Descartes started his philosophical investigations from the axiom “cogito ergo sum,” though his “cogito” emphasized the thinking aspect of consciousness rather than the more general notion of having an experience.

  2.

    Contrasting with this intrinsic perspective, which is observer-independent, is the extrinsic perspective of an external observer: the observer can ask how information is encoded, communicated or stored given the system’s state and the observer’s expectations (prior distribution, e.g., based on observing the system) and assumptions about the system.

  3.

    The distance D between two probability distributions p and q can be measured in various ways. Perhaps the most general way is to consider the information distance between them, i.e., the maximum of the Kolmogorov complexity of one distribution given the other (Bennett et al. 1998). See Tononi (2013) for further considerations.
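    The Kolmogorov-based information distance is uncomputable in practice, so any concrete implementation must substitute a computable distance. Purely as an illustration (not the chapter's own choice of D, and assuming scipy is available), the sketch below compares two made-up repertoires of a two-element binary system using the earth mover's (Wasserstein) distance as a stand-in:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical repertoires over the four past states (00, 01, 10, 11) of a
# two-element binary system; the numbers are invented for illustration.
states = np.arange(4)
p = np.array([0.50, 0.25, 0.25, 0.00])   # cause repertoire specified by a mechanism
q = np.array([0.25, 0.25, 0.25, 0.25])   # unconstrained (maximum-entropy) repertoire

# Earth mover's distance between the two repertoires, used here as a
# computable stand-in for the distance D discussed in this note.
d = wasserstein_distance(states, states, u_weights=p, v_weights=q)
print(f"D(p, q) ≈ {d:.3f}")
```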

  4.

    Partitions, indicated by x, can be evaluated by performing the same computations after injecting noise (do(Hmax)) into the partitioned links of the input–output matrix. To compare different partitions fairly and find the MIP, it is necessary to normalize by the information capacity of each partition.
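    The following is a minimal sketch of such a comparison, under simplifying assumptions that are not part of the chapter: the repertoire over three binary elements is invented, the distance is taken as total variation, the effect of noising the partitioned links is approximated by taking the product of the parts' marginal repertoires, and each partition's information capacity is taken to be the size, in elements, of its smaller part:

```python
import itertools
import numpy as np

def bipartitions(n):
    """Yield all non-trivial bipartitions (A, B) of the elements 0..n-1."""
    elements = range(n)
    for r in range(1, n // 2 + 1):
        for part_a in itertools.combinations(elements, r):
            part_b = tuple(e for e in elements if e not in part_a)
            if len(part_a) == len(part_b) and part_a > part_b:
                continue  # skip mirror images of even splits
            yield part_a, part_b

def marginal(p, keep):
    """Marginalize a joint tensor over binary elements, keeping singleton axes for broadcasting."""
    drop = tuple(i for i in range(p.ndim) if i not in keep)
    return p.sum(axis=drop, keepdims=True)

# Invented repertoire over three binary elements (shape 2 x 2 x 2, sums to 1).
rng = np.random.default_rng(0)
p_whole = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)

candidates = []
for part_a, part_b in bipartitions(3):
    # Product of the repertoires specified independently by the two parts.
    p_parts = marginal(p_whole, part_a) * marginal(p_whole, part_b)
    # Total variation distance between whole and partitioned repertoires
    # (a stand-in for the distance D; see note 3).
    phi_raw = 0.5 * np.abs(p_whole - p_parts).sum()
    # Normalize by an assumed information capacity: the size of the smaller part.
    candidates.append((phi_raw / min(len(part_a), len(part_b)), phi_raw, (part_a, part_b)))

phi_normalized, phi_mip, mip = min(candidates)
print(f"MIP = {mip}, phi across the MIP = {phi_mip:.3f}")
```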

  5.

    Where the empty set [] is only allowed on either P or S, but not both.

  6.

    If several CER(S) yield the same max, one takes the CER(S) of largest scope (accounting for the most), where ϕMIP(S) > 0, its subsets R have lower or at most equal ϕMIP, and its supersets T have lower ϕMIP: ϕMIP(R) ≤ ϕMIP(S) > ϕMIP(T), for all R ⊆ S and all T ⊇ S. If there are multiple maximal CER(S) each with the same scope, then at any given time only one is realized as a concept, although which one is indeterminate.
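    Read as a selection rule, this says: keep a candidate set S when no subset exceeds its ϕMIP and every superset falls strictly below it. A toy illustration with invented ϕMIP values:

```python
def winning_scopes(phi_mip):
    """Given phi_MIP values for candidate sets (dict: frozenset -> float), return the
    sets whose phi_MIP is at least that of every subset and strictly above that of
    every superset, as in phi_MIP(R) <= phi_MIP(S) > phi_MIP(T)."""
    winners = []
    for s, phi_s in phi_mip.items():
        if phi_s <= 0:
            continue
        subsets_ok = all(phi_mip[r] <= phi_s for r in phi_mip if r < s)
        supersets_ok = all(phi_mip[t] < phi_s for t in phi_mip if t > s)
        if subsets_ok and supersets_ok:
            winners.append(s)
    return winners

# Invented phi_MIP values over elements {A, B, C}.
phi = {
    frozenset("A"): 0.10,
    frozenset("AB"): 0.35,
    frozenset("ABC"): 0.20,
}
print([sorted(s) for s in winning_scopes(phi)])   # [['A', 'B']]
```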

  7.

    One could say that trying various CER and their partitions to find maxϕMIP is the informational/causal equivalent of “cutting to the chase.” It is also related to finding the optimal tradeoff between the transmission of relevant information and the compression/efficiency of the channel.

  8.

    In neural terms, the fact that, out of all possible causes of a neuron’s firing, the input that actually caused its firing remains undecidable from the intrinsic perspective, also means that “illusions” are inevitable. Based on the exclusion postulate, the intrinsic perspective entails the simplifying attribution of cause always to the core (most irreducible) cause, rightly or wrongly. Usually, in an adapted system, the actual cause and the core cause will be similar enough, but occasionally the actual cause may be quite different from the core cause, in which case an “illusion” ensues (this applies also to the case of a neuron’s firing being caused by microstimulation).

  9.

    The exclusion postulate is related to the principle of sufficient reason—in fact, it enforces a principle of least reducible reason; to the principle of least action; to maximum likelihood approaches and to information minimization/compression (though it is causal, not just statistical); and of course ultimately to Occam’s razor.

  10.

    In this example, the cause repertoire component of a concept (backward, input, retrodictive, receptive concept) can be taken to refer to a classic invariant—a set of inputs equivalently compatible with the present state of a certain mechanism (e.g., tables, faces, places, and so on); the effect repertoire component (forward, output, predictive, projective concept) can be taken to refer to “Gibsonian” affordances—a set of outputs equivalently compatible with the present state of a certain mechanism (e.g., the consequences/associations/actions primed by seeing a table, face, place, and so on).

  11.

    Within conceptual information, one can distinguish a backward portion (specified by the cause repertoires), or understanding; and a forward portion (specified by the effect repertoires), or control.

  12.

    Note that constellations of concepts must satisfy several requirements: (1) they must be physically realizable; (2) they must be self-consistent (that is, concepts that exclude/contradict each other cannot coexist; i.e., their product should never yield a distribution with zeros everywhere); (3) they must be irreducible. If these requirements are satisfied, ideally a constellation of concepts should also: (1) have as many concepts as possible; (2) have concepts that are as irreducible as possible; (3) be as informative as possible about concept space, i.e., sample it as uniformly as possible (acting as representative “prototypes” of possible contingencies).

  13.

    Unless, of course, the interactions become so strong that maxΦMIP for the union exceeds that of each part, in which case the parts merge into a single complex.

  14.

    The conventional formulation of Occam’s razor, “entia non sunt multiplicanda praeter necessitatem,” is probably due not to Occam or his teacher Duns Scotus, but to John Ponce. It has important applications in the context of Solomonoff’s theory of inductive inference and compressibility (Solomonoff 1964; see also Hutter 2005). If one can compress a wiring diagram into a product of smaller diagrams (e.g., by finding k-connected subgraphs) plus some residual terms, one identifies separate integrated conceptual information entities that cannot be reduced further (complexes), and beyond which no additional “higher” entities exist. Each complex is then characterized by a particular integrated conceptual information structure, within which different repertoires specified by subsets of elements exist only to the extent that they are not reducible.
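    In the simplest case, factoring a wiring diagram means splitting it into groups of elements with no connections between them. A minimal sketch of that case (the adjacency matrix is invented and the networkx dependency is an assumption of the example, not of the chapter):

```python
import numpy as np
import networkx as nx

# Invented wiring diagram over six elements: elements 0-2 are wired among
# themselves, elements 3-5 likewise, with no connections between the groups.
A = np.zeros((6, 6), dtype=int)
A[0, 1] = A[1, 2] = A[2, 0] = 1
A[3, 4] = A[4, 5] = A[5, 3] = 1

G = nx.from_numpy_array(A, create_using=nx.DiGraph)

# Weakly connected components are the candidate independent factors: if the
# diagram splits like this, its input-output behaviour is the product of the
# factors' behaviours, and no "higher" entity spanning both groups is irreducible.
factors = [sorted(c) for c in nx.weakly_connected_components(G)]
print(factors)   # [[0, 1, 2], [3, 4, 5]]
```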

  15.

    The complete characterization of an experience or quale would thus require specifying all of the concepts (cause-effect repertoires in Q) of a complex. From the intrinsic perspective, these concepts provide the information necessary to distinguish that experience from any other. From the extrinsic perspective, knowing these distributions and their degree of irreducibility, one would know all there is to be known about that experience. It is interesting to ask how much information that is (in terms of algorithmic complexity or incompressible information). Clearly, the input–output matrix of a system (or transition probability matrix, TPM), if known and available to perform manipulations (injecting noise), could be used to derive all the quantities discussed here. However, the information in the TPM is both uncompressed and implicit. It is uncompressed because a large TPM may reduce to the product of smaller TPMs, as indicated by ϕMIP = 0. More generally, finding maxϕMIP and maxΦMIP over subsets of elements would indicate how best to compress a large TPM into the product of smaller, maximally irreducible TPMs, plus some extra terms. Also, it may turn out that a TPM at the finest spatio-temporal grain can be compressed to a coarser spatio-temporal grain with no loss (or indeed a gain) in information. This aspect is captured again by finding maxΦMIP over different spatio-temporal scales. The TPM is also implicit: while it contains all the information necessary to find complexes and specify their quale, making them explicit requires work. One must extract the repertoires specified by each element and subset of elements, and find the MIP to establish which subsets integrate information, which sets of elements are maximally irreducible (concepts and then complexes), and at which spatio-temporal grain size. This requires examining the effects of a large number of perturbations (performing partitions and injections of noise/max entropy) within a large combinatorial space. At a minimum, one would need to calculate the probability distributions specified by each element, from which one can calculate all the distributions specified by subsets of elements (as the product of distributions at lower levels in the power set). From this one can establish, through appropriate partitions, which subsets specify maximally irreducible points and, finally, which maximally irreducible subsets constitute complexes. It would be interesting to know whether the most economical characterization (e.g., algorithmic complexity) of a particular conceptual information structure would correspond to the minimal set of causal processes generating it. In this case, obtaining complexes and their quale (integrated conceptual information structure) would be equivalent to finding the most compressed description of the causal structure of a physical process.
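    As a small, self-contained illustration of the compression idea (the TPMs and the factoring heuristic below are assumptions of the example, not the chapter's algorithm): if two binary elements never interact, their joint TPM is exactly the tensor product of two one-element TPMs and nothing is lost by the reduction; if one element drives the other, the factored product no longer reproduces the joint TPM and the residual signals irreducibility.

```python
import numpy as np

def factor_residual(tpm_joint):
    """Try to compress a two-element binary TPM into the product of two one-element
    TPMs. Returns the largest deviation between the joint TPM and the product of its
    factors; 0 means the joint TPM is fully reducible (phi_MIP = 0)."""
    t = tpm_joint.reshape(2, 2, 2, 2)            # axes: a, b, a', b'
    tpm_a = t.sum(axis=3).mean(axis=1)           # P(a' | a), averaging over b
    tpm_b = t.sum(axis=2).mean(axis=0)           # P(b' | b), averaging over a
    return np.abs(tpm_joint - np.kron(tpm_a, tpm_b)).max()

# Invented one-element TPMs (rows: current state, columns: next state).
tpm_a = np.array([[0.9, 0.1], [0.2, 0.8]])
tpm_b = np.array([[0.5, 0.5], [0.7, 0.3]])

# Case 1: no interaction. The joint TPM is the product of its factors,
# so the residual is zero (up to floating point) and the "whole" reduces away.
print(f"{factor_residual(np.kron(tpm_a, tpm_b)):.3f}")   # 0.000

# Case 2: element B copies element A while A keeps its own state. The joint
# TPM can no longer be written as a product, and the residual is positive.
copy_gate = np.zeros((4, 4))
for a in (0, 1):
    for b in (0, 1):
        copy_gate[2 * a + b, 2 * a + a] = 1.0
print(f"{factor_residual(copy_gate):.3f}")                # 0.500
```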

  16.

    In any case, describing a quale would not be the same as being that quale.

  17.

    Within matching, one can distinguish a backward portion (specified by the cause repertoires), or representation capacity; and a forward portion (specified by the effect repertoires), or action capacity.

  18.

    Note that in an unpredictable environment it is important not only to have a large repertoire of possible actions, but also to have many different ways of achieving the same effect, i.e., degeneracy (Tononi et al. 1999). High degeneracy implies both high effective information and high integration in the forward repertoire component of the concepts available to a complex. In general, if information integration is high, a small subset of elements within a complex should be able to affect many other elements (pleiotropy). At the same time, many subsets of elements should be able to produce the same effect over a small subset of outputs (degeneracy).

  19.

    This is because <M> is bounded by <maxΦMIP>.

  20.

    Since consciousness undoubtedly exists (indeed, it is the only thing whose existence is beyond doubt), if each individual consciousness is an integrated conceptual information structure, then integrated information must be a fundamental ingredient of reality—as fundamental as mass, charge, or energy (Tononi 2008).

  21.

    It is interesting to consider how the notion of maximally irreducible set of past causes of future effects maps onto accounts of trajectories of dynamical systems, for example accounts of how an element may be enslaved by one of two weakly coupled attractors, though being subjected to causal influences from both. More generally, it is interesting to consider how the intrinsic notion of causation indicated here maps onto an extrinsic notion of causation developed along parallel lines (Hoel et al., in preparation). In the extrinsic perspective, one takes a given event (i.e. an observed state) and considers what past event actually caused it (as opposed to what could have potentially caused it, as in the intrinsic view) and what are its actual future effects (as opposed to potential effects). In this way, it is possible to define an extrinsic notion of cause-effect power based on the sufficiency (reliability) and necessity (specificity) of the mechanisms mediating the transition from one event to the next, and the size of the repertoire of counterfactuals. By applying exclusion, one can then proceed to partitions to identify maximally irreducible (“core”) cause-effects as well as sets of cause-effects (“cause-effect complexes”).

  22.

    That is, one should not double-count intrinsic causes, just as one should not double-count information. In terms of dynamical systems, this means that micro-variables are “enslaved” by macro-variables.

  23.

    In this sense, integrated information can be said to be a measure of intrinsic causation. And a complex—defined from the intrinsic perspective as a maximally irreducible set of maximally irreducible cause-effect repertoires (concepts)—can be said to be truly causa sui.

References

  • Bennett, C. H., Gacs, P., Li, M., Vitanyi, P. M. B., & Zurek, W. H. (1998). Information distance. IEEE Transactions on Information Theory, 44, 1407–1423.

  • Fried, I., Mukamel, R., & Kreiman, G. (2011). Internally generated preactivation of single neurons in human medial frontal cortex predicts volition. Neuron, 69(3), 548–562.

  • Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer.

  • Kane, R. (2005). A contemporary introduction to free will. New York: Oxford University Press.

  • Libet, B., et al. (1991). Control of the transition from sensory detection to sensory awareness in man by the duration of a thalamic stimulus. The cerebral “time-on” factor. Brain, 114(Pt 4), 1731–1757.

  • Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7(2), 224–254.

  • Soon, C. S., et al. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543–545.

  • Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215(3), 216–242.

  • Tononi, G. (2010). Information integration: Its relevance to brain function and consciousness. Archives Italiennes de Biologie, 148(3), 299–322.

  • Tononi, G. (2013). Integrated information theory of consciousness: An updated account. Archives Italiennes de Biologie, in press.

  • Tononi, G., Sporns, O., & Edelman, G. M. (1996). A complexity measure for selective matching of signals by the brain. Proceedings of the National Academy of Sciences of the United States of America, 93(8), 3422–3427.

  • Tononi, G., Sporns, O., & Edelman, G. M. (1999). Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences of the United States of America, 96(6), 3257–3262.

  • Wegner, D. M. (2003). The illusion of conscious will. Cambridge, MA: MIT Press.

Acknowledgements

Part of the material presented here is derived from previous publications, especially Tononi, G., Integrated Information Theory of Consciousness: An Updated Account, Archives Italiennes de Biologie, 2012. I thank Chiara Cirelli, Lice Ghilardi, Christof Koch, Barry van Veen, Virgil Griffith, Atif Hashmi, Erik Hoel, Matteo Mainetti, Melanie Boly, Andy Nere, Masafumi Oizumi, Umberto Olcese, and Puneet Rana for many helpful discussions and for developing the software used to compute integrated conceptual information structures (M. Oizumi, A. Nere, A. Hashmi, U. Olcese, P. Rana). This work was supported by a Paul Allen Family Foundation grant and by the McDonnell Foundation.

Author information

Correspondence to Giulio Tononi.

Copyright information

© 2013 Springer Science+Business Media, LLC

Cite this chapter

Tononi, G. (2013). On the Irreducibility of Consciousness and Its Relevance to Free Will. In: Suarez, A., Adams, P. (eds) Is Science Compatible with Free Will?. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5212-6_11
