Foundations and Formalizations of Self-Organization

Part of the book series: Advanced Information and Knowledge Processing ((AI&KP))

Abstract

In view of the various streams and directions of the field of self-organization, it is beyond the scope of the present introductory chapter to review all the currents of research in the field. Rather, the aim of the present section is to address some of the points judged as most relevant and to provide a discussion of suitable candidate formalisms for the treatment of self-organization. In the author’s opinion, discussing formalisms is not just a vain exercise, but allows one to isolate the essence of the notion one wishes to develop. Thus even if one disagrees with the path taken (as is common in the case of not yet universally agreed upon formal notions), starting from operational formalisms serves as a compass guiding one towards notions suitable for one’s purposes. This is the philosophy of the present chapter. The chapter is structured as follows: in Sect. 2.2, we present several central conceptual issues relevant in the context of self-organization. Some historical remarks about related work are then made in Sect. 2.3. To illustrate the setting, a brief overview of some classical examples of self-organizing processes is given in Sect. 2.4. Sections 2.5 and 2.6 introduce the two main information-theoretic concepts of self-organization that the present chapter aims to discuss. One concept, based on the ϵ-machine formalism by Crutchfield and Shalizi, introduces self-organization as an increase of (statistical) complexity with time. The other concept suggests measuring self-organization as an increase of mutual correlations (measured by multi-information) between different components of a system. In Sect. 2.7, finally, important properties of these two measures as well as their distinctive characteristics (namely their power to identify temporal versus compositional self-organization) are discussed, before Sect. 2.8 gives some concluding remarks.
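
To make the second of these notions concrete, the following minimal Python sketch (an illustration only, not code from the chapter; the toy distributions and function names are my own) computes the multi-information I(X_1, …, X_k) = Σ_i H(X_i) − H(X_1, …, X_k) of a small system of binary components at an "early" and a "late" time step. An increase of this quantity over time is what the compositional reading of self-organization discussed below would register.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array."""
    p = p.ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def multi_information(joint):
    """I(X_1,...,X_k) = sum_i H(X_i) - H(X_1,...,X_k) for a joint pmf array."""
    k = joint.ndim
    marginal_entropies = sum(
        entropy(joint.sum(axis=tuple(a for a in range(k) if a != i)))
        for i in range(k)
    )
    return marginal_entropies - entropy(joint)

# Toy example (assumed, purely illustrative): three binary components.
# "Early" state: independent fair coins, so the multi-information is 0.
p_early = np.full((2, 2, 2), 1 / 8)

# "Late" state: the components have become strongly correlated.
p_late = np.zeros((2, 2, 2))
p_late[0, 0, 0] = p_late[1, 1, 1] = 0.45
p_late[0, 1, 0] = p_late[1, 0, 1] = 0.05

print(multi_information(p_early))  # ~0.0 bits
print(multi_information(p_late))   # ~1.5 bits: correlations have increased
```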


Notes

  1. Emergence is briefly discussed in Sects. 2.5.2 and 2.6.1.

  2. As an example, the energy balance of a real biological computation process will operate at the level of ATP metabolism and respect its restrictions; this is still far from the Landauer limit (see the order-of-magnitude sketch after these notes).

  3. Here we ignore technical details necessary to properly define the dynamics.

  4. Note that, in general, the construction of an ϵ-machine from the visible process variables X is not necessarily possible, and the reader should be aware that the Shalizi/Crutchfield model is required to fulfil suitable properties for the reconstruction to work. I am indebted to Nihat Ay and Wolfgang Löhr for pointing this out to me.

  5. This is a generalization of Eq. (3) from Tononi et al. (1994), which covers the bipartite case, to the multipartite case.

  6. This property is related to a property that can be proven for graphical models; see e.g. Proposition 2.1 in Slonim et al. (2001).

References

  • Adami, C. (1998). Introduction to artificial life. New York: Springer.

  • Ashby, W. R. (1947). Principles of the self-organizing dynamic system. The Journal of General Psychology, 37, 125–128.

  • Ay, N., & Krakauer, D. C. (2007). Geometric robustness theory and biological networks. Theory in Biosciences, 125(2), 93–121.

  • Ay, N., & Polani, D. (2008). Information flows in causal networks. Advances in Complex Systems, 11(1), 17–41.

  • Ay, N., & Wennekers, T. (2003). Dynamical properties of strongly interacting Markov chains. Neural Networks, 16(10), 1483–1497.

  • Baas, N. A., & Emmeche, C. (1997). On emergence and explanation. Intellectica, 2(25), 67–83.

  • Bar-Yam, Y. (1997). Dynamics of complex systems. Studies in nonlinearity. Boulder: Westview Press.

  • Bennett, C. H., & Landauer, R. (1985). The fundamental limits of computation. Scientific American, 253(1), 48–56.

  • Bertschinger, N., Olbrich, E., Ay, N., & Jost, J. (2006). Autonomy: an information theoretic perspective. In Proc. workshop on artificial autonomy at Alife X, Bloomington, Indiana (pp. 7–12).

  • Comon, P. (1991). Independent component analysis. In Proc. intl. signal processing workshop on higher-order statistics, Chamrousse, France (pp. 111–120).

  • Crutchfield, J. P. (1994). The calculi of emergence: computation, dynamics, and induction. Physica D, 75, 11–54.

  • Crutchfield, J. P., & Young, K. (1989). Inferring statistical complexity. Physical Review Letters, 63, 105–108.

  • Emmeche, C., Køppe, S., & Stjernfelt, F. (2000). Levels, emergence, and three versions of downward causation. In P. B. Andersen, C. Emmeche, N. O. Finnemann, & P. V. Christiansen (Eds.), Downward causation. minds, bodies and matter (pp. 13–34). Århus: Aarhus University Press.

  • Golubitsky, M., & Stewart, I. (2003). The symmetry perspective. Basel: Birkhäuser.

  • Grassberger, P. (1986). Toward a quantitative theory of self-generated complexity. International Journal of Theoretical Physics, 25, 907–938.

  • Haken, H. (1983). Advanced synergetics. Berlin: Springer.

  • Harvey, I. (2000). The 3 Es of artificial life: emergence, embodiment and evolution. Invited talk at Artificial Life VII, 1–6 August, Portland, Oregon.

  • Helbing, D., Buzna, L., Johansson, A., & Werner, T. (2005). Self-organized pedestrian crowd dynamics: experiments, simulations, and design solutions. Transportation Science, 39(1), 1–24.

  • Hoyle, R. (2006). Pattern formation. Cambridge: Cambridge University Press.

  • Jetschke, G. (1989). Mathematik der Selbstorganisation. Braunschweig: Vieweg.

  • Klyubin, A. S., Polani, D., & Nehaniv, C. L. (2004). Organization of the information flow in the perception-action loop of evolved agents. In Proceedings of 2004 NASA/DoD conference on evolvable hardware (pp. 177–180). Los Alamitos: IEEE Computer Society.

  • Klyubin, A. S., Polani, D., & Nehaniv, C. L. (2005). Empowerment: a universal agent-centric measure of control. In Proc. IEEE congress on evolutionary computation (CEC 2005), Edinburgh, Scotland, 2–5 September 2005 (pp. 128–135). New York: IEEE.

  • Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5, 183–191.

  • Mees, A. I. (1981). Dynamics of feedback systems. New York: Wiley.

  • Meinhardt, H. (1972). A theory of biological pattern formation. Kybernetik, 12, 30–39.

  • Meinhardt, H. (1982). Models of biological pattern formation. San Diego: Academic Press.

  • Pask, G. (1960). The natural history of networks. In M. C. Yovits & S. Cameron (Eds.), Computer science and technology and their application. Self-organizing systems—proceedings of an interdisciplinary conference, 5–6 May 1959 (pp. 5–6). New York: Pergamon.

  • Polani, D. (2003). Measuring self-organization via observers. In W. Banzhaf, T. Christaller, J. Ziegler, P. Dittrich, J. T. Kim, H. Lange, T. Martinetz, & F. Schweitzer (Eds.), Advances in artificial life. Proc. 7th European conference on artificial life, Dortmund, 14–17 September. Berlin: Springer.

  • Polani, D. (2004). Defining emergent descriptions by information preservation. InterJournal Complex Systems, 1102.

  • Polani, D. (2006). Emergence, intrinsic structure of information, and agenthood. InterJournal Complex Systems, 1937.

  • Prigogine, I., & Nicolis, G. (1977). Self-organization in non-equilibrium systems: from dissipative structures to order through fluctuations. New York: Wiley.

  • Prokopenko, M., Gerasimov, V., & Tanev, I. (2006). Evolving spatiotemporal coordination in a modular robotic system. In S. Nolfi, G. Baldassarre, R. Calabretta, J. C. T. Hallam, D. Marocco, J.-A. Meyer, O. Miglino, & D. Parisi (Eds.), Lecture notes in computer science: Vol. 4095. From animals to animats 9: 9th international conference on the simulation of adaptive behavior (SAB 2006), Rome, Italy (pp. 558–569). Berlin: Springer.

  • Rasmussen, S., Baas, N., Mayer, B., Nilsson, M., & Olesen, M. W. (2001). Ansatz for dynamical hierarchies. Artificial Life, 7, 329–353.

  • Reichl, L. (1980). A modern course in statistical physics. Austin: University of Texas Press.

  • Shalizi, C. R. (2001). Causal architecture, complexity and self-organization in time series and cellular automata. PhD thesis, University of Wisconsin-Madison.

  • Shalizi, C. R., & Crutchfield, J. P. (2002). Information bottlenecks, causal states, and statistical relevance bases: how to represent relevant information in memoryless transduction. Advances in Complex Systems, 5(1), 91–95.

  • Shalizi, C. R., Shalizi, K. L., & Haslinger, R. (2004). Quantifying self-organization with optimal predictors. Physical Review Letters, 93(11), 118701.

  • Slonim, N., Friedman, N., & Tishby, N. (2001). Agglomerative multivariate information bottleneck. In Neural information processing systems (NIPS 01), La Jolla (pp. 929–936).

  • Slonim, N., Atwal, G. S., Tkačik, G., & Bialek, W. (2005). Estimating mutual information and multi-information in large networks. arXiv:cs.IT/0502017.

  • Spitzner, A., & Polani, D. (1998). Order parameters for self-organizing maps. In L. Niklasson, M. Bodén, & T. Ziemke (Eds.), Proc. of the 8th int. conf. on artificial neural networks (ICANN 98), Skövde, Sweden (Vol. 2, pp. 517–522). Berlin: Springer.

  • Tishby, N., Pereira, F. C., & Bialek, W. (1999). The information bottleneck method. In Proc. 37th annual Allerton conference on communication, control and computing, Urbana-Champaign, IL.

  • Tononi, G., Sporns, O., & Edelman, G. M. (1994). A measure for brain complexity: relating functional segregation and integration in the nervous system. Proceedings of the National Academy of Sciences of the United States of America, 91, 5033–5037.

  • Turing, A. M. (1952). The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237, 37–72.

  • Walter, W. G. (1951). A machine that learns. Scientific American, 185(2), 60–63.

  • Yovits, M. C. & Cameron, S. (Eds.) (1960). Computer science and technology and their application. Self-organizing systems—proceedings of an interdisciplinary conference, 5–6 May 1959. New York: Pergamon.

Author information

Correspondence to Daniel Polani.

Appendix: Proof of Relation Between Fine and Coarse-Grained Multi-Information

Proof

First, note that a different way to write the composite random variables \(\Tilde X_{j}\) is \(\Tilde X_{j} = (X_{k_{j-1}+1},\dots,X_{k_{j}})\) for \(j=1\dots\Tilde k\), giving

$$ H(\Tilde X_j) = H(X_{k_{j-1}+1},\dots,X_{k_j})\;. $$
(2.6)

Similarly, the joint random variable \((\Tilde X_{1},\dots,\Tilde X_{\Tilde k})\) consisting of the composite random variables \(\Tilde X_{j}\) can be seen as a regrouping of the elementary random variables \(X_1,\dots,X_k\). Therefore the joint random variable constructed from the \(\Tilde X_{j}\) and that constructed from the \(X_i\) both have the same entropy:

$$ H(\Tilde X_1,\dots,\Tilde X_{\Tilde k}) = H(X_1,\dots,X_k)\;. $$
(2.7)

For consistency of notation, write \(k_0 = 0\) and \(k_{\Tilde k} = k\). One then obtains

$$ \begin{aligned} I(\Tilde X_1,\dots,\Tilde X_{\Tilde k}) + \sum_{j=1}^{\Tilde k} I(X_{k_{j-1}+1},\dots,X_{k_j}) &= \sum_{j=1}^{\Tilde k} H(\Tilde X_j) - H(\Tilde X_1,\dots,\Tilde X_{\Tilde k}) + \sum_{j=1}^{\Tilde k}\Biggl[\,\sum_{i=k_{j-1}+1}^{k_j} H(X_i) - H(X_{k_{j-1}+1},\dots,X_{k_j})\Biggr] \\ &= \sum_{i=1}^{k} H(X_i) + \sum_{j=1}^{\Tilde k}\bigl[ H(\Tilde X_j) - H(X_{k_{j-1}+1},\dots,X_{k_j}) \bigr] - H(X_1,\dots,X_k)\;, \end{aligned} $$

where the first term results from a regrouping of summands, the second term vanishes by Eq. (2.6), and the third results from rewriting the whole set of random variables from the coarse-grained to the fine-grained notation via Eq. (2.7), thus giving

$$ I(\Tilde X_1,\dots,\Tilde X_{\Tilde k}) + \sum_{j=1}^{\Tilde k} I(X_{k_{j-1}+1},\dots,X_{k_j}) = \sum_{i=1}^{k} H(X_i) - H(X_1,\dots,X_k) = I(X_1,\dots,X_k)\;, $$

which proves the equation. □
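
As a numerical sanity check of the relation just proven, the following Python sketch (illustrative only; the random joint distribution, the chosen grouping and all function names are my own assumptions) verifies that the fine-grained multi-information equals the multi-information between the composite variables plus the multi-informations within each group.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy (in bits) of a (possibly multi-dimensional) pmf array."""
    p = p.ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def marginal(joint, axes):
    """Marginal pmf of the variables sitting on the given axes."""
    drop = tuple(a for a in range(joint.ndim) if a not in axes)
    return joint.sum(axis=drop)

def multi_information(joint, groups=None):
    """Multi-information; `groups` partitions the axes into composite variables."""
    if groups is None:
        groups = [(i,) for i in range(joint.ndim)]
    return sum(entropy(marginal(joint, g)) for g in groups) - entropy(joint)

# Random joint distribution over k = 4 binary variables (assumed toy example).
joint = rng.random((2, 2, 2, 2))
joint /= joint.sum()

# Coarse-graining into two composite variables: (X_1, X_2) and (X_3, X_4).
groups = [(0, 1), (2, 3)]

fine = multi_information(joint)
coarse = multi_information(joint, groups)
within = sum(multi_information(marginal(joint, g)) for g in groups)

print(fine, coarse + within)  # the two values agree up to floating-point error
```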

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Polani, D. (2013). Foundations and Formalizations of Self-Organization. In: Prokopenko, M. (eds) Advances in Applied Self-Organizing Systems. Advanced Information and Knowledge Processing. Springer, London. https://doi.org/10.1007/978-1-4471-5113-5_2

  • DOI: https://doi.org/10.1007/978-1-4471-5113-5_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-5112-8

  • Online ISBN: 978-1-4471-5113-5
