
Information-Theoretic Measures for the Non-Markovian Case

Chapter in: Markov Chain Aggregation for Agent-Based Models

Part of the book series: Understanding Complex Systems (UCS)

Abstract

This chapter is devoted to the study of a non-Markovian case, building on the analysis of the contrarian voter model (CVM) discussed in the previous chapter. As noted earlier, two things may happen when the microscopic Markov chain associated with an agent-based model (ABM) is projected onto a coarser partition. First, the macro process may still be a Markov chain; this is the case of lumpability discussed most extensively throughout this book. Second, Markovianity may be lost under the projection induced by a given observable, which means that memory effects are introduced at the macroscopic level. This is a fingerprint of emergence in models of self-organizing systems. Notably, in ABMs, as in Markov chains more generally, this situation is the rule rather than the exception (Chazottes and Ugalde 2003; Gurvits and Ledoux 2005; Banisch et al. 2012; Banisch 2014).
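The dichotomy in the abstract can be made concrete with a small numerical check. The sketch below is a minimal illustration with a hypothetical 3-state chain and a made-up partition (not a model from this book); it tests the strong-lumpability condition of Kemeny and Snell (1976): the projection of a Markov chain is again Markov for every initial distribution if and only if the aggregate transition probability into each block is the same from every state of a block.

```python
import numpy as np

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Kemeny-Snell test: a partition is strongly lumpable iff, for
    every pair of blocks (B, C), the total transition probability
    from a state in B into C is the same for all states of B."""
    for block in partition:
        for target in partition:
            # Row sums of the sub-matrix P[block, target]: one value
            # per state of `block`; they must all coincide.
            into_target = P[np.ix_(block, target)].sum(axis=1)
            if np.ptp(into_target) > tol:
                return False
    return True

# Hypothetical 3-state chain; the observable merges states {0, 1}.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],   # both rows enter state 2 w.p. 0.2
              [0.4, 0.4, 0.2]])
print(is_strongly_lumpable(P, [[0, 1], [2]]))  # True

# Perturbing one row breaks the condition: the projected process
# then carries memory, as in the non-Markovian case of this chapter.
Q = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.7, 0.2],
              [0.4, 0.4, 0.2]])
print(is_strongly_lumpable(Q, [[0, 1], [2]]))  # False
```

When the test fails, the macro process is not Markov for generic initial distributions, which is exactly the situation the information-theoretic measures of this chapter are designed to quantify.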


Notes

  1. This example will be explored carefully in future work. For now, notice that the VM on the ring leads to a long-lasting pattern of a single white and a single black regime; further change in the number of white and black agents only happens when an edge at the interface between the two regimes is chosen. The respective probability is equal for all micro configurations of this kind, and other (disordered) configurations are not visited once such a situation has been reached. See also Fig. 1.1 in the Introduction.

  2. Notice that the number of possibilities is reduced at the corners or borders of the meso chain, i.e., whenever m = 0 or l = 0.

  3. This is to avoid a possible ambiguity: the probability \(Pr[X_{k}\vert \tilde{X}_{m,l}]\) could also be read in terms of the projection from \(\tilde{\mathbf{X}}\) to X, where \(Pr[X_{k}\vert \tilde{X}_{m,l}]\) would indicate the probability with which the meso state \(\tilde{X}_{m,l}\) is taken by ϕ to the macro state \(X_{k}\).
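The ring-topology voter model of note 1 is easy to simulate. The sketch below is a hypothetical minimal implementation (the function names are mine, not from the book): it counts the interface edges of a configuration, and a short run from a two-domain start illustrates the note's point that the number of black agents can only change in steps where one of the (equally probable) interface edges is selected.

```python
import random

def interfaces(state):
    """Edges of the ring whose endpoints disagree.  These are exactly
    the edges whose selection can change the number of black agents,
    and on a ring their count is always even."""
    n = len(state)
    return sum(state[i] != state[(i + 1) % n] for i in range(n))

def vm_ring_step(state, rng):
    """One voter-model update: pick a random ring edge (i, i+1) and
    copy the opinion of one randomly chosen endpoint onto the other."""
    n = len(state)
    i = rng.randrange(n)
    j = (i + 1) % n
    if rng.random() < 0.5:
        state[j] = state[i]
    else:
        state[i] = state[j]

rng = random.Random(0)
state = [1] * 5 + [0] * 5       # one black and one white domain
print(interfaces(state))         # 2: only two edges can change the counts
for _ in range(1000):
    vm_ring_step(state, rng)
    assert interfaces(state) in (0, 2)  # the two-domain pattern persists
```

Updates never create new domains, so from a two-domain configuration the interface count stays at 2 until the two interfaces meet and annihilate into consensus (count 0).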

References

  • Ball, R. C., Diakonova, M., & Mackay, R. S. (2010). Quantifying emergence in terms of persistent mutual information. Advances in Complex Systems, 13(3), 327–338.

  • Banisch, S. (2014). From microscopic heterogeneity to macroscopic complexity in the contrarian voter model. Advances in Complex Systems, 17, 1450025.

  • Banisch, S., Lima, R., & Araújo, T. (2012). Agent based models and opinion dynamics as Markov chains. Social Networks, 34, 549–561.

  • Buchholz, P. (1994). Exact and ordinary lumpability in finite Markov chains. Journal of Applied Probability, 31(1), 59–75.

  • Burke, C. J., & Rosenblatt, M. (1958). A Markovian function of a Markov chain. The Annals of Mathematical Statistics, 29(4), 1112–1122.

  • Chazottes, J.-R., Floriani, E., & Lima, R. (1998). Relative entropy and identification of Gibbs measures in dynamical systems. Journal of Statistical Physics, 90(3–4), 697–725.

  • Chazottes, J.-R., & Ugalde, E. (2003). Projection of Markov measures may be Gibbsian. Journal of Statistical Physics, 111(5–6), 1245–1272.

  • Darroch, J. N., & Seneta, E. (1965). On quasi-stationary distributions in absorbing discrete-time finite Markov chains. Journal of Applied Probability, 2(1), 88–100.

  • Görnerup, O., & Jacobi, M. N. (2008). A method for inferring hierarchical dynamics in stochastic processes. Advances in Complex Systems, 11(1), 1–16.

  • Görnerup, O., & Jacobi, M. N. (2010). A method for finding aggregated representations of linear dynamical systems. Advances in Complex Systems, 13(2), 199–215.

  • Gurvits, L., & Ledoux, J. (2005). Markov property for a function of a Markov chain: A linear algebra approach. Linear Algebra and its Applications, 404, 85–117.

  • Jacobi, M. N., & Görnerup, O. (2009). A spectral method for aggregating variables in linear dynamical systems with application to cellular automata renormalization. Advances in Complex Systems, 12(2), 131–155.

  • James, R. G., Ellison, C. J., & Crutchfield, J. P. (2011). Anatomy of a bit: Information in a time series observation. Chaos, 21(3), 037109.

  • Kemeny, J. G., & Snell, J. L. (1976). Finite Markov chains. New York: Springer.

  • Ledoux, J., Rubino, G., & Sericola, B. (1994). Exact aggregation of absorbing Markov processes using the quasi-stationary distribution. Journal of Applied Probability, 31, 626–634.

  • Pfante, O., Bertschinger, N., Olbrich, E., Ay, N., & Jost, J. (2014a). Comparison between different methods of level identification. Advances in Complex Systems, 17, 1450007.

  • Pfante, O., Olbrich, E., Bertschinger, N., Ay, N., & Jost, J. (2014b). Closure measures for coarse-graining of the tent map. Chaos: An Interdisciplinary Journal of Nonlinear Science, 24(1), 013136.

  • Shalizi, C. R. (2001). Causal architecture, complexity and self-organization in time series and cellular automata. Doctoral dissertation, University of Wisconsin–Madison.

  • Shalizi, C. R., & Moore, C. (2003). What is a macrostate? Subjective observations and objective dynamics. arXiv:cond-mat/0303625.

  • Vilela Mendes, R., Lima, R., & Araújo, T. (2002). A process-reconstruction analysis of market fluctuations. International Journal of Theoretical and Applied Finance, 5(8), 797–821.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Banisch, S. (2016). Information-Theoretic Measures for the Non-Markovian Case. In: Markov Chain Aggregation for Agent-Based Models. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-319-24877-6_7
