Abstract
This chapter is devoted to the study of a non-Markovian case, building upon the analysis of the contrarian voter model (CVM) in the previous chapter. As noted earlier, two things may happen when the microscopic Markov chain associated with an agent-based model (ABM) is projected onto a coarser partition. First, the macro process may remain a Markov chain, which is the case of lumpability discussed most extensively throughout this book. Second, Markovianity may be lost under the projection induced by a given observable, which means that memory effects are introduced at the macroscopic level. This is a fingerprint of emergence in models of self-organizing systems. Notably, in ABMs, as in Markov chains more generally, this situation is the rule rather than the exception (Chazottes and Ugalde 2003; Gurvits and Ledoux 2005; Banisch et al. 2012; Banisch 2014).
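The dichotomy described above can be made concrete with a small sketch (not taken from the chapter; the transition matrix and partition below are hypothetical illustrations). The Kemeny–Snell condition for strong lumpability requires that, within each block of the partition, all micro states have the same total transition probability into every other block; when it fails, the projected process is in general no longer Markovian for arbitrary initial distributions.

```python
import numpy as np

# Hypothetical 3-state micro chain (illustrative, not from the chapter).
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

# Coarse-graining: micro states 0 and 1 are lumped into one macro state,
# state 2 forms the other.
partition = [[0, 1], [2]]

def is_strongly_lumpable(P, partition):
    """Kemeny-Snell test: within each block, every micro state must have
    the same total transition probability into every block."""
    for block in partition:
        for target in partition:
            block_sums = [P[i, target].sum() for i in block]
            if not np.allclose(block_sums, block_sums[0]):
                return False
    return True

# State 0 never jumps to block {2}, state 1 does with probability 0.5,
# so the projected process is not Markovian in general.
print(is_strongly_lumpable(P, partition))  # False
```

Here the failure is visible in a single pair of rows: which macro transition occurs next depends on *which* micro state inside the lumped block the chain currently occupies, i.e., on the history of the macro process.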
Notes
- 1.
This example will be explored carefully in future work. For now, just notice that the VM on the ring leads to a long-lasting pattern consisting of a single white and a single black domain, and further change in the number of white and black agents happens only when an edge at the interface between the two domains is chosen. The respective probability is equal for all micro configurations of this kind, and other (disordered) configurations are not visited once such a situation has been reached. See also Fig. 1.1 in the Introduction.
- 2.
Notice that the number of possibilities is reduced at the corners and borders of the meso chain, i.e., whenever m = 0 or l = 0.
- 3.
This is to avoid a possible ambiguity, because the probability \(Pr[X_{k}\vert \tilde{X}_{m,l}]\) could also be read in terms of the projection from \(\tilde{\mathbf{X}}\) to X, where \(Pr[X_{k}\vert \tilde{X}_{m,l}]\) would indicate the probability with which the meso state \(\tilde{X}_{m,l}\) is taken by ϕ to the macro state \(X_{k}\).
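The two-domain behaviour described in Note 1 can be sketched with a minimal simulation (illustrative only; the ring size, seed, and function names are assumptions, not from the book). Once the ring consists of a single white and a single black domain, an update can change opinions only at the two interface edges, so the number of interfaces never increases and disordered configurations are not revisited:

```python
import random

def vm_step(state):
    """One voter-model update on a ring: a random agent copies the
    opinion of a randomly chosen (left or right) neighbour."""
    n = len(state)
    i = random.randrange(n)
    j = (i + random.choice((-1, 1))) % n
    state[i] = state[j]

def domain_walls(state):
    """Number of ring edges whose endpoints disagree (interfaces)."""
    n = len(state)
    return sum(state[i] != state[(i + 1) % n] for i in range(n))

# Hypothetical two-domain configuration on a ring of 10 agents.
random.seed(1)
state = [0] * 5 + [1] * 5
for _ in range(5000):
    vm_step(state)
    # Only the two boundary edges can change anything, so the interface
    # count stays at 2 until it drops to 0 at consensus.
    assert domain_walls(state) <= 2
```

The invariant checked in the loop is the formal counterpart of the footnote's claim: starting from two domains, the wall count can only shrink, never grow.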
References
Ball, R. C., Diakonova, M., & Mackay, R. S. (2010). Quantifying emergence in terms of persistent mutual information. Advances in Complex Systems, 13(3), 327–338.
Banisch, S. (2014). From microscopic heterogeneity to macroscopic complexity in the contrarian voter model. Advances in Complex Systems, 17, 1450025.
Banisch, S., Lima, R., & Araújo, T. (2012). Agent based models and opinion dynamics as Markov chains. Social Networks, 34, 549–561.
Buchholz, P. (1994). Exact and ordinary lumpability in finite Markov chains. Journal of Applied Probability, 31(1), 59–75.
Burke, C. J., & Rosenblatt, M. (1958). A Markovian function of a Markov chain. The Annals of Mathematical Statistics, 29(4), 1112–1122.
Chazottes, J.-R., Floriani, E., & Lima, R. (1998). Relative entropy and identification of Gibbs measures in dynamical systems. Journal of Statistical Physics, 90(3–4), 697–725.
Chazottes, J.-R., & Ugalde, E. (2003). Projection of Markov measures may be Gibbsian. Journal of Statistical Physics, 111(5/6), 1245–1272.
Darroch, J. N., & Seneta, E. (1965). On quasi-stationary distributions in absorbing discrete-time finite Markov chains. Journal of Applied Probability, 2(1), 88–100.
Görnerup, O., & Jacobi, M. N. (2008). A method for inferring hierarchical dynamics in stochastic processes. Advances in Complex Systems, 11(1), 1–16.
Görnerup, O., & Jacobi, M. N. (2010). A method for finding aggregated representations of linear dynamical systems. Advances in Complex Systems, 13(2), 199–215.
Gurvits, L., & Ledoux, J. (2005). Markov property for a function of a Markov chain: A linear algebra approach. Linear Algebra and its Applications, 404, 85–117.
Jacobi, M. N., & Görnerup, O. (2009). A spectral method for aggregating variables in linear dynamical systems with application to cellular automata renormalization. Advances in Complex Systems, 12(2), 131–155.
James, R. G., Ellison, C. J., & Crutchfield, J. P. (2011). Anatomy of a bit: Information in a time series observation. Chaos, 21(3), 037109.
Kemeny, J. G., & Snell, J. L. (1976). Finite Markov chains. New York: Springer.
Ledoux, J., Rubino, G., & Sericola, B. (1994). Exact aggregation of absorbing Markov processes using the quasi-stationary distribution. Journal of Applied Probability, 31, 626–634.
Pfante, O., Bertschinger, N., Olbrich, E., Ay, N., & Jost, J. (2014a). Comparison between different methods of level identification. Advances in Complex Systems, 17, 1450007.
Pfante, O., Olbrich, E., Bertschinger, N., Ay, N., & Jost, J. (2014b). Closure measures for coarse-graining of the tent map. Chaos: An Interdisciplinary Journal of Nonlinear Science, 24(1), 013136.
Shalizi, C. R. (2001). Causal architecture, complexity and self-organization in time series and cellular automata. (Doctoral dissertation, University of Wisconsin–Madison).
Shalizi, C. R., & Moore, C. (2003). What is a Macrostate? Subjective observations and objective dynamics. In CoRR. arXiv:cond-mat/0303625.
Vilela Mendes, R., Lima, R., & Araújo, T. (2002). A process-reconstruction analysis of market fluctuations. International Journal of Theoretical and Applied Finance, 5(8), 797–821.
© 2016 Springer International Publishing Switzerland
Banisch, S. (2016). Information-Theoretic Measures for the Non-Markovian Case. In: Markov Chain Aggregation for Agent-Based Models. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-319-24877-6_7
DOI: https://doi.org/10.1007/978-3-319-24877-6_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-24875-2
Online ISBN: 978-3-319-24877-6