Learning of Steady States in Nonlinear Models when Shocks Follow a Markov Chain
Local convergence results for adaptive learning of stochastic steady states in nonlinear models are extended to the case where the exogenous observable variables follow a finite Markov chain. The stability conditions for the corresponding nonstochastic model and its steady states yield convergence for the stochastic model when shocks are sufficiently small. The results are applied to asset pricing and to an overlapping generations model. Large shocks can destabilize learning even if the steady state is stable under small shocks. The relationship to stationary sunspot equilibria is also discussed.
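The adaptive learning scheme summarized above can be sketched in a minimal simulation. The model map, gain sequence, and two-state Markov chain below are illustrative assumptions for exposition, not the paper's specification: agents believe the endogenous variable is constant at φ, observe the realized outcome, and update φ by a decreasing-gain (recursive mean) rule.

```python
import random

# Hedged sketch (illustrative, not the paper's model): decreasing-gain
# learning of a stochastic steady state in the scalar model
#     y_t = H(E*_t y_{t+1}, v_t),
# where the shock v_t follows a two-state Markov chain.

def markov_step(state, P, rng):
    """Draw the next state of a 2-state Markov chain with transition matrix P."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(T=20000, seed=0):
    rng = random.Random(seed)
    P = [[0.9, 0.1], [0.1, 0.9]]      # symmetric chain, invariant dist (0.5, 0.5)
    v_vals = [0.95, 1.05]             # "small" shocks around a mean of 1
    H = lambda ye, v: 0.5 * ye + v    # example map; |dH/dye| < 1 gives stability
    phi, state = 0.0, 0               # initial belief and chain state
    for t in range(1, T + 1):
        state = markov_step(state, P, rng)
        y = H(phi, v_vals[state])     # actual outcome under beliefs phi
        phi += (y - phi) / t          # decreasing-gain recursive update
    return phi

# The associated ODE d(phi)/dtau = E[H(phi, v)] - phi = 1 - 0.5*phi is stable,
# so beliefs should converge near the fixed point phi* = E[v]/(1 - 0.5) = 2.
```

With the derivative of H below one in absolute value, the update converges; making the shock states large (e.g. `v_vals = [0.2, 1.8]` in a genuinely nonlinear H) is the kind of perturbation under which the abstract notes learning can be destabilized.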
Key words: Bounded rationality · Recursive algorithms · Steady state · Linearization · Asset pricing · Overlapping generations