© 2013

Simulation-Based Algorithms for Markov Decision Processes


Part of the Communications and Control Engineering book series (CCE)

Table of contents

  All chapters are by Hyeong Soo Chang, Jiaqiao Hu, Michael C. Fu, Steven I. Marcus.

  1. Front Matter (Pages I-XVII)
  2. Chapter 1 (Pages 1-17)
  3. Chapter 2 (Pages 19-60)
  4. Chapter 3 (Pages 61-87)
  5. Chapter 4 (Pages 89-177)
  6. Chapter 5 (Pages 179-218)
  7. Back Matter (Pages 219-229)

About this book


Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, giving rise to the well-known curse of dimensionality and making practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search.
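
As a quick illustration of the simulation-based setting described above, the following minimal Python sketch estimates Q-values for an MDP purely from simulator samples, without explicit transition probabilities. The two-state toy simulator step and all parameter values are illustrative assumptions, not material from the book:

    import random

    def step(state, action):
        # Toy stochastic simulator: returns a (next_state, cost) sample.
        # In the simulation-based setting, only such samples are
        # available, not the underlying transition probabilities.
        next_state = random.choice([0, 1])
        cost = abs(state - action) + random.random()
        return next_state, cost

    def estimate_q(state, action, value, gamma=0.9, num_samples=100):
        # Monte Carlo estimate of Q(s, a) = E[cost + gamma * V(s')].
        total = 0.0
        for _ in range(num_samples):
            next_state, cost = step(state, action)
            total += cost + gamma * value[next_state]
        return total / num_samples

    # Greedy (cost-minimizing) action in state 0 under a given value table.
    value = {0: 0.0, 1: 0.0}
    best_action = min([0, 1], key=lambda a: estimate_q(0, a, value))
    print("greedy action in state 0:", best_action)
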
This substantially enlarged new edition reflects the latest developments in novel algorithms and their underpinning theories, and presents an updated account of the topics that have emerged since the publication of the first edition. It includes:
• innovative material on MDPs, both in constrained settings and with uncertain transition properties;
• a game-theoretic method for solving MDPs;
• theories for developing roll-out based algorithms (see the sketch after this list); and
• details of approximate stochastic annealing, a population-based on-line simulation-based algorithm.
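
As referenced in the list above, here is a minimal rollout sketch in Python. It is an illustrative assumption rather than the book's algorithm: a given base policy is improved by simulating it over a finite horizon from each candidate action and acting greedily on the averaged cost estimates. The toy simulator step mirrors the one in the earlier sketch:

    import random

    def step(state, action):
        # Same toy simulator as in the earlier sketch.
        next_state = random.choice([0, 1])
        cost = abs(state - action) + random.random()
        return next_state, cost

    def rollout_cost(state, action, base_policy, horizon=20, gamma=0.9):
        # Take `action` first, then follow base_policy for `horizon`
        # steps, accumulating discounted cost along the trajectory.
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            state, cost = step(state, action)
            total += discount * cost
            discount *= gamma
            action = base_policy(state)
        return total

    def rollout_policy(state, base_policy, actions=(0, 1), num_traj=50):
        # One-step rollout: average several simulated trajectories per
        # action, then pick the action with the lowest estimated cost.
        def avg_cost(a):
            return sum(rollout_cost(state, a, base_policy)
                       for _ in range(num_traj)) / num_traj
        return min(actions, key=avg_cost)

    base = lambda s: 0  # trivial base policy: always choose action 0
    print("rollout action in state 1:", rollout_policy(1, base))
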
The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of tuition and reference for students of control and operations research.

The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.


Controlled Markov Chains · Markov Decision Processes · Simulation-based Algorithms · Stochastic Dynamic Programming · Stochastic Modeling

Authors and affiliations

  1. Dept. of Computer Science & Engineering, Sogang University, Seoul, Republic of Korea (South Korea)
  2. Dept. of Applied Mathematics & Statistics, State University of New York, Stony Brook, USA
  3. Smith School of Business, University of Maryland, College Park, USA
  4. Dept. of Electrical & Computer Engineering, University of Maryland, College Park, USA

About the authors

Hyeong Soo Chang (SM’07 of the IEEE, Member of INFORMS) received the B.S. and M.S. degrees in electrical engineering and the Ph.D. degree in electrical and computer engineering, all from Purdue University, West Lafayette, IN, in 1994, 1996, and 2001, respectively. Since 2003, he has been with the Department of Computer Science and Engineering, Sogang University, Seoul, Korea, where he is now an Associate Professor. He has published about 30 journal papers on MDPs and related topics. His main research interests include Markov decision processes, Markov games, computational learning theory, computational intelligence, and stochastic optimization. He currently serves as an Associate Editor for the IEEE Transactions on Automatic Control.
Jiaqiao Hu (M’11 of the IEEE, Member of INFORMS) received the B.S. degree in automation from Shanghai Jiao Tong University, Shanghai, China, in 1997, the M.S. degree in applied mathematics from the University of Maryland, Baltimore County, in 2001, and the Ph.D. degree in electrical engineering from the University of Maryland, College Park, in 2006. Since 2006, he has been with the Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, where he is currently an Assistant Professor. His research interests include Markov decision processes, simulation-based optimization, global optimization, applied probability, and stochastic modeling and analysis.
Michael Fu (Fellow of the IEEE, Member of INFORMS) received his Ph.D. and M.S. degrees in applied mathematics from Harvard University in 1989 and 1986, respectively. He received S.B. and S.M. degrees in electrical engineering and an S.B. degree in mathematics from the Massachusetts Institute of Technology in 1985. Since 1989, he has been at the University of Maryland, College Park, in the College of Business and Management. He was the Simulation Area Editor of Operations Research, is an Associate Editor for Management Science, and has served on the editorial boards of the INFORMS Journal on Computing, Production and Operations Management, and IIE Transactions. He was on the program committee for the Spring 1996 INFORMS National Meeting, in charge of contributed papers. In 1995, he received the Maryland Business School's annual Allen J. Krowe Award for Teaching Excellence. He is the co-author (with Jian-Qiang Hu) of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications (ISBN 0-7923-9873-4, 1997), which received the 1998 INFORMS College on Simulation Outstanding Publication Award. Other awards include the 1999 IIE Operations Research Division Award and a 1998 IIE Transactions Best Paper Award. In 2002, he received ISR's Outstanding Systems Engineering Faculty Award. He currently serves as a director of the National Science Foundation's Operations Research Program. Dr. Fu's research interests lie in the areas of stochastic derivative estimation and simulation optimization of discrete-event systems, particularly with applications to manufacturing systems, inventory control, and the pricing of financial derivatives.
Steven I. Marcus (Fellow of the IEEE, Fellow of SIAM, Member of INFORMS) received his Ph.D. and S.M. from the Massachusetts Institute of Technology in 1975 and 1972, respectively. He received a B.A. from Rice University in 1971. From 1975 to 1991, he was with the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he was the L.B. (Preach) Meaders Professor in Engineering. He was Associate Chairman of the Department during the period 1984-89. In 1991, he joined the University of Maryland, College Park, where he was Director of the Institute for Systems Research until 1996. He is currently a Professor in the Electrical Engineering Department and the Institute for Systems Research. He has served as an Editor of the SIAM Journal on Control and Optimization, and Associate Editor of Mathematics of Control, Signals, and Systems, Journal on Discrete Event Dynamic Systems, and Acta Applicandae Mathematicae. He has authored or co-authored more than 100 articles, conference proceedings, and book chapters. Dr. Marcus's research interests lie in the areas of control and systems engineering, analysis and control of stochastic systems, Markov decision processes, stochastic and adaptive control, learning, fault detection, and discrete event systems, with applications in manufacturing, acoustics, and communication networks.



From the book reviews:

“The book consists of five chapters. … This well-written book is addressed to researchers in MDPs and applied modeling with an interest in numerical computations, but the book is also accessible to graduate students in operations research, computer science, and economics. The authors give many pseudocodes of algorithms, numerical examples, convergence analyses of algorithms, and bibliographical notes that can be very helpful for readers to understand the ideas presented in the book and to perform experiments on their own.” (Wiesław Kotarski, zbMATH, Vol. 1293, 2014)