Parallelizing Parallel Rollout Algorithm for Solving Markov Decision Processes

  • Seon Wook Kim
  • Hyeong Soo Chang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2716)

Abstract

Parallel rollout is a formal method for combining multiple heuristic policies available to a sequential decision maker in the framework of Markov Decision Processes (MDPs). The method improves upon the performance of all of the heuristic policies by adapting to the stochastic system trajectory actually realized. Exploiting the inherent multi-level parallelism of the method, in this paper we propose a parallelized version of the parallel rollout algorithm and evaluate its performance on a multi-class task scheduling problem using the OpenMP and MPI programming models. We analyze and compare the performance of the two parallelized codes, OpenMP and MPI, in several execution environments, and show that the OpenMP version achieves higher performance than the MPI version due to its lower overhead for data synchronization across processors.
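
The parallelism referred to above lends itself to a simple illustration: each (candidate action, base policy) pair requires only independent Monte Carlo trajectory simulations, so the value-estimation loop maps directly onto a single OpenMP parallel region. The C sketch below is hypothetical and not taken from the paper; the policy, action, and trajectory counts and the simulate_policy() cost model are placeholder assumptions.

    #include <float.h>
    #include <stdio.h>

    #define NUM_POLICIES 4   /* base heuristic policies (placeholder count) */
    #define NUM_ACTIONS  8   /* candidate actions at the current decision epoch */
    #define NUM_TRAJ     64  /* sampled trajectories per (policy, action) pair */

    /* Toy stand-in for one trajectory simulation; a real version would step
     * the multi-class task scheduling model under base policy p after taking
     * action a, and return the accumulated reward. */
    static double simulate_policy(int p, int a, int t)
    {
        unsigned x = 2654435761u * (unsigned)p + 40503u * (unsigned)a
                   + 9973u * (unsigned)t + 1u;
        x ^= x >> 13; x *= 0x5bd1e995u; x ^= x >> 15;
        return (double)(x % 1000u) / 1000.0;  /* pseudo-random reward in [0,1) */
    }

    /* One parallel rollout step: estimate the value of each candidate action
     * under every base policy by Monte Carlo simulation, then pick the action
     * whose best base-policy estimate is largest. The (action, policy) pairs
     * are independent, so a single collapsed OpenMP loop covers the
     * multi-level parallelism the abstract mentions. */
    static int parallel_rollout_action(void)
    {
        double q[NUM_ACTIONS][NUM_POLICIES];

        #pragma omp parallel for collapse(2) schedule(dynamic)
        for (int a = 0; a < NUM_ACTIONS; a++)
            for (int p = 0; p < NUM_POLICIES; p++) {
                double sum = 0.0;
                for (int t = 0; t < NUM_TRAJ; t++)
                    sum += simulate_policy(p, a, t);
                q[a][p] = sum / NUM_TRAJ;     /* sample-mean value estimate */
            }

        int best_a = 0;
        double best_v = -DBL_MAX;
        for (int a = 0; a < NUM_ACTIONS; a++)     /* cheap serial reduction */
            for (int p = 0; p < NUM_POLICIES; p++)
                if (q[a][p] > best_v) { best_v = q[a][p]; best_a = a; }
        return best_a;
    }

    int main(void)
    {
        printf("greedy action: %d\n", parallel_rollout_action());
        return 0;
    }

Compiled with, e.g., gcc -fopenmp, the estimation loop spreads the (action, policy) pairs across threads that share q[][] directly. An MPI variant would instead partition those pairs across ranks and collect the estimates with a collective such as MPI_Gather or MPI_Reduce before the final argmax, a plausible source of the extra data-synchronization overhead that the abstract attributes to the MPI version.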

Keywords

Scheduling Problem · Markov Decision Process · Earliest Deadline First · Load Imbalance · Parallel Region

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Seon Wook Kim¹
  • Hyeong Soo Chang²

  1. Advanced Computer Systems Laboratory, Department of Electronics and Computer Engineering, Korea University, Seoul, Korea
  2. Department of Computer Science and Engineering, Sogang University, Seoul, Korea
