Reward Based Congruences: Can We Aggregate More?
In this paper we extend a performance measure sensitive Markovian bisimulation congruence, previously defined in the literature and based on yield and bonus rewards, in order to aggregate more states and transitions while preserving compositionality and the values of the performance measures. The extension is twofold. First, we show how to define a performance measure sensitive Markovian bisimulation congruence that aggregates bonus rewards besides yield rewards. This is achieved by taking into account, in the aggregation process, the conditional execution probabilities of the transitions to which the bonus rewards are attached. Second, we show how to define a performance measure sensitive Markovian bisimulation congruence that allows yield rewards and bonus rewards to be used interchangeably up to suitable correcting factors, with the aim of introducing a normal form for rewards. We demonstrate that this is possible in the continuous time case, while it is not possible in the discrete time case because compositionality is lost.
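The two extensions described above can be illustrated numerically. The following is a minimal sketch, not the paper's formal definitions: it assumes that when several Markovian transitions leaving a state are lumped, the aggregate bonus reward is the average of the individual bonuses weighted by their conditional execution probabilities (rate over total rate), and that in the continuous time case a bonus reward b on a transition with rate l can be traded for a yield reward b*l, the rate acting as the correcting factor. The function names and the exact weighting scheme are assumptions for illustration only.

```python
def aggregate_bonus(transitions):
    """Lump a list of (rate, bonus) transitions leaving one state.

    Assumption (illustrative, not the paper's formal rule): the aggregate
    transition has the sum of the rates, and its bonus reward is the
    conditional-probability-weighted average of the individual bonuses,
    so the expected bonus earned per state departure is preserved.
    """
    total_rate = sum(rate for rate, _ in transitions)
    # Conditional execution probability of each transition: rate / total_rate.
    agg_bonus = sum(rate * bonus for rate, bonus in transitions) / total_rate
    return total_rate, agg_bonus


def bonus_to_yield(rate, bonus):
    """Continuous time case (assumed correcting factor): a bonus reward
    earned on each execution of a transition with rate `rate` accrues, on
    average, rate * bonus per unit of sojourn time, so it can be replaced
    by a yield reward of that value.
    """
    return rate * bonus


# Two transitions: rates 2.0 and 3.0, bonuses 5.0 and 10.0.
rate, bonus = aggregate_bonus([(2.0, 5.0), (3.0, 10.0)])
print(rate, bonus)               # 5.0 8.0  (= (2*5 + 3*10) / 5)
print(bonus_to_yield(2.0, 5.0))  # 10.0
```

Note that the weighted average keeps the expected reward invariant: before lumping, a departure earns bonus 5.0 with probability 2/5 and 10.0 with probability 3/5, i.e. 8.0 on average, exactly the aggregate bonus.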
Keywords: Composition Operator, Operational Semantics, Priority Level, Time Case, Passive Action