Abstract
Problems TDC and DC constitute special cases of a more general class of partially observable Markov decision processes. This class of processes may be loosely characterized as follows. A DM observes a sequence of random variables {X(t): t ∈ T}, where X(t) denotes the random variable observed at time t and t ∈ T. It is assumed that X(t) has a known distribution function F_{S(t)}, which depends on the unknown state S(t) at time t. Additionally, it is assumed that {S(t): t ∈ T} is a Markov process and that, for a given value of S(t), the process X(t) consists of a sequence of independent random variables. The DM is required to make sequential decisions to optimally control (with respect to a given loss structure, an objective criterion, and well-defined constraints) the process S(t), while observing only X(t).
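The observation model described above can be sketched in a few lines of code. The following is an illustrative simulation, not from the chapter: all parameters (two states, an absorbing "post-change" state, normal observation distributions) are hypothetical choices made to show how S(t) evolves as a Markov chain while the DM sees only the state-dependent observations X(t).

```python
import random

# Hypothetical illustration of the process in the abstract:
# S(t) is a two-state Markov chain; given S(t) = s, the observation
# X(t) is drawn independently from a state-dependent distribution F_s.
# The decision maker observes only X(t), never S(t).

TRANSITION_TO_1 = {0: 0.1, 1: 1.0}  # P(S(t+1) = 1 | S(t) = s); state 1 is absorbing
MEANS = {0: 0.0, 1: 1.0}            # F_s taken as Normal(MEANS[s], 1) for this sketch

def simulate(horizon, seed=0):
    """Return the hidden state path and the observed sequence."""
    rng = random.Random(seed)
    s = 0
    states, observations = [], []
    for _ in range(horizon):
        states.append(s)
        # Given S(t) = s, X(t) is an independent draw from F_s.
        observations.append(rng.gauss(MEANS[s], 1.0))
        # S(t) evolves as a Markov chain, independently of X(t).
        s = 1 if rng.random() < TRANSITION_TO_1[s] else 0
    return states, observations
```

In a detection-of-change setting, the absorbing transition from state 0 to state 1 plays the role of the unobserved "change", and the DM's sequential decision problem is to act on the observation sequence alone.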
Rights and permissions
Copyright information
© 1979 D. Reidel Publishing Company, Dordrecht, Holland
About this chapter
Cite this chapter
Rapoport, A., Stein, W.E., Burkheimer, G.J. (1979). Extensions. In: Response Models for Detection of Change. Theory and Decision Library, vol 18. Springer, Dordrecht. https://doi.org/10.1007/978-94-009-9386-0_9
DOI: https://doi.org/10.1007/978-94-009-9386-0_9
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-009-9388-4
Online ISBN: 978-94-009-9386-0
eBook Packages: Springer Book Archive