MDP for Query-Based Wireless Sensor Networks

  • Mihaela Mitici
Chapter
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 248)

Abstract

Increased sensor availability and growing interest in sensor monitoring have led to a significant increase in the number of sensor networks deployed over the last decade. Simultaneously, the amount of sensed data and the number of queries requesting these data have increased significantly. The challenge is to respond to queries in a timely manner and with relevant data, without having to resort to hardware upgrades or duplication. In this chapter we focus on the trade-off between the response time of queries and the freshness of the data provided. Query response time is a significant Quality of Service metric for sensor networks, especially in the case of real-time applications. Data freshness ensures that queries are answered with relevant data that closely characterize the monitored area. To model the trade-off between the two metrics, we propose a continuous-time Markov decision process with a drift, which assigns queries for processing either to a sensor network, where queries wait to be processed, or to a central database, which provides stored and possibly outdated data. To compute an optimal query assignment policy, we formulate a discrete-time, discrete-state Markov decision process that is shown to be stochastically equivalent to the initial continuous-time process. This approach provides theoretical support for the design and implementation of WSN applications, while ensuring close-to-optimum performance of the system.
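The assignment trade-off described above can be illustrated with a deliberately simplified sketch (not the chapter's exact model): queries arrive at a single sensor network modeled as a queue, and on each arrival the controller either queues the query at the WSN (fresh data, but a holding cost while it waits) or answers it from the central database (immediate, but a staleness penalty). The continuous-time chain is turned into a stochastically equivalent discrete-time one by uniformization, and the optimal policy is computed by value iteration. All rates, costs, and the truncation level below are assumed for illustration only.

```python
import numpy as np

# Assumed parameters (illustrative only, not taken from the chapter)
lam, mu = 1.0, 1.5          # query arrival rate and WSN service rate
N = 20                      # truncation of the WSN queue length
c_wait, c_stale = 1.0, 3.0  # holding cost per queued query, staleness penalty
beta = 0.99                 # discount factor for the uniformized chain

# Uniformization: embed the CTMDP into a discrete-time chain with
# uniform transition rate Lam = lam + mu
Lam = lam + mu
p_arr, p_srv = lam / Lam, mu / Lam

V = np.zeros(N + 1)         # V[n]: discounted cost with n queries queued
for _ in range(2000):       # value iteration on the uniformized DTMDP
    V_new = np.empty_like(V)
    for n in range(N + 1):
        # decision on arrival: queue at the WSN or answer from the database
        to_wsn = V[min(n + 1, N)]
        to_db = c_stale + V[n]
        arrival = min(to_wsn, to_db)
        service = V[max(n - 1, 0)]  # a queued query completes processing
        V_new[n] = c_wait * n + beta * (p_arr * arrival + p_srv * service)
    V = V_new

# Resulting assignment rule: queue at the WSN while the backlog is small,
# fall back to the (possibly outdated) database once waiting gets costly
policy = ["wsn" if V[min(n + 1, N)] <= c_stale + V[n] else "db"
          for n in range(N)]
```

Under these assumed costs the computed policy is of threshold type: fresh WSN answers are preferred at low backlog, and the database takes over once the expected waiting cost exceeds the staleness penalty.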


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Faculty of Aerospace Engineering, Air Transport and Operations, Delft University of Technology, Delft, The Netherlands