Abstract
In sequential decision making, an agent is programmed through reward and punishment: it learns to map situations to actions so as to maximize the cumulative reward it receives. Such an agent is also known as a decision maker. In a critical system such as insulin pump control, it is difficult to decide what kind and quantity of insulin dose to give a diabetic patient. This paper implements the Q-learning algorithm on diabetes data streams. The approach helps classify the data for insulin dosing and supports decisions about the kind and quantity of insulin dose to administer by generating various rules.
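The abstract's idea of mapping situations to dose decisions via Q-learning can be sketched as follows. The paper does not specify its state space, action set, or reward function, so the glucose bands, dose actions, rewards, and hyperparameters below are purely illustrative assumptions, not the authors' design:

```python
import random

# Hypothetical discretization (NOT from the paper): blood-glucose bands as
# states and insulin-dose choices as actions.
STATES = ["hypo", "normal", "hyper"]
ACTIONS = ["no_dose", "low_dose", "high_dose"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table initialized to zero for every (state, action) pair.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy simulated environment (an assumption for illustration): a low dose
# reliably returns the patient to the "normal" band; other actions do not.
random.seed(0)
state = "hyper"
for _ in range(500):
    action = choose_action(state)
    next_state = "normal" if action == "low_dose" else random.choice(STATES)
    reward = 1.0 if next_state == "normal" else -1.0
    update(state, action, reward, next_state)
    state = next_state
```

After training, the greedy policy derived from the Q-table (picking `argmax_a Q(s, a)` in each glucose band) plays the role of the dosing rules the paper describes; a real deployment would learn from patient data streams rather than a toy simulator.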
Copyright information
© 2015 Springer India
About this paper
Cite this paper
Patil, P., Kulkarni, P., Shirsath, R. (2015). Sequential Decision Making Using Q Learning Algorithm for Diabetic Patients. In: Suresh, L., Dash, S., Panigrahi, B. (eds) Artificial Intelligence and Evolutionary Algorithms in Engineering Systems. Advances in Intelligent Systems and Computing, vol 324. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2126-5_35
Publisher Name: Springer, New Delhi
Print ISBN: 978-81-322-2125-8
Online ISBN: 978-81-322-2126-5
eBook Packages: Engineering (R0)