Abstract
Continuous technology progress is fueling the delivery of new, less expensive IoT components, offering a wide range of options for the Smart Home. Although most components are easy to integrate, achieving an optimal configuration that prioritizes environment-wide goals over individual device strategies is a complex task that normally requires manual fine-tuning. The objective of this work is to propose an architecture model that integrates reinforcement learning capabilities into a Smart Home environment. To ensure the completeness of the solution, a set of architecture requirements was elicited. The proposed architecture extends the IoT Architecture Reference Model (ARM) with specific components that coordinate the learning effort, data governance, and general orchestration. Besides confirming that the architecture requirements were fulfilled, a simulation tool was developed to test the learning capabilities of a system instantiated from the proposed architecture. After 6.4 million execution cycles, it was verified that the system was able to learn in most configurations. Unexpectedly, the results show very similar performance for collaborative and competitive environments, suggesting that a wider range of agent scenarios should be tested as an extension of this work to confirm or contest the Q-Learning hypothesis.
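The abstract refers to Q-Learning agents learning device behavior in a simulated Smart Home. The paper's own simulation tool is not reproduced here; as a minimal illustrative sketch only, the snippet below shows tabular Q-Learning (Watkins, 1989) on a hypothetical single-agent thermostat task. The environment, state space, set-point, and all parameter values are assumptions for illustration, not taken from the paper.

```python
import random

# Hypothetical toy environment: a thermostat agent nudges room temperature
# toward a comfort set-point. States are integer temperatures 15..25 degrees C;
# actions are "heat" (+1 degree) and "off" (-1 degree). All names and values
# here are illustrative assumptions.
TEMPS = list(range(15, 26))
ACTIONS = ["heat", "off"]
TARGET = 21

def step(temp, action):
    """Apply an action and return (next_temp, reward)."""
    nxt = min(max(temp + (1 if action == "heat" else -1), 15), 25)
    # Reward peaks at the set-point and falls off with distance from it.
    return nxt, -abs(nxt - TARGET)

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}
    for _ in range(episodes):
        temp = rng.choice(TEMPS)
        for _ in range(20):  # bounded episode length
            # Epsilon-greedy exploration: occasionally try a random action.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(temp, a)])
            nxt, reward = step(temp, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-Learning update rule.
            q[(temp, action)] += alpha * (reward + gamma * best_next - q[(temp, action)])
            temp = nxt
    return q

q = train()
# The learned greedy policy: heat below the set-point, switch off above it.
policy = {t: max(ACTIONS, key=lambda a: q[(t, a)]) for t in TEMPS}
```

Extending this to the collaborative and competitive multiagent settings discussed in the paper would mean running several such learners whose actions affect a shared environment, which is precisely where the coordination components of the proposed architecture come into play.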
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Rivas, M., Giorno, F. (2019). A Reinforcement Learning Multiagent Architecture Prototype for Smart Homes (IoT). In: Arai, K., Bhatia, R., Kapoor, S. (eds) Proceedings of the Future Technologies Conference (FTC) 2018. FTC 2018. Advances in Intelligent Systems and Computing, vol 880. Springer, Cham. https://doi.org/10.1007/978-3-030-02686-8_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-02685-1
Online ISBN: 978-3-030-02686-8
eBook Packages: Intelligent Technologies and Robotics (R0)