
Feature Model-Guided Online Reinforcement Learning for Self-Adaptive Services

  • Conference paper
  • Service-Oriented Computing (ICSOC 2020)

Abstract

A self-adaptive service can maintain its QoS requirements in the presence of dynamic environment changes. To develop a self-adaptive service, service engineers have to create self-adaptation logic encoding when the service should execute which adaptation actions. However, developing self-adaptation logic may be difficult due to design-time uncertainty; e.g., anticipating all potential environment changes at design time is in most cases infeasible. Online reinforcement learning addresses design-time uncertainty by learning suitable adaptation actions through interactions with the environment at runtime. To learn more about its environment, reinforcement learning has to select actions that were not selected before, which is known as exploration. How exploration happens has an impact on the performance of the learning process. We focus on two problems related to how a service’s adaptation actions are explored: (1) Existing solutions randomly explore adaptation actions and thus may exhibit slow learning if there are many possible adaptation actions to choose from. (2) Existing solutions are unaware of service evolution, and thus may explore new adaptation actions introduced during such evolution rather late. We propose novel exploration strategies that use feature models (from software product line engineering) to guide exploration in the presence of many adaptation actions and in the presence of service evolution. Experimental results for a self-adaptive cloud management service indicate an average speed-up of the learning process of 58.8% in the presence of many adaptation actions, and of 61.3% in the presence of service evolution. The improved learning performance in turn led to an average QoS improvement of 7.8% and 23.7%, respectively.
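The paper's concrete exploration strategies are not reproduced on this page. The following is a minimal, hypothetical sketch of the general idea described in the abstract: a tabular Q-learning agent whose exploration is restricted to adaptation actions corresponding to valid feature-model configurations, and which tries actions newly introduced by service evolution first. The class name, method names, and example configurations are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict


class FeatureModelGuidedAgent:
    """Sketch: Q-learning with feature-model-guided, evolution-aware exploration.
    Actions are hashable representations of valid feature configurations,
    e.g. tuples of selected feature names (hypothetical encoding)."""

    def __init__(self, valid_configs, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.actions = list(valid_configs)   # adaptation actions = valid configurations
        self.new_actions = set()             # actions introduced by service evolution
        self.q = defaultdict(float)          # Q-values keyed by (state, action)
        self.epsilon = epsilon               # exploration rate
        self.alpha = alpha                   # learning rate
        self.gamma = gamma                   # discount factor

    def evolve(self, new_valid_configs):
        """React to service evolution: diff old and new action sets and
        remember the newly introduced actions so they are explored early."""
        added = set(new_valid_configs) - set(self.actions)
        self.actions = list(new_valid_configs)
        self.new_actions |= added

    def select_action(self, state):
        # Evolution-aware exploration: prefer actions the learner has never tried.
        if self.new_actions:
            return self.new_actions.pop()
        # Epsilon-greedy exploration, restricted to feature-model-valid actions.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)]
        )


# Illustrative usage with made-up configurations and states:
agent = FeatureModelGuidedAgent(valid_configs=[("vm_small",), ("vm_large",)])
a = agent.select_action(state="high_load")
agent.update("high_load", a, reward=0.7, next_state="normal_load")
agent.evolve([("vm_small",), ("vm_large",), ("vm_large", "autoscaling")])
```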

Notes

  1. https://sourceforge.net/p/vm-alloc/task_vm_pm.

  2. https://git.uni-due.de/online-reinforcement-learning/icsoc-2020-artefacts.

Acknowledgments

We cordially thank Amir Molzam Sharifloo for constructive discussions during the conception of initial ideas, as well as Alexander Palm for his comments on earlier drafts. Our research received funding from the EU’s Horizon 2020 R&I programme under grants 780351 (ENACT) and 871525 (FogProtect).

Author information

Correspondence to Andreas Metzger.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Metzger, A., Quinton, C., Mann, Z.Á., Baresi, L., Pohl, K. (2020). Feature Model-Guided Online Reinforcement Learning for Self-Adaptive Services. In: Kafeza, E., Benatallah, B., Martinelli, F., Hacid, H., Bouguettaya, A., Motahari, H. (eds) Service-Oriented Computing. ICSOC 2020. Lecture Notes in Computer Science, vol 12571. Springer, Cham. https://doi.org/10.1007/978-3-030-65310-1_20

  • DOI: https://doi.org/10.1007/978-3-030-65310-1_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65309-5

  • Online ISBN: 978-3-030-65310-1

  • eBook Packages: Computer Science, Computer Science (R0)
