
Investigation of Impacts on Network Performance in the Advance of a Microservice Design

  • Nane Kratzke
  • Peter-Christian Quint
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 740)

Abstract

Due to their REST-based protocols, microservice architectures are inherently horizontally scalable. That might be why the microservice architectural style is getting more and more attention in cloud-native application engineering. Corresponding microservice architectures often rely on a complex technology stack that includes containers, elastic platforms and software-defined networks. Astonishingly, there are almost no specialized tools to investigate the performance impacts that come along with this microservice architectural style upfront, before a microservice design is fixed. Therefore, we propose a benchmarking solution intentionally designed for this upfront design phase. Furthermore, we evaluate our benchmark and present some performance data to reflect on some often heard cloud-native application performance rules (or myths).
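To make the measurement idea concrete, the following Python sketch illustrates the kind of REST "ping-pong" experiment that such a benchmark automates: a client posts messages of increasing size to an echo endpoint and records round-trip times. The endpoint URL, message sizes, and repetition count are illustrative assumptions and not part of ppbench itself; the actual tool works differently in detail.

```python
import time
import statistics
import urllib.request

# Hypothetical echo service used only for illustration; ppbench itself
# deploys its own ping and pong services on the systems under test.
ENDPOINT = "http://localhost:8080/echo"

def round_trip_seconds(message_size: int) -> float:
    """Send a payload of the given size and measure the full round trip."""
    payload = b"x" * message_size
    request = urllib.request.Request(ENDPOINT, data=payload)  # POST request
    start = time.perf_counter()
    with urllib.request.urlopen(request) as response:
        response.read()  # wait for the complete "pong" before stopping the clock
    return time.perf_counter() - start

def benchmark(message_sizes, repetitions=20):
    """Collect median and standard deviation of round-trip times per message size."""
    results = {}
    for size in message_sizes:
        samples = [round_trip_seconds(size) for _ in range(repetitions)]
        results[size] = (statistics.median(samples), statistics.stdev(samples))
    return results

if __name__ == "__main__":
    for size, (median, stdev) in benchmark([1_000, 10_000, 100_000]).items():
        print(f"{size:>7} bytes: median {median * 1000:.2f} ms (stdev {stdev * 1000:.2f} ms)")
```

Running the same measurement on bare metal, inside containers, and on top of an SDN overlay would expose the relative performance impacts of each layer, which is the comparison the proposed benchmark is meant to support.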

Keywords

Cloud-native application · Microservice · Container · Cluster · Elastic platform · Network · Performance · Reference · Benchmark · REST · SDN · Software-defined network

Notes

Acknowledgements

This study was funded by the German Federal Ministry of Education and Research (03FH021PX4). We thank René Peinl and his research group for their valuable feedback and for their contribution to integrating the Calico SDN into ppbench. We thank all reviewers for their valuable feedback on our initial conference paper [13], especially Bryan Boreham from Weaveworks.

References

  1. Apache Software Foundation: ab - Apache HTTP server benchmarking tool (2015). http://httpd.apache.org/docs/2.2/programs/ab.html
  2. Berkeley Lab: iPerf - the network bandwidth measurement tool (2015). https://iperf.fr
  3. Borman, D., Braden, B., Jacobson, V., Scheffenegger, R.: RFC 7323, TCP extensions for high performance (2014). https://tools.ietf.org/html/rfc7323
  4. CoreOS: Flannel (2015). https://github.com/coreos/flannel
  5. Docker Inc.: Docker (2015). https://www.docker.com
  6. Docker Inc.: Docker Swarm (2016). https://docs.docker.com/swarm/
  7. Fielding, R.T.: Architectural styles and the design of network-based software architectures. Ph.D. thesis (2000)
  8. Hindman, B., Konwinski, A., Zaharia, M., Ghodsi, A., Joseph, A.D., Katz, R.H., Shenker, S., Stoica, I.: Mesos: a platform for fine-grained resource sharing in the data center. In: NSDI, vol. 11 (2011)
  9. HP Labs: httperf - a tool for measuring web server performance (2008). http://www.hpl.hp.com/research/linux/httperf/
  10. Kratzke, N., Peinl, R.: ClouNS - a reference model for cloud-native applications. In: Proceedings of the 20th International Conference on Enterprise Distributed Object Computing Workshops (EDOCW 2016) (2016)
  11. Kratzke, N., Quint, P.C.: About automatic benchmarking of IaaS cloud service providers for a world of container clusters. J. Cloud Comput. Res. 1(1), 16–34 (2015)
  12. Kratzke, N., Quint, P.C.: How to operate container clusters more efficiently? Some insights concerning containers, software-defined networks, and their sometimes counterintuitive impact on network performance. Int. J. Adv. Netw. Serv. 8(3&4), 203–214 (2015)
  13. Kratzke, N., Quint, P.C.: ppbench - a visualizing network benchmark for microservices. In: Proceedings of the 6th International Conference on Cloud Computing and Services Science (CLOSER 2016), vol. 2, pp. 223–231 (2016)
  14. netperf.org: The Public Netperf Homepage (2012). http://www.netperf.org
  15. Newman, S.: Building Microservices. O'Reilly Media, San Francisco (2015)
  16. Project Calico: Calico (2016). https://www.projectcalico.org/
  17. R Core Team: R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria (2014). http://www.R-project.org/
  18. Schmid, H., Huber, A.: Measuring a small number of samples, and the 3σ fallacy: shedding light on confidence and error intervals. IEEE Solid-State Circ. Mag. 6(2), 52–58 (2014)
  19. Sun Microsystems: uperf - a network performance tool (2012). http://www.uperf.org
  20. Velásquez, K., Gamess, E.: A comparative analysis of network benchmarking tools. In: Proceedings of the World Congress on Engineering and Computer Science 2009 (WCECS 2009) (2009)
  21. Verma, A., Pedrosa, L., Korupolu, M.R., Oppenheimer, D., Tune, E., Wilkes, J.: Large-scale cluster management at Google with Borg. In: Proceedings of the European Conference on Computer Systems (EuroSys), Bordeaux, France (2015)
  22. Weave Works: Weave (2015). https://github.com/weaveworks/weave

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Center of Excellence for Communication, Systems and Applications (CoSA), Lübeck University of Applied Sciences, Lübeck, Germany
