Autonomic and Latency-Aware Degree of Parallelism Management in SPar
Stream processing applications have become a representative workload in current computing systems, and a significant share of them demands parallelism to increase performance. However, when introducing parallelism, programmers often face a trade-off between coding productivity and performance. SPar was created to balance this trade-off for application programmers by using the C++11 attribute annotation mechanism. In SPar, as in other programming frameworks for stream processing, manually defining the number of replicas for the stream operators is a challenge. Moreover, several stream processing applications require low latency, yet explicit latency requirements are poorly supported by state-of-the-art parallel programming frameworks. Since the number of replicas directly affects the application's latency, in this work we propose an autonomic and adaptive strategy that chooses a suitable number of replicas in SPar to meet latency constraints. We experimentally evaluated the implemented strategy on a real-world application, showing that it provides a higher level of abstraction while automatically managing latency.
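The feedback loop described in the abstract, where the observed latency drives the degree of parallelism, can be sketched as a small controller. This is a minimal illustration under assumed names and thresholds (`ReplicaController`, `adapt`, the 0.5 slack factor are all hypothetical), not SPar's actual implementation:

```cpp
#include <cassert>

// Hypothetical feedback controller: periodically adjusts the number of
// replicas of a stream operator so that the observed latency stays under
// a programmer-given target. All names and thresholds are illustrative.
struct ReplicaController {
    int replicas;      // current degree of parallelism
    int max_replicas;  // upper bound (e.g., number of available cores)
    double target_ms;  // latency constraint for the operator

    // Called with the latency observed over the last monitoring window;
    // returns the new number of replicas to deploy.
    int adapt(double observed_ms) {
        if (observed_ms > target_ms && replicas < max_replicas)
            ++replicas;                  // latency too high: add a replica
        else if (observed_ms < 0.5 * target_ms && replicas > 1)
            --replicas;                  // ample slack: release resources
        return replicas;
    }
};
```

A strategy of this shape is autonomic in the MAPE sense: it monitors latency, analyzes it against the constraint, plans a replica change, and executes it, without programmer intervention at runtime.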
Keywords: Autonomic computing · Stream processing · Parallel programming · Adaptive degree of parallelism
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, by the EU H2020-ICT-2014-1 project RePhrase (No. 644235), and by the FAPERGS 01/2017-ARD project ParaElastic (No. 17/2551-0000871-5).