Abstract
The use of multiprocessor tasks (M-tasks) has proven successful for mixed task and data parallel implementations of algorithms from scientific computing. The approach often leads to an increase in scalability compared to a pure data parallel implementation, but it restricts the data exchange between M-tasks to the beginning or the end of their execution, where the exchange expresses data or control dependencies between M-tasks.
In this article, we propose an extension of the M-task model to communicating M-tasks (CM-tasks), which allows communication between M-tasks during their execution. In particular, we present and discuss the CM-task programming model, programming support for designing CM-task programs, and experimental results. Internally, a CM-task comprises communication and computation phases. The communication between different CM-tasks can exploit optimized communication patterns for the data exchange, e.g., orthogonal realizations of the communication. This can further increase the scalability of many applications, including time-stepping methods, which use a similar task structure for each time step. We demonstrate this for solution methods for ordinary differential equations.
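The core idea of the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation, which targets message-passing M-task programs): two concurrently running tasks each own part of a coupled ODE system and exchange boundary data *during* every time step, i.e., each step interleaves a communication phase with a computation phase, rather than exchanging data only at task start or end. The task names, the toy system dy_i/dt = -y_i + 0.5*other, and the use of Python threads with queues are illustrative assumptions.

```python
# Hypothetical CM-task sketch: two tasks run concurrently and exchange
# data in a communication phase inside each time step (explicit Euler).
import threading
import queue

def cm_task(my_part, y, send_q, recv_q, h, steps, out):
    # y: the components of the ODE system owned by this task.
    for _ in range(steps):
        # Communication phase: send the local coupling value to the
        # other task and receive its value (mid-execution exchange).
        send_q.put(y[-1] if my_part == 0 else y[0])
        other = recv_q.get()
        # Computation phase: one Euler step of dy_i/dt = -y_i + 0.5*other.
        y = [yi + h * (-yi + 0.5 * other) for yi in y]
    out[my_part] = y

q01, q10 = queue.Queue(), queue.Queue()  # one channel per direction
out = {}
t0 = threading.Thread(target=cm_task, args=(0, [1.0, 1.0], q01, q10, 0.1, 50, out))
t1 = threading.Thread(target=cm_task, args=(1, [2.0, 2.0], q10, q01, 0.1, 50, out))
t0.start(); t1.start()
t0.join(); t1.join()
```

Because each task sends before it receives, the per-step exchange cannot deadlock; in the plain M-task model, by contrast, the coupling value would only be available at task boundaries, forcing either one M-task per time step or a purely data parallel formulation.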
© 2008 Springer-Verlag Berlin Heidelberg
Dümmler, J., Rauber, T., Rünger, G. (2008). Communicating Multiprocessor-Tasks. In: Adve, V., Garzarán, M.J., Petersen, P. (eds) Languages and Compilers for Parallel Computing. LCPC 2007. Lecture Notes in Computer Science, vol 5234. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85261-2_20
Print ISBN: 978-3-540-85260-5
Online ISBN: 978-3-540-85261-2