Communicating Multiprocessor-Tasks

  • Conference paper
Languages and Compilers for Parallel Computing (LCPC 2007)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5234)

Abstract

The use of multiprocessor tasks (M-tasks) has been shown to be successful for mixed task and data parallel implementations of algorithms from scientific computing. The approach often increases scalability compared to a pure data parallel implementation, but it restricts data exchange between M-tasks to the beginning or end of their execution, where the exchange expresses data or control dependencies between M-tasks.

In this article, we propose an extension of the M-task model to communicating M-tasks (CM-tasks) which allows communication between M-tasks during their execution. In particular, we present and discuss the CM-task programming model, programming support for designing CM-task programs, and experimental results. Internally, a CM-task comprises communication and computation phases. The communication between different CM-tasks can exploit optimized communication patterns for the data exchange between CM-tasks, e.g., by using orthogonal realizations of the communication. This can be used to further increase the scalability of many applications, including time-stepping methods which use a similar task structure for each time step. This is demonstrated for solution methods for ordinary differential equations.
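To make the distinction concrete, the following is a minimal sketch, not the authors' implementation, of two CM-tasks that exchange intermediate values during execution rather than only at their boundaries. Python threads and queues stand in for processor groups and message passing; all names (`cm_task`, `inbox`, `outbox`) are hypothetical illustrations of the phase structure, not an API from the paper.

```python
# Sketch of the CM-task idea: each task alternates computation phases with
# communication phases, exchanging intermediate data with a peer task
# *during* execution (classic M-tasks would only exchange at start/end).
import queue
import threading

def cm_task(name, start, inbox, outbox, steps, results):
    value = start
    for _ in range(steps):
        value = value * 2             # computation phase: local work
        outbox.put(value)             # communication phase: send intermediate value
        peer = inbox.get()            # ...and receive the peer's intermediate value
        value = (value + peer) // 2   # continue computing with exchanged data
    results[name] = value

# Two queues model the bidirectional channel between the two task groups.
q_ab, q_ba = queue.Queue(), queue.Queue()
results = {}
t1 = threading.Thread(target=cm_task, args=("A", 1, q_ba, q_ab, 3, results))
t2 = threading.Thread(target=cm_task, args=("B", 5, q_ab, q_ba, 3, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both tasks converge after repeated mid-execution exchanges
```

The per-step exchange mirrors the time-stepping methods mentioned above, where each step performs the same computation and communication pattern; in the paper's setting the mid-execution exchange would use optimized (e.g., orthogonal) communication between processor groups rather than shared-memory queues.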




Editor information

Vikram Adve, María Jesús Garzarán, Paul Petersen


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dümmler, J., Rauber, T., Rünger, G. (2008). Communicating Multiprocessor-Tasks. In: Adve, V., Garzarán, M.J., Petersen, P. (eds) Languages and Compilers for Parallel Computing. LCPC 2007. Lecture Notes in Computer Science, vol 5234. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85261-2_20

  • DOI: https://doi.org/10.1007/978-3-540-85261-2_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-85260-5

  • Online ISBN: 978-3-540-85261-2
