Parallel Programming with Interacting Processes

  • Peiyi Tang
  • Yoichi Muraoka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1863)

Abstract

In this paper, we argue that interacting processes (IP) with multiparty interactions are an ideal model for parallel programming. The IP model with multiparty interactions was originally proposed by N. Francez and I. R. Forman [1] for distributed programming of reactive applications. We analyze the IP model and provide new insights into it from the parallel programming perspective. Through parallel program examples in IP, we show that the suitability of the IP model for parallel programming lies in its programmability, high degree of parallelism, and support for modular programming. We believe that IP is a good candidate for the mainstream programming model for both parallel and distributed computing in the future.
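
The IP notation itself is not reproduced in this abstract. As a loose illustration only, and not the authors' notation, a multiparty interaction can be approximated in Java by a barrier whose barrier action plays the role of the shared interaction body: each participant contributes a value, the body runs exactly once when all parties have arrived, and every participant then observes the result. The class name ThreePartyInteraction and the summing body below are hypothetical.

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    // Rough sketch of a three-party interaction: the barrier action stands in
    // for the interaction body shared by all participants.
    public class ThreePartyInteraction {
        static final int PARTIES = 3;
        static final int[] contribution = new int[PARTIES];
        static volatile int combined;

        public static void main(String[] args) {
            CyclicBarrier interaction = new CyclicBarrier(PARTIES, () -> {
                // Runs once per interaction, after all parties have arrived.
                combined = contribution[0] + contribution[1] + contribution[2];
            });

            for (int id = 0; id < PARTIES; id++) {
                final int me = id;
                new Thread(() -> {
                    try {
                        contribution[me] = me + 1;   // this party's input
                        interaction.await();         // joint synchronization point
                        System.out.println("party " + me + " sees sum = " + combined);
                    } catch (InterruptedException | BrokenBarrierException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }

Unlike a genuine IP interaction, this sketch fixes the set of participants statically and offers no guarded choice between alternative interactions; it only conveys the idea of a jointly executed body with contributions from, and results visible to, all parties.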

Keywords

Programming Models · Parallel Programming · Interacting Processes · Multiparty Interactions · Programmability · Maximum Parallelism · Modular Programming


References

  1. Nissim Francez and Ira R. Forman. Interacting Processes: A Multiparty Approach to Coordinated Distributed Programming. Addison-Wesley, 1996.
  2. D. B. Skillicorn and D. Talia. Models and languages for parallel computation. ACM Computing Surveys, 30(2):123–169, June 1998.
  3. A. Geist, A. Beguelin, J. Dongarra, W. Jiang, et al. PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press, 1994.
  4. W. Gropp, E. Lusk, and A. Skjellum. MPI: Portable Parallel Programming with the Message Passing Interface. MIT Press, 1994.
  5. K. Li and P. Hudak. Memory coherence in shared virtual memory systems. ACM Transactions on Computer Systems, 7(4):321–359, 1989.
  6. C. Amza, A. L. Cox, S. Dwarkadas, P. Keleher, et al. TreadMarks: Shared memory computing on networks of workstations. IEEE Computer, 29(2):18–28, 1996.
  7. Bradford Nichols, Dick Buttlar, and Jacqueline Proulx Farrell. Pthreads Programming: POSIX Standard for Better Multiprocessing. O'Reilly & Associates, Inc., 1996.
  8. Doug Lea. Concurrent Programming in Java: Design Principles and Patterns. Addison-Wesley Longman, Inc., 1996.
  9. High Performance Fortran Forum. High Performance Fortran language specification. Scientific Programming, 1(1–2):1–170, 1993.
  10. C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall, 1985.
  11. Robin Milner. Communication and Concurrency. Prentice-Hall, 1989.
  12. Michael J. Quinn. Designing Efficient Algorithms for Parallel Computers. McGraw-Hill Book Company, 1987.
  13. C. H. Nevison, D. C. Hyde, G. M. Schneider, and P. T. Tymann. Laboratories for Parallel Computing. Jones and Bartlett Publishers, 1994.
  14. Peter Carlin, Mani Chandy, and Carl Kesselman. The Compositional C++ language definition. Technical report, California Institute of Technology, 1993. http://globus.isi.edu/ccpp/langdef/cc++-def.html
  15. Parallel Fortran Forum. Parallel Fortran from X3H5, version 1. Technical report, X3H5, 1991.

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Peiyi Tang (1)
  • Yoichi Muraoka (2)

  1. Department of Mathematics and Computing, University of Southern Queensland, Toowoomba, Australia
  2. School of Science and Engineering, Waseda University, Tokyo, Japan
