Abstract
Today, conditions for the development of parallel and distributed systems would appear to be ideal. On the one hand, the demand for such systems is strong and growing steadily. Traditional supercomputing applications, the so-called Grand Challenges, require the solution of increasingly large problems, and new application areas, such as research on the human genome, have emerged recently. The rapid growth of the Internet has given rise to geographically distributed, networked supercomputers (Grids) and to new classes of distributed commercial applications with parallelism on both the server and the client side. On the other hand, bigger and more powerful systems are being built every year. Microprocessors are rapidly becoming faster and cheaper, enabling more processors to be connected in a single system. New networking hardware with lower latency and greater bandwidth is improving systems' communication performance. Several levels of parallelism are available to the user: within a processor, among several processors in an SMP or a cluster, and among remote machines cooperating via the Internet.
© 2003 Springer-Verlag London
Cite this chapter
Gorlatch, S. (2003). SAT: A Programming Methodology with Skeletons and Collective Operations. In: Rabhi, F.A., Gorlatch, S. (eds) Patterns and Skeletons for Parallel and Distributed Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0097-3_2
DOI: https://doi.org/10.1007/978-1-4471-0097-3_2
Publisher Name: Springer, London
Print ISBN: 978-1-85233-506-9
Online ISBN: 978-1-4471-0097-3