The Heterogeneous Bulk Synchronous Parallel Model

  • Tiffani L. Williams
  • Rebecca J. Parsons
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1800)


Trends in parallel computing indicate that heterogeneous parallel computing will be one of the most widespread platforms for computation-intensive applications. A heterogeneous computing environment offers considerably more computational power at a lower cost than a parallel computer. We propose the Heterogeneous Bulk Synchronous Parallel (HBSP) model, which is based on the BSP model of parallel computation, as a framework for developing applications for heterogeneous parallel environments. HBSP enhances the applicability of the BSP model by incorporating parameters that reflect the relative speeds of the heterogeneous computing components. Moreover, we demonstrate the utility of the model by developing parallel algorithms for heterogeneous systems.
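To make the idea concrete, the following sketch illustrates how a BSP-style superstep cost might be extended with per-machine speed parameters, as the HBSP model suggests. This is a minimal illustration under assumed conventions, not the paper's exact formulation: `speeds`, `g`, and `L` are hypothetical names for the relative-speed parameters, communication gap, and barrier latency.

```python
def hbsp_superstep_cost(work, comm, speeds, g, L):
    """Estimate one superstep's cost on a heterogeneous machine.

    work[i]   -- local operations performed by processor i
    comm[i]   -- words sent/received by processor i (its h-relation share)
    speeds[i] -- relative computation speed of processor i (1.0 = baseline)
    g         -- communication gap (time per word, normalized)
    L         -- barrier synchronization latency

    Illustrative only: the paper's actual cost model may differ.
    """
    # Computation: the slowest processor (work scaled by its speed) dominates,
    # since all processors must reach the barrier before the superstep ends.
    comp = max(w / s for w, s in zip(work, speeds))
    # Communication: the largest h-relation times the gap g, as in plain BSP.
    h = max(comm)
    return comp + h * g + L

# Two fast machines and one slow machine sharing equal work: the slow
# machine dominates, which is why HBSP algorithms rebalance work by speed.
print(hbsp_superstep_cost([100, 100, 100], [10, 10, 10],
                          [2.0, 2.0, 0.5], g=1.0, L=5.0))
```

Under this sketch, equal work assignment is penalized by the slowest component, motivating the speed-proportional load balancing that heterogeneous BSP algorithms perform.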







Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Tiffani L. Williams (1)
  • Rebecca J. Parsons (1)

  1. School of Computer Science, University of Central Florida, Orlando
