A Tutorial on Parallel and Concurrent Programming in Haskell

  • Simon Peyton Jones
  • Satnam Singh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5832)


This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs, using annotations to express opportunities for parallelism and to help control its granularity for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs, with a focus on the use of software transactional memory to share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
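The semi-explicit style mentioned above can be sketched with the `par` and `pseq` combinators from the `parallel` package: `par` sparks its first argument for possible parallel evaluation, while `pseq` fixes evaluation order. The `nfib`/`parFib` names and the cutoff of 20 below are illustrative choices for this sketch, not taken from the tutorial itself.

```haskell
import Control.Parallel (par, pseq)

-- Naive Fibonacci, used only to create work for the sparks.
nfib :: Int -> Int
nfib n | n < 2     = 1
       | otherwise = nfib (n - 1) + nfib (n - 2)

-- Semi-explicit parallelism: spark one recursive call with `par`,
-- force the other with `pseq` before combining the results.
parFib :: Int -> Int
parFib n
  | n < 20    = nfib n                       -- granularity control:
                                             -- small problems stay sequential
  | otherwise = x `par` (y `pseq` x + y)
  where
    x = parFib (n - 1)
    y = parFib (n - 2)

main :: IO ()
main = print (parFib 25)
```

Compiled with `ghc -threaded` and run with `+RTS -N`, the sparks created by `par` can be picked up by idle capabilities; the cutoff keeps spark granularity coarse enough to be worthwhile.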


Keywords (machine-generated, not supplied by the authors): Parallel Program · Shared Variable · Concurrent Programming · Parallel Array · Data Parallelism




Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Simon Peyton Jones (1)
  • Satnam Singh (1)

  1. Microsoft Research, Cambridge, UK
