
Weaving Parallel Threads

Searching for Useful Parallelism in Functional Programs
  • José Manuel Calderón Trilla
  • Simon Poulding
  • Colin Runciman
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9275)

Abstract

As processor clock speeds plateau, chip manufacturers are instead looking to multi-core architectures for increased performance. The ubiquity of multi-core hardware has made parallelism an important tool for writing performant programs. Unfortunately, parallel programming is still considered an advanced technique, and most programs are written sequentially.

We propose to lift this burden from the programmer and let the compiler determine automatically which parts of a program can be executed in parallel. Historically, most attempts at auto-parallelism have relied on static analysis alone. While static analysis can often find safe parallelism, it struggles to determine which parallelism is worthwhile; this is known as the granularity problem. Our work shows that static analysis can be used in conjunction with search: the compiler annotates the program with parallel annotations, executes it, and adjusts the amount of parallelism based on measured execution speed, using search to determine which annotations to enable.
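
For illustration, here is a minimal sketch in Haskell, assuming GHC's Control.Parallel (par and pseq); it is not the compiler described in the paper. Every par annotation below is safe, but sparking such tiny subproblems is rarely worthwhile, which is exactly the granularity problem.

  -- Minimal sketch, assuming GHC's Control.Parallel; an illustration,
  -- not output of the compiler described in the paper.
  import Control.Parallel (par, pseq)

  -- Naive parallel Fibonacci: each 'par' is safe, but sparking tiny
  -- subproblems is rarely worthwhile -- the granularity problem.
  pfib :: Int -> Int
  pfib n
    | n < 2     = n
    | otherwise = x `par` (y `pseq` (x + y))
    where
      x = pfib (n - 1)
      y = pfib (n - 2)

  main :: IO ()
  main = print (pfib 30)

Compiled with ghc -threaded and run with +RTS -N, the program creates many sparks, most of them too small to repay their scheduling overhead.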

This allows static analysis to find the safe parallelism while shifting the burden of finding worthwhile parallelism to search. Our results show that searching over the possible parallel settings achieves better performance than static analysis alone.
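
One simple way to realise such a search is hill climbing over a vector of on/off switches, one per compiler-inserted annotation. The following sketch is only an illustration of that idea, not the authors' implementation; runAndTime is a hypothetical stand-in for building the program with the chosen annotations enabled and timing a run (a constant placeholder keeps the sketch self-contained and runnable).

  import Data.List (minimumBy)
  import Data.Ord (comparing)

  type Setting = [Bool]   -- one switch per compiler-inserted annotation

  -- Hypothetical: compile with the enabled annotations and time a run.
  -- A placeholder cost is used here so the sketch runs on its own.
  runAndTime :: Setting -> IO Double
  runAndTime s = return (fromIntegral (length (filter id s)))

  -- Greedy hill climbing: flip the single switch that most improves the
  -- measured runtime; stop at a local optimum.
  climb :: Setting -> IO Setting
  climb s = do
    t  <- runAndTime s
    ns <- mapM (\s' -> (,) s' <$> runAndTime s') (neighbours s)
    case minimumBy (comparing snd) ((s, t) : ns) of
      (best, tBest) | tBest < t -> climb best
      _                         -> return s
    where
      neighbours xs = [ take i xs ++ [not (xs !! i)] ++ drop (i + 1) xs
                      | i <- [0 .. length xs - 1] ]

  main :: IO ()
  main = climb (replicate 8 True) >>= print

Starting from every annotation enabled, as static analysis would suggest, a search of this shape switches off those annotations that do not pay for themselves at run time.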

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • José Manuel Calderón Trilla (1)
  • Simon Poulding (2)
  • Colin Runciman (1)
  1. University of York, York, UK
  2. Blekinge Institute of Technology, Karlskrona, Sweden
