Tasklettes – A Fine Grained Parallelism for Ada on Multicores

  • Stephen Michell
  • Brad Moore
  • Luís Miguel Pinho
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7896)

Abstract

The widespread use of multi-CPU computers challenges programming languages, which must adapt to express potential parallelism at the language level. In this paper we propose a new model for fine-grained parallelism in Ada, putting forward an aspect-based syntax and the corresponding semantics that integrate this model with the existing Ada tasking capabilities. We also propose a standard interface and show how users or library writers can extend it to implement their own parallelization strategies.
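
To make the proposal concrete, the sketch below illustrates the two ingredients the abstract names: code marked as parallelisable through an aspect, and a standard, user-extensible interface for parallelization strategies. Everything in the sketch is a hypothetical illustration; the aspect name Parallel, its placement, and the Parallelism package are placeholders invented here for exposition, not the paper's actual syntax.

    --  Hypothetical sketch only: names and aspect placement are
    --  illustrative placeholders, not the paper's proposal.
    package Parallelism is

       --  A strategy interface that users or library writers could
       --  extend, e.g. with work-sharing or work-stealing schemes.
       type Strategy is limited interface;

       --  Apply Work to each index in First .. Last, partitioning
       --  the range among lightweight tasklettes as the concrete
       --  strategy sees fit.
       procedure Parallel_For
         (S     : in out Strategy;
          First : Positive;
          Last  : Natural;
          Work  : not null access procedure (Index : Positive))
       is abstract;

    end Parallelism;

    --  Proposed-style usage. Standard Ada 2012 does not accept an
    --  aspect on a loop; permitting exactly this kind of annotation
    --  is the language extension the paper explores:
    --
    --    for I in Data'Range loop
    --       Results (I) := Process (Data (I));
    --    end loop
    --      with Parallel;

A library writer would then derive a concrete type from Strategy, say a work-stealing scheduler overriding Parallel_For, and the runtime would map the resulting tasklettes onto Ada tasks, typically one per core.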

Keywords

Work Plan, Single Instruction Multiple Data, Code Fragment, Parallelism Strategy, Parallel Loop

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Stephen Michell (1)
  • Brad Moore (2)
  • Luís Miguel Pinho (3)
  1. Maurya Software Inc, Canada
  2. General Dynamics, Canada
  3. CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Portugal
