OpenMP Tasking Model for Ada: Safety and Correctness

  • Sara Royuela
  • Xavier Martorell
  • Eduardo Quiñones
  • Luis Miguel Pinho
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10300)


The safety-critical real-time embedded domain increasingly demands parallel architectures to fulfill performance requirements. Such architectures in turn require parallel programming models to exploit the underlying parallelism. This paper evaluates the use of OpenMP, a widespread parallel programming model, with Ada, a language widely used in the safety-critical domain.

Concretely, this paper shows that applying the OpenMP tasking model to exploit fine-grained parallelism within Ada tasks does not compromise the safety and correctness of programs, which is vital in the environments where Ada is most widely used. Moreover, we compare the OpenMP tasking model with the proposed Ada extensions that define parallel blocks, parallel loops and reductions. Overall, we conclude that the OpenMP tasking model can be safely used in such environments and is a promising approach for exploiting fine-grained parallelism in Ada tasks, and we identify the issues that still require further research.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Barcelona Supercomputing Center, Barcelona, Spain
  2. CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Porto, Portugal
