Semantic-Aware Automatic Parallelization of Modern Applications Using High-Level Abstractions

  • Chunhua Liao
  • Daniel J. Quinlan
  • Jeremiah J. Willcock
  • Thomas Panas

Automatic introduction of OpenMP into sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has focused only on C and Fortran applications operating on primitive data types. Modern applications using high-level abstractions, such as C++ STL containers and complex user-defined class types, are largely ignored due to the lack of research compilers that can readily recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we use a source-to-source compiler infrastructure, ROSE, to explore compiler techniques to recognize high-level abstractions and to exploit their semantics for automatic parallelization. Several representative parallelization candidate kernels are used to study semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Preliminary results show that the semantics of abstractions can extend the applicability of automatic parallelization to modern applications and expose more opportunities to take advantage of multicore processors.
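The abstract targets loops over high-level abstractions such as C++ STL containers. As an illustrative sketch (not code from the paper), a random-access loop over a `std::vector` with no loop-carried dependences is exactly the kind of kernel a semantic-aware parallelizer could annotate with an OpenMP directive; the function name `scale` and the loop itself are hypothetical:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical kernel: scale every element of a std::vector.
// Because std::vector guarantees random access and the iterations
// are independent, a semantic-aware tool could emit the pragma below.
std::vector<double> scale(const std::vector<double>& in, double factor) {
    std::vector<double> out(in.size());
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(in.size()); ++i)
        out[i] = in[i] * factor;  // no loop-carried dependence
    return out;
}
```

The pragma is a no-op when compiled without OpenMP support, so the sequential semantics are preserved either way.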


Keywords: Automatic parallelization · High-level abstractions · Semantics · ROSE · OpenMP



This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We thank Dr. Qing Yi for her dependence analysis implementation in ROSE.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.



Copyright information

© The Author(s) 2010

Authors and Affiliations

  • Chunhua Liao (1)
  • Daniel J. Quinlan (1)
  • Jeremiah J. Willcock (2)
  • Thomas Panas (1)
  1. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, USA
  2. School of Informatics and Computing, Indiana University, Bloomington, USA
