A Proposal to OpenMP for Addressing the CPU Oversubscription Challenge

  • Yonghong Yan
  • Jeff R. Hammond
  • Chunhua Liao
  • Alexandre E. Eichenberger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9903)


OpenMP has become a successful programming model for developing multi-threaded applications, yet challenges remain in its interoperability, both within OpenMP itself and with other parallel programming APIs. In this paper, we explore typical use cases that expose OpenMP’s interoperability challenges and report the solutions proposed by the OpenMP Interoperability language subcommittee for addressing the resource oversubscription issue. The solutions include OpenMP runtime routines for changing the wait policy of idle threads, ACTIVE (SPIN_BUSY or SPIN_PAUSE) or PASSIVE (SPIN_YIELD or SUSPEND), for improved resource management, and routines for contributing OpenMP threads to other thread libraries or tasks. Our initial implementations extend two OpenMP runtime libraries, Intel OpenMP (IOMP) and GNU OpenMP (GOMP). The evaluation results demonstrate the effectiveness of the proposed approach in addressing the CPU oversubscription challenge, and a detailed analysis provides heuristics for selecting an optimal wait policy according to the oversubscription ratio.


Runtime System · Parallel Region · Thread Pool · Contention Group · OpenMP Thread



We thank members of the OpenMP Interoperability language subcommittee and the language committee in general for providing insightful comments on the design. We are also grateful to Terry Wilmarth and Brian Bliss from Intel for providing information that helped our implementation. This material is based upon work supported by the National Science Foundation under Grants No. SHF-1409946 and SHF-1551182. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Yonghong Yan (1, 5)
  • Jeff R. Hammond (2, 5)
  • Chunhua Liao (3)
  • Alexandre E. Eichenberger (4, 5)

  1. Department of Computer Science and Engineering, Oakland University, Rochester, USA
  2. Parallel Computing Lab, Intel Corp., Santa Clara, USA
  3. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, USA
  4. Thomas J. Watson Research Center, IBM, Yorktown Heights, USA
  5. OpenMP Interoperability Language Subcommittee, Houston, USA
