OpenMP and Compilation Issue in Embedded Applications

  • Jaegeun Oh
  • Seon Wook Kim
  • Chulwoo Kim
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2716)


Embedded systems are becoming increasingly important and are applied everywhere around us, in devices such as mobile phones, PDAs, and HDTVs. In this paper, we apply OpenMP to non-traditional benchmarks, i.e., embedded applications, in order to examine the applicability of OpenMP in this area. We parallelized the EEMBC embedded benchmarks, which consist of 5 categories and 34 applications in total, and measured their performance in detail. In our experiments, we found 90 parallel sections in 17 applications, but achieved speedup in only four of them. Because embedded applications consist of many small loops, parallelization overheads such as thread management and additional instructions prevented speedup. We also show that the OpenMP-parallelized code is much larger than the serial version due to the multithreaded libraries it links in, which is critical for embedded systems because of their limited memory. Finally, we discuss a trivial but critical problem we identified in the current OpenMP specification while applying it to these applications.






References

  1. Silicon Graphics Inc., Origin 2000.
  2. Sun Microsystems Inc., Mountain View, CA, Sun Enterprise 4000.
  3. William Blume, Ramon Doallo, Rudolf Eigenmann, John Grout, Jay Hoeflinger, Thomas Lawrence, Jaejin Lee, David Padua, Yunheung Paek, Bill Pottenger, Lawrence Rauchwerger, and Peng Tu. Parallel programming with Polaris. IEEE Computer, pages 78–82, December 1996.
  4. M. W. Hall, J. M. Anderson, S. P. Amarasinghe, B. R. Murphy, S.-W. Liao, E. Bugnion, and M. S. Lam. Maximizing multiprocessor performance with the SUIF compiler. IEEE Computer, pages 84–89, December 1996.
  5. OpenMP Forum, OpenMP: A Proposed Industry Standard API for Shared Memory Programming, October 1997.
  6.
  7. EEMBC (EDN Embedded Microprocessor Benchmark Consortium).
  8. William M. Pottenger. Induction variable substitution and reduction recognition in the Polaris parallelizing compiler. Technical Report UIUCDCS-R-98-2072, 1998.
  9. Peng Tu and David A. Padua. Automatic array privatization. In Compiler Optimizations for Scalable Parallel Systems Languages, pages 247–284, 2001.
  10. OpenMP Architecture Review Board, OpenMP C and C++ Application Program Interface 2.0, March 2002.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Jaegeun Oh (1)
  • Seon Wook Kim (1)
  • Chulwoo Kim (2)
  1. Advanced Computer Systems Laboratory, Department of Electronics and Computer Engineering, Korea University, Seoul, Korea
  2. Integrated System and Processor Laboratory, Department of Electronics and Computer Engineering, Korea University, Seoul, Korea
