
Background

Part of the book series: SpringerBriefs in Applied Sciences and Technology (BRIEFSPOLIMI)

Abstract

Since the mid-1990s, researchers have applied machine-learning-based approaches to a number of compiler optimization problems. These techniques improve the quality of the obtained results and, more importantly, make it feasible to tackle the two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply them). The compiler optimization space keeps growing as applications evolve, the number of available optimizations increases, and new target architectures appear. Generic optimization levels in compilers cannot fully exploit newly introduced optimizations and therefore cannot keep up with the growing number of options. This chapter summarizes and classifies recent advances in using machine learning for compiler optimization, focusing on the two major problems of (i) selecting the best optimizations and (ii) ordering the optimization phases. It highlights the approaches taken, the results obtained, holistic comparisons among the different approaches and, finally, a visionary path towards the near future.
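
Purely as an illustrative sketch of the first problem (optimization selection), and not code from the chapter, the snippet below performs a naive iterative-compilation search over random subsets of a few GCC flags; the source file kernel.c, the flag list, and the 20-iteration budget are hypothetical placeholders.

```python
# Minimal sketch of iterative compilation for optimization selection:
# compile a kernel with random subsets of GCC flags and keep the fastest one.
# kernel.c, the flag list, and the search budget are illustrative assumptions.
import random
import subprocess
import time

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions",
         "-fomit-frame-pointer", "-floop-interchange"]

def evaluate(flag_subset, src="kernel.c", runs=3):
    """Compile `src` with the given flags and return the best wall-clock runtime."""
    subprocess.run(["gcc", "-O1", *flag_subset, src, "-o", "kernel.bin"], check=True)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./kernel.bin"], check=True)
        best = min(best, time.perf_counter() - start)
    return best

# Random search over the 2^|FLAGS| selection space (ordering is left to the compiler).
best_cfg, best_time = None, float("inf")
for _ in range(20):
    subset = [f for f in FLAGS if random.random() < 0.5]
    runtime = evaluate(subset)
    if runtime < best_time:
        best_cfg, best_time = subset, runtime
print("best flags:", best_cfg, "time:", best_time)
```

Phase-ordering would additionally require choosing the order (and possible repetition) of the applied passes, which is what makes its search space so much larger than the selection space explored here.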


Notes

  1. https://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html.

  2. http://llvm.org/docs/Passes.html.

  3. The phase-ordering problem has no deterministic upper bound on its search-space size when the length of the optimization sequence is unbounded (see the formulation sketched after this list).
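
As a rough illustration of why the two spaces behave so differently (a common formulation, with placeholder notation not taken from this chapter): with n available optimizations, selecting an on/off subset yields a finite space, while ordering sequences of length up to L, with repetition allowed, grows geometrically and is unbounded as L grows.

```latex
% Illustrative sizes of the two search spaces for n optimizations
% (assumed formulation; the Omega symbols are placeholders, not the book's notation).
\begin{align*}
  |\Omega_{\text{selection}}|      &= 2^{n}, \\
  |\Omega_{\text{phase-ordering}}| &= \sum_{l=0}^{L} n^{l}
                                    = \frac{n^{L+1}-1}{n-1}
                                    \;\longrightarrow\; \infty
                                    \quad \text{as } L \to \infty .
\end{align*}
```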


Author information


Correspondence to Amir H. Ashouri.


Copyright information

© 2018 The Author(s)

About this chapter


Cite this chapter

Ashouri, A.H., Palermo, G., Cavazos, J., Silvano, C. (2018). Background. In: Automatic Tuning of Compilers Using Machine Learning. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-71489-9_1


  • DOI: https://doi.org/10.1007/978-3-319-71489-9_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-71488-2

  • Online ISBN: 978-3-319-71489-9

  • eBook Packages: Engineering (R0)
