A Hybrid Approach to Proving Memory Reference Monotonicity

  • Conference paper
Languages and Compilers for Parallel Computing (LCPC 2011)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7146)

Abstract

Array references indexed by non-linear expressions or subscript arrays represent a major obstacle to compiler analysis and to automatic parallelization. Most previously proposed solutions either enhance the static analysis repertoire to recognize more patterns, infer array-value properties, and refine the mathematical support, or they apply expensive run-time analysis of memory reference traces to disambiguate these accesses. This paper presents an automated solution based on the static construction of access summaries, in which the reference non-linearity problem can be solved for a large number of reference patterns by extracting arbitrarily shaped predicates that can (in)validate the reference monotonicity property and thus (dis)prove loop independence. Experiments on six benchmarks show that our general technique for dynamic validation of the monotonicity property covers a large class of codes, incurs minimal run-time overhead, and obtains good speedups.
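
To make the idea concrete, the sketch below (not taken from the paper) illustrates the kind of monotonicity predicate such a hybrid scheme can evaluate at run time: if the subscript array driving an indirect update is strictly increasing, every iteration writes a distinct element, so the loop is independent and may be parallelized; otherwise the original sequential loop is executed. The function names, the OpenMP pragma, and the specific check are illustrative assumptions, a minimal stand-in for the arbitrarily shaped predicates the paper extracts statically and validates dynamically.

    #include <stdbool.h>
    #include <stddef.h>

    /* Cheap O(n) run-time predicate: is the subscript array strictly increasing?
     * If so, the indirect writes in indirect_update touch pairwise-distinct
     * elements of a, hence the loop carries no cross-iteration dependences. */
    static bool is_strictly_increasing(const int *ind, size_t n)
    {
        for (size_t i = 1; i < n; ++i)
            if (ind[i] <= ind[i - 1])
                return false;
        return true;
    }

    /* Hypothetical loop with an indirect (subscript-array) access pattern.
     * Static analysis alone cannot prove independence, so the monotonicity
     * predicate is checked at run time and selects the parallel version. */
    void indirect_update(double *a, const int *ind, const double *x, size_t n)
    {
        if (is_strictly_increasing(ind, n)) {
            /* Predicate holds: all a[ind[i]] are distinct, safe to parallelize. */
            #pragma omp parallel for
            for (long i = 0; i < (long)n; ++i)
                a[ind[i]] += x[i];
        } else {
            /* Predicate fails: conservatively run the original sequential loop. */
            for (size_t i = 0; i < n; ++i)
                a[ind[i]] += x[i];
        }
    }

The inspection pass is linear and touches only the subscript array, which is why this style of check incurs far less overhead than tracing and disambiguating the full memory reference stream.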




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Oancea, C.E., Rauchwerger, L. (2013). A Hybrid Approach to Proving Memory Reference Monotonicity. In: Rajopadhye, S., Mills Strout, M. (eds) Languages and Compilers for Parallel Computing. LCPC 2011. Lecture Notes in Computer Science, vol 7146. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36036-7_5

  • DOI: https://doi.org/10.1007/978-3-642-36036-7_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-36035-0

  • Online ISBN: 978-3-642-36036-7
