Abstract
Because most of the execution time of a program is typically spent in loops, loop optimization is the main target of optimizing and restructuring compilers. Accurate determination of induction variables and dependencies in loops is of paramount importance to many loop optimization and parallelization techniques, such as generalized loop strength reduction, loop parallelization by induction variable substitution, and loop-invariant expression elimination. In this paper we present a new method for induction variable recognition. Existing methods are either ad hoc and not powerful enough to recognize some types of induction variables, or powerful but not safe. The most powerful method known is symbolic differencing, as demonstrated by the Parafrase-2 compiler in parallelizing the Perfect Benchmarks(R). However, symbolic differencing is inherently unsafe, and a compiler that uses this method may produce incorrectly transformed programs without issuing a warning. In contrast, our method is safe, simpler to implement in a compiler, better suited for controlling loop transformations, and recognizes a larger class of induction variables.
This work was supported in part by NSF grant CCR-9904943
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
van Engelen, R.A. (2001). Efficient Symbolic Analysis for Optimizing Compilers. In: Wilhelm, R. (eds) Compiler Construction. CC 2001. Lecture Notes in Computer Science, vol 2027. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45306-7_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-41861-0
Online ISBN: 978-3-540-45306-2