Design-Driven Compilation

  • Radu Rugina
  • Martin Rinard
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2027)

Abstract

This paper introduces design-driven compilation, an approach in which the compiler uses design information to drive its analysis and verify that the program conforms to its design. Although this approach requires the programmer to specify additional design information, it offers a range of benefits, including guaranteed fidelity to the designer’s expectations of the code, early and automatic detection of design non-conformance bugs, and support for local analysis, separate compilation, and libraries. It can also simplify the compiler and improve its efficiency. The key to the success of our approach is to combine high-level design specifications with powerful static analysis algorithms that handle the low-level details of verifying the design information.

Keywords

Design Information · Access Region · Sort Procedure · Array Index · Program Language Design
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Radu Rugina (1)
  • Martin Rinard (1)

  1. Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA
