
Language-Agnostic Optimization and Parallelization for Interpreted Languages

  • Michelle Mills Strout
  • Saumya Debray
  • Kate Isaacs
  • Barbara Kreaseck
  • Julio Cárdenas-Rodríguez
  • Bonnie Hurwitz
  • Kat Volk
  • Sam Badger
  • Jesse Bartels
  • Ian Bertolacci
  • Sabin Devkota
  • Anthony Encinas
  • Ben Gaska
  • Brandon Neth
  • Theo Sackos
  • Jon Stephens
  • Sarah Willer
  • Babak Yadegari
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11403)

Abstract

Scientists are increasingly turning to interpreted languages, such as Python, Java, R, Matlab, and Perl, to implement their data analysis algorithms. While such languages permit rapid software development, their implementations often run into performance issues that slow down the scientific process. Source-level approaches to parallelization are problematic for two reasons: first, many features common to these languages are challenging for the kinds of analyses that parallelization requires; and second, even where such analysis is possible, a language-specific approach implies that each language would need its own parallelizing compiler and/or constructs, resulting in significant duplication of effort.
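The difficulty for source-level analysis can be made concrete with a small, hypothetical Python example (our own illustration, not from the paper): when the operation applied in a loop is selected dynamically, a source-level parallelizer cannot statically prove that iterations are independent, because one candidate operation may hide mutable state.

```python
# Hypothetical illustration: dynamic dispatch defeats static independence
# analysis, because the function actually invoked is only known at run time.

def scale(x):
    return 2 * x                    # pure: iterations would be independent

def accumulate(x, state=[]):
    state.append(x)                 # hidden mutable state: iterations are
    return sum(state)               # order-dependent, NOT parallelizable

def process(data, op_name):
    # The operation is looked up dynamically, e.g. from user configuration.
    op = globals()[op_name]
    return [op(x) for x in data]

print(process([1, 2, 3], "scale"))       # [2, 4, 6]
print(process([1, 2, 3], "accumulate"))  # [1, 3, 6] -- depends on iteration order
```

A static analysis seeing only `process` cannot tell these two cases apart; a dynamic trace of an actual run can.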

The Science Up To Par project is investigating a radically different approach to this problem: automatic parallelization at the machine-code level using trace information. The key to accomplishing this is the static and dynamic analysis of executables and their reconstitution into parallel executables. The central insight is that, with trace information, it should be possible to optimize out the interpreter and other dynamic features in a language-agnostic manner and to create parallelized executables for multicore architectures. If successful, this approach will let scientists continue to develop in the programming environments that most conveniently support their scientific exploration, without paying the performance overheads currently associated with many such environments.
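The core idea of deciding parallelizability from trace information can be sketched in miniature (a toy of our own construction, not the project's actual analysis): given per-iteration sets of memory addresses written during a traced loop, iterations are safe to run in parallel only if no address is touched by more than one iteration.

```python
# Toy sketch of trace-based dependence testing: each element of `trace` is
# the set of memory addresses one loop iteration wrote during execution.

def iterations_independent(trace):
    """Return True if no address is written by more than one iteration."""
    seen = set()
    for writes in trace:
        if seen & writes:      # a later iteration reuses an address an earlier
            return False       # one wrote: a loop-carried dependence
        seen |= writes
    return True

# Iterations writing disjoint array slots, as in a[i] = f(i): parallelizable.
disjoint = [{1000 + 8 * i} for i in range(4)]
# Every iteration updates the same accumulator address: not parallelizable.
shared = [{2000} for _ in range(4)]

print(iterations_independent(disjoint))  # True
print(iterations_independent(shared))    # False
```

A real system would of course also track reads, control dependences, and interpreter-internal state, but the sketch shows why a concrete execution trace exposes dependence information that is invisible at the source level of a dynamic language.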


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

All authors: Department of Computer Science, University of Arizona, Tucson, USA
