New ideas in parallel lisp: Language design, implementation, and programming tools

  • Robert H. Halstead, Jr.
Part I: Parallel Lisp Languages and Programming Models
Part of the Lecture Notes in Computer Science book series (LNCS, volume 441)

Abstract

A Lisp-based approach is attractive for parallel computing since Lisp languages and systems assume significant clerical burdens, such as storage management. Parallel Lisps thus enable programmers to focus on the new problems introduced by using concurrency. Parallel Lisps now exist that can execute realistic applications with “industrial-strength” performance, but there are applications whose requirements they do not handle elegantly. Recent work has contributed new, elegant ideas in the areas of speculative computation, continuations, exception handling, aggregate data structures, and scheduling. Using these ideas, it should be possible to build “second generation” parallel Lisp systems that are as powerful and elegantly structured as sequential Lisp systems.

This paper surveys these recent ideas and explores how they could fit together in the parallel Lisp systems of the future, examining issues at three levels: language design, implementation techniques, and programming tools. The discussion is based on the Multilisp programming language, which is Scheme (a Lisp dialect) extended with the future construct. The paper outlines three criteria for judging Scheme extensions for parallel computing: compatibility with sequential Scheme, invariance of the result when future is introduced into side-effect-free Scheme programs, and modularity. Proposed language mechanisms, such as support for first-class continuations, are evaluated against these criteria.
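The second criterion above can be illustrated with a small sketch (hypothetical code, not taken from the paper): in Multilisp, wrapping a side-effect-free expression in `future` changes *when* its value is computed, but not *what* that value is, so the annotated program returns the same result as the sequential one.

```scheme
;; Parallel tree sum in Multilisp-style Scheme (illustrative sketch).
;; (future E) immediately returns a placeholder for E while a child
;; task computes it; strict operations such as + "touch" the
;; placeholder and block until its value is ready.
(define (tree-sum tree)
  (if (number? tree)
      tree
      (+ (future (tree-sum (car tree)))   ; left subtree in parallel
         (tree-sum (cdr tree)))))         ; right subtree in this task

;; Deleting the future annotation yields an ordinary sequential
;; Scheme program with the identical result, since tree-sum has
;; no side effects.
```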

In the area of implementation techniques, results of experiments with lazy task creation, unfair scheduling, and parallel garbage collection are surveyed; some areas that need more investigation, such as scheduler implementation for speculative computing, and interaction between user-level and operating-system schedulers, are identified. Finally, past work in tools to help with the development of Multilisp programs is surveyed, and needs for additional tools are discussed.

Keywords

Parallel Program, Garbage Collection, Sequential Scheme, Exception Handling, Task Creation



Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Robert H. Halstead, Jr.
  1. Digital Equipment Corporation, Cambridge Research Lab, Cambridge
