The NIP Parallel Object-Oriented Computational Model

  • Paul Watson
  • Savas Parastatidis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1602)


Implicitly parallel programming languages place the burden of exploiting and managing parallelism on the compiler and runtime system rather than on the programmer. This paper describes the design of NIP, a runtime system for supporting implicit parallelism in languages that combine functional and object-oriented programming. NIP is designed for scalable distributed-memory systems, including networks of workstations and custom parallel machines. The key components of NIP are: a parallel task execution unit, which includes an efficient method for lazily creating parallel tasks from loop iterations; a distributed shared memory system optimised for parallel object-oriented programs; and a load balancing system for distributing work over the nodes of the parallel system. The paper describes the requirements that an implicitly parallel language places on the runtime system, then details the design of the components that comprise NIP, showing how they meet these requirements. Performance results for NIP running programs on a network of workstations are presented and analysed.
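The lazy task creation scheme mentioned in the abstract can be illustrated with a minimal sketch (this is not the NIP implementation, and all names here are illustrative): a worker runs loop iterations sequentially by default, and only when another worker signals demand for work does it split off the remaining half of its iteration range as a new parallel task, keeping task-creation overhead proportional to actual parallelism.

```python
# Illustrative sketch of lazy task creation from loop iterations
# (hypothetical names; not the NIP runtime's actual interface).
from collections import deque

class LoopTask:
    """A half-open iteration range [lo, hi) to execute."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

def run(task, results, task_queue, demand):
    """Execute iterations sequentially; split lazily when demand() fires."""
    i = task.lo
    while i < task.hi:
        if demand() and task.hi - i > 1:
            mid = (i + task.hi) // 2
            # Only now is a parallel task materialised, and only for
            # the untouched upper half of the range.
            task_queue.append(LoopTask(mid, task.hi))
            task.hi = mid
        results.append(i * i)  # stand-in loop body: square each index
        i += 1

# Single worker driving the queue; demand() fires once, simulating
# one idle-worker request that forces exactly one split.
results, queue = [], deque([LoopTask(0, 8)])
calls = {"n": 0}
def demand():
    calls["n"] += 1
    return calls["n"] == 1

while queue:
    run(queue.popleft(), results, queue, demand)

print(sorted(results))  # squares of 0..7
```

In a real distributed runtime the `demand()` check would be a cheap flag set by a work-stealing or load-balancing message, so the common sequential path pays almost nothing for the latent parallelism.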


Keywords: Load Balancer, Shared Memory, Parallel Task, Runtime System, Object Memory





Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Paul Watson (1)
  • Savas Parastatidis (1)
  1. Department of Computing Science, University of Newcastle upon Tyne, UK
