A static scheduling system for a parallel machine (SM)2-II

  • Ling Xiao-ping
  • Hideharu Amano
Part of the Lecture Notes in Computer Science book series (LNCS, volume 365)


(SM)2-II (Sparse Matrix Solving Machine II) is a large-scale multiprocessor for widespread use in scientific computation. In this machine, problems are described in a concurrent process language. (SM)2-II is designed to manage a large number of small concurrent processes effectively. However, when the granularity of the processes is fine and the number of processes grows large, the overhead of process switching and communication becomes a bottleneck for the machine.

If these processes are statically scheduled before execution, the overhead of process control can be greatly reduced. In this paper, a static scheduling system for concurrent processes is proposed. With this system, processes are scheduled statically and merged into a smaller number of processes according to the number of processing units. In the most favorable case, no operating system is necessary at run time.

In general, finding an exactly optimal schedule is an NP-complete problem. Our system therefore uses a practical heuristic algorithm named LS-M (Level Scheduling with Merging), with which near-optimal results can be obtained within a practical execution time. Several examples, including ordinary differential equations, simultaneous linear equations, and expert systems written in OPS5, are used to evaluate the scheduling system.
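The paper does not spell out LS-M here, but the idea of level scheduling with merging can be sketched as follows: compute each task's level (longest path to a sink in the task graph), assign tasks to processors in decreasing-level order, and merge each processor's task sequence into one sequential process. This is a hypothetical illustration in the style of classic critical-path list scheduling, assuming unit task times and ignoring inter-processor communication costs; the function names and details are this sketch's, not the paper's.

```python
# Hypothetical sketch of level scheduling with merging (LS-M style).
# Assumes unit execution times and omits communication-cost handling.

def levels(tasks, succ):
    """Level of a task = length of the longest path from it to a sink."""
    memo = {}
    def level(t):
        if t not in memo:
            memo[t] = 1 + max((level(s) for s in succ.get(t, [])), default=0)
        return memo[t]
    return {t: level(t) for t in tasks}

def ls_m(tasks, succ, n_procs):
    """Assign tasks to the least-loaded processor in decreasing-level
    order; the tasks placed on one processor form one merged process.
    Because every predecessor has a strictly higher level than its
    successors, each merged sequence respects local precedence."""
    lv = levels(tasks, succ)
    order = sorted(tasks, key=lambda t: -lv[t])
    load = [0] * n_procs                 # accumulated work per processor
    merged = [[] for _ in range(n_procs)]
    for t in order:
        p = load.index(min(load))        # pick the least-loaded processor
        merged[p].append(t)
        load[p] += 1                     # unit execution time assumed
    return merged
```

For a diamond-shaped graph a → {b, c} → d on two processors, the sketch places the critical-path task first and balances the rest, yielding two merged processes that together cover all four tasks.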





Copyright information

© Springer-Verlag Berlin Heidelberg 1989

Authors and Affiliations

  • Ling Xiao-ping (1)
  • Hideharu Amano (1)
  1. Department of Electrical Engineering, Keio University, Yokohama, Japan
