Data Flow Implementation of Generalized Guarded Commands

  • R. Govindarajan
  • Sheng Yu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 506)

Abstract

Earlier approaches to executing the generalized alternative/repetitive commands of Communicating Sequential Processes (CSP) attempt the selection of guards in sequential order. These implementations are also based on either shared-memory or message-passing multiprocessor systems, which exploit parallelism only among the processes of a CSP program. In contrast, we propose a data flow implementation of CSP with generalized guarded commands that exploits both inter-process and intra-process concurrency. A significant feature of our implementation is that it attempts the selection of a process's guards in parallel. A simulated model empirically demonstrates the correctness properties, namely 'safety' and 'liveness', of our implementation. The simulation experiments also yield certain efficiency and fairness parameters of the implementation.
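
As a rough illustration only (not the paper's data flow mechanism), the sketch below renders a generalized guarded command, with an input guard, an output guard, and a termination guard, in Go, whose select statement chooses nondeterministically among whichever communication guards are ready. The worker process, channel names, and values are hypothetical and introduced solely for this example.

    package main

    import "fmt"

    // worker loops over a generalized alternative command with three guards:
    // an input guard (receive on in), an output guard (send acc on out), and
    // a termination guard (receive on done). Go's select statement picks
    // nondeterministically among the guards that are ready.
    func worker(in <-chan int, out chan<- int, done <-chan struct{}) {
        acc := 0
        for {
            select {
            case v := <-in: // input guard: accept a value from a peer process
                acc += v
            case out <- acc: // output guard: offer the accumulated value to a peer
                acc = 0
            case <-done: // termination guard
                return
            }
        }
    }

    func main() {
        in := make(chan int)
        out := make(chan int)
        done := make(chan struct{})

        go worker(in, out, done)

        in <- 1            // matches the input guard
        in <- 2            // matches the input guard again
        fmt.Println(<-out) // matches the output guard; prints 3
        close(done)        // fires the termination guard
    }

The point of contact with the abstract is that both the receive and the send appear as guards of a single alternative construct; the paper's contribution is to attempt such guards in parallel within a data flow graph rather than polling them sequentially, which the run-time choice made by select only loosely approximates.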

Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • R. Govindarajan (1)
  • Sheng Yu (2)
  1. VLSI Design Laboratory, McGill University, Montreal, Canada
  2. Department of Computer Science, University of Western Ontario, London, Canada
