Configuration Method of Multiple Clusters for the Computational Grid

  • Pil-Sup Shin
  • Won-Kee Hong
  • Hiecheol Kim
  • Shin-Dug Kim
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1971)

Abstract

A Java-Internet cluster platform (JIP) is designed as a computing platform on the computational grid in order to utilize a large collection of computing resources on the Internet. For this goal, the basic cluster module of JIP is defined as a cluster of heterogeneous systems connected by a high-speed network. For a scalable JIP configuration on the computational grid, the basic cluster module can be expanded into a logical set of multiple clusters. JIP features a Java-based programming environment, a dynamic resource management scheme, and an efficient parallel task execution mechanism. A multiple-cluster configuration is applied to decrease communication time, which is the major bottleneck to performance enhancement. According to the analysis, the multiple-cluster configuration can improve the performance of JIP by about 2.5 to 3 times, depending on the application, compared with a single basic cluster configuration.
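The full paper (and JIP source) is not reproduced on this page, so as a rough, hedged illustration of the multiple-cluster idea summarized above, the Java sketch below groups Internet-connected nodes by their local high-speed network, so that most communication stays inside a basic cluster module and only inter-cluster traffic crosses the wide-area link. All class and method names here are hypothetical and are not part of JIP's actual API.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch (not JIP's actual API): group heterogeneous
    // nodes into basic cluster modules so that most communication uses
    // the local high-speed network rather than the wide-area Internet.
    public class MultiClusterConfig {

        // A computing node reachable on the Internet.
        record Node(String host, String localNetworkId) {}

        // A basic cluster module: nodes sharing one high-speed network.
        record Cluster(String networkId, List<Node> nodes) {}

        // Partition nodes by their local network identifier, creating a
        // new cluster whenever a node's network has not been seen yet.
        static List<Cluster> configure(List<Node> nodes) {
            List<Cluster> clusters = new ArrayList<>();
            for (Node n : nodes) {
                Cluster target = clusters.stream()
                        .filter(c -> c.networkId().equals(n.localNetworkId()))
                        .findFirst()
                        .orElse(null);
                if (target == null) {
                    target = new Cluster(n.localNetworkId(), new ArrayList<>());
                    clusters.add(target);
                }
                target.nodes().add(n);
            }
            return clusters;
        }

        public static void main(String[] args) {
            List<Node> nodes = List.of(
                    new Node("a1.example.edu", "lab-A"),
                    new Node("a2.example.edu", "lab-A"),
                    new Node("b1.example.edu", "lab-B"));
            configure(nodes).forEach(c ->
                    System.out.println(c.networkId() + ": " + c.nodes().size() + " nodes"));
        }
    }

Grouping by local network is only one plausible heuristic for reducing inter-cluster communication; the actual JIP configuration and dynamic resource management scheme are described in the full paper.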

Keywords

Computational Grid · Shared Memory · Code Block · Computing Node · Multiple Cluster
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Pil-Sup Shin (1)
  • Won-Kee Hong (2)
  • Hiecheol Kim (3)
  • Shin-Dug Kim (2)
  1. Sungmi Telecom Electronics Co., Ltd., Korea
  2. Parallel Processing Lab., Dept. of Computer Science, Yonsei University, Seoul, Korea
  3. Computer & Communication Eng., Taegu University, Korea
