Efficient Distributed File I/O for Visualization in Grid Environments

Conference paper in Simulation and Visualization on the Grid

Abstract

Large-scale simulations running in metacomputing environments face the problem of efficient file I/O. For efficiency it is desirable to write data locally, distributed across the computing environment, and then to minimize data transfer, that is, reduce remote file access. Both aspects require I/O approaches that differ from existing paradigms. For the data output of distributed simulations, one wants to use fast local parallel I/O for all participating nodes, producing a single distributed logical file, while keeping changes to the simulation code as small as possible. For reading the data file, as in postprocessing and file-based visualization, one wants to have efficient partial access to remote and distributed files, using a global naming scheme and efficient data caching, and again keeping the changes to the postprocessing code small. However, all available software solutions require all data to be staged locally (involving possible data recombination and conversion), or suffer from the performance problems of remote or distributed file systems. In this paper we show how to interface the HDF5 I/O library via its flexible Virtual File Driver layer to the Globus Data Grid. We show that combining these two toolkits in a suitable way provides us with a new I/O framework, which allows efficient, secure, distributed and parallel file I/O in a metacomputing environment.
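The key idea above is HDF5's Virtual File Driver (VFD) layer: the application keeps using one file API while a driver selected at open time decides where the bytes actually go (local disk, or, as in the paper, the Globus Data Grid). The following is a minimal conceptual sketch of that driver-layer pattern in Python; the names (`FileDriver`, `MemoryDriver`, `open_file`) are purely illustrative and are not HDF5's actual C VFD API, and the in-memory driver merely stands in for a remote, Globus-backed one.

```python
import io

class FileDriver:
    """Interface every driver must implement (analogue of a VFD class)."""
    def write(self, offset, data):
        raise NotImplementedError
    def read(self, offset, size):
        raise NotImplementedError

class MemoryDriver(FileDriver):
    """Stand-in for a local driver; a grid-backed driver would instead
    forward these byte-range calls to a remote data service."""
    def __init__(self):
        self._buf = io.BytesIO()
    def write(self, offset, data):
        self._buf.seek(offset)
        self._buf.write(data)
    def read(self, offset, size):
        self._buf.seek(offset)
        return self._buf.read(size)

# Driver registry: analogous to selecting a VFD through a
# file-access property list in HDF5.
_drivers = {"memory": MemoryDriver}

def open_file(driver="memory"):
    return _drivers[driver]()

f = open_file("memory")
f.write(0, b"simulation output")
print(f.read(0, 10))  # b'simulation'
```

The point of the pattern is that only `open_file` changes when storage moves from local to remote; everything above the driver layer, including the simulation's output code, stays untouched, which is exactly why the authors could plug Globus underneath HDF5 without rewriting application code.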







Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Benger, W., Hege, HC., Merzky, A., Radke, T., Seidel, E. (2000). Efficient Distributed File I/O for Visualization in Grid Environments. In: Engquist, B., Johnsson, L., Hammill, M., Short, F. (eds) Simulation and Visualization on the Grid. Lecture Notes in Computational Science and Engineering, vol 13. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-57313-2_1

  • Print ISBN: 978-3-540-67264-7

  • Online ISBN: 978-3-642-57313-2
