Journal of Computer Science and Technology, Volume 19, Issue 6, pp 965–972

I/O performance of a RAID-10 style parallel file system

  • Dan Feng
  • Hong Jiang
  • Yi-Feng Zhu


Without any additional hardware cost, all the disks on the nodes of a cluster can be connected together through CEFT-PVFS, a RAID-10 style parallel file system, to provide multi-GB/s parallel I/O performance. I/O response time is one of the most important measures of quality of service for a client, and when multiple clients submit data-intensive jobs at the same time, the response time they experience is an indicator of the power of the cluster. In this paper, a queuing model is used to analyze in detail the average response time when multiple clients access CEFT-PVFS. The results reveal that response time is a function of several operational parameters: it decreases with increases in the I/O buffer hit rate for read requests, the write buffer size for write requests, and the number of server nodes in the parallel file system, while it grows with the I/O request arrival rate. On the other hand, the collective power of a large cluster supported by CEFT-PVFS is shown to sustain a steady and stable I/O response time over a relatively large range of request arrival rates.
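The abstract's qualitative trends (response time falls as server nodes are added and rises with the request arrival rate) can be illustrated with a textbook multi-server queue. The sketch below is not the paper's actual model; it assumes a generic M/M/m queue with Poisson arrivals and exponential service, using the standard Erlang C formula, and the function name and parameters are hypothetical.

```python
import math

def mmc_response_time(lam, mu, m):
    """Mean response time of an M/M/m queue (illustrative sketch).

    lam: aggregate I/O request arrival rate (requests/s)
    mu:  per-server service rate (requests/s)
    m:   number of parallel server nodes
    """
    a = lam / mu          # offered load in Erlangs
    rho = a / m           # per-server utilization; must be < 1 for stability
    if rho >= 1:
        return float("inf")
    # Erlang C: probability that an arriving request must queue
    top = a**m / math.factorial(m)
    bottom = (1 - rho) * sum(a**k / math.factorial(k) for k in range(m)) + top
    erlang_c = top / bottom
    # mean response time = mean service time + expected queueing delay
    return 1 / mu + erlang_c / (m * mu - lam)
```

For example, with `mu = 1.0` and `lam = 0.5`, doubling the server count from 1 to 2 reduces the mean response time, while raising `lam` toward `m * mu` drives it up sharply, matching the behavior the abstract describes.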


Keywords: PVFS, parallel I/O, I/O response time





Copyright information

© Science Press, Beijing China and Allerton Press Inc. 2004

Authors and Affiliations

  1. National Storage System Lab, Department of Computer Science and Engineering, Huazhong University of Science and Technology, Wuhan, P.R. China
  2. Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, U.S.A.
