
Randomized Parallel Prefetching and Buffer Management

  • Chapter
Advances in Randomized Parallel Computing

Part of the book series: Combinatorial Optimization (COOP, volume 5)


Abstract

There is increasing interest in the use of multiple-disk parallel I/O systems to alleviate the I/O bottleneck. Effective use of I/O parallelism requires careful coordination among data placement, prefetching, and caching policies. We address the problems of I/O scheduling and buffer management in a parallel I/O system. Using the standard parallel disk model with D disks and a shared I/O buffer of M blocks, we study the performance of on-line algorithms that use bounded lookahead.

We first discuss algorithms for read-once reference strings. It is known (see [3]) that any deterministic prefetching algorithm, with either global M-block or local lookahead, must perform a significantly larger number of I/Os than the optimal off-line algorithm. We discuss several prefetching schemes based on randomized data placement, and present a simple prefetching algorithm that is shown to perform the minimum (up to constant factors) expected number of I/Os.
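
To make the setting concrete, here is a minimal Python sketch (our own illustration, not the paper's algorithm) of randomized data placement combined with a greedy prefetcher for a read-once reference string in the parallel disk model. The function names and the exact greedy rule (serve the demanded block's disk first, then fill remaining buffer slots) are our assumptions:

```python
import random
from collections import defaultdict, deque

def place_randomly(blocks, D, seed=0):
    """Randomized data placement: each block is assigned to a disk
    chosen uniformly and independently at random."""
    rng = random.Random(seed)
    return {b: rng.randrange(D) for b in blocks}

def greedy_prefetch_ios(reference, placement, D, M):
    """Count parallel I/Os needed to consume a read-once reference
    string. In each parallel I/O, at most one block is fetched from
    every disk, subject to the shared M-block buffer; the demanded
    block's disk is served first, and remaining buffer slots are
    filled greedily from the other disks."""
    pending = defaultdict(deque)      # disk -> its blocks, in reference order
    for b in reference:
        pending[placement[b]].append(b)
    buffer, ios = set(), 0
    for b in reference:
        while b not in buffer:
            ios += 1                  # one I/O step across all D disks
            order = [placement[b]] + [d for d in range(D) if d != placement[b]]
            for d in order:
                if pending[d] and len(buffer) < M:
                    buffer.add(pending[d].popleft())
        buffer.remove(b)              # read-once: evict after consumption
    return ios
```

The sketch only counts I/Os under one fixed greedy policy; the paper's analysis concerns the expected I/O count of such randomized-placement schemes relative to the off-line optimum.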

For general read-many reference strings, we introduce the concept of write-back, whereby blocks are relocated between disks during the course of the computation. We show that any on-line algorithm with bounded lookahead using deterministic write-back and buffer-management policies must have a competitive ratio of Ω(D). We therefore present a randomized algorithm, RAND-WB, that uses a novel randomized write-back scheme. RAND-WB obtains a competitive ratio of Θ(√D), which is the best achievable by any on-line algorithm with only global M-block lookahead.
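
As an illustration of the randomized write-back idea (a toy construction of ours, not RAND-WB itself), the following sketch shows an M-block LRU buffer that, on eviction, relocates the victim block to a uniformly random disk, so that a block's future location is decorrelated from any adversarial reference string:

```python
import random

class RandWriteBackBuffer:
    """Toy sketch: an M-block buffer over D disks where each evicted
    block is written back to a uniformly random disk rather than its
    original one. LRU replacement is an arbitrary choice here."""
    def __init__(self, D, M, seed=0):
        self.rng = random.Random(seed)
        self.D, self.M = D, M
        self.location = {}     # block -> disk currently holding it
        self.buffer = []       # resident blocks, front = least recently used

    def access(self, block, home_disk):
        self.location.setdefault(block, home_disk)
        if block in self.buffer:
            self.buffer.remove(block)          # refresh LRU position
        elif len(self.buffer) >= self.M:
            victim = self.buffer.pop(0)
            # randomized write-back: the victim moves to a random disk
            self.location[victim] = self.rng.randrange(self.D)
        self.buffer.append(block)
        return self.location[block]            # disk serving this access
```

The sketch captures only the relocation mechanism; the competitive analysis of RAND-WB depends on how write-back interacts with the prefetching and lookahead policies described in the chapter.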



References

  1. S. Albers. The Influence of Lookahead in Competitive Paging Algorithms. In 1st Annual European Symposium on Algorithms, volume 726 of LNCS, pages 1–12. Springer-Verlag, 1993.

  2. R. D. Barve, E. F. Grove, and J. S. Vitter. Simple Randomized Mergesort on Parallel Disks. Parallel Computing, 23(4):601–631, June 1996.

  3. R. D. Barve, M. Kallahalla, P. J. Varman, and J. S. Vitter. Competitive Parallel Disk Prefetching and Buffer Management. In Fifth Annual Workshop on I/O in Parallel and Distributed Systems, pages 47–56. ACM, November 1997.

  4. L. A. Belady. A Study of Replacement Algorithms for a Virtual Storage Computer. IBM Systems Journal, 5(2):78–101, 1966.

  5. D. Breslauer. On Competitive On-Line Paging with Lookahead. In 13th Annual Symposium on Theoretical Aspects of Computer Science, volume 1046 of LNCS, pages 593–603. Springer-Verlag, February 1996.

  6. P. Cao, E. W. Felten, A. R. Karlin, and K. Li. A Study of Integrated Prefetching and Caching Strategies. In Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems, pages 188–197. ACM, May 1995.

  7. P. M. Chen, E. K. Lee, G. A. Gibson, R. H. Katz, and D. A. Patterson. RAID: High-Performance, Reliable Secondary Storage. ACM Computing Surveys, 26(2):145–185, 1994.

  8. C. S. Ellis and D. Kotz. Practical Prefetching Techniques for Multiprocessor File Systems. Journal of Distributed and Parallel Databases, 1(1):33–51, 1993.

  9. A. Fiat, R. Karp, M. Luby, L. McGeoch, D. D. Sleator, and N. E. Young. Competitive Paging Algorithms. Journal of Algorithms, 12(4):685–699, December 1991.

  10. N. L. Johnson and S. Kotz. Urn Models and Their Application: An Approach to Modern Discrete Probability Theory. Wiley, New York, 1977.

  11. M. Kallahalla. Competitive Prefetching and Buffer Management for Parallel I/O Systems. Master's Thesis, Rice University, 1997.

  12. T. Kimbrel and A. R. Karlin. Near-Optimal Parallel Prefetching and Caching. In 37th Annual Symposium on Foundations of Computer Science, pages 540–549. IEEE, October 1996.

  13. D. E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, 1973.

  14. K. K. Lee, M. Kallahalla, B. S. Lee, and P. J. Varman. Performance Comparison of Sequential Prefetch and Forecasting Using Parallel I/O. In Proceedings of the IASTED PDCN Conference, April 1997.

  15. K. K. Lee and P. J. Varman. Prefetching and I/O Parallelism in Multiple Disk Systems. In Proceedings of the 24th International Conference on Parallel Processing, pages III:160–163, August 1995.

  16. L. A. McGeoch and D. D. Sleator. A Strongly Competitive Randomized Paging Algorithm. Algorithmica, 6:816–825, 1991.

  17. V. S. Pai, A. A. Schaffer, and P. J. Varman. Markov Analysis of Multiple-Disk Prefetching Strategies for External Merging. Theoretical Computer Science, 128(1–2):211–239, June 1994.

  18. R. H. Patterson, G. Gibson, E. Ginting, D. Stodolsky, and J. Zelenka. Informed Prefetching and Caching. In Proceedings of the 15th ACM Symposium on Operating Systems Principles, pages 79–95, December 1995.

  19. D. D. Sleator and R. E. Tarjan. Amortized Efficiency of List Update and Paging Rules. Communications of the ACM, 28(2):202–208, February 1985.

  20. P. J. Varman and R. M. Verma. Tight Bounds for Prefetching and Buffer Management Algorithms for Parallel I/O Systems. In Proceedings of the 16th Symposium on Foundations of Software Technology and Theoretical Computer Science, LNCS. Springer-Verlag, December 1996.

  21. J. S. Vitter and E. A. M. Shriver. Optimal Algorithms for Parallel Memory, I: Two-Level Memories. Algorithmica, 12(2–3):110–147, 1994.


Copyright information

© 1999 Kluwer Academic Publishers

About this chapter

Cite this chapter

Kallahalla, M., Varman, P.J. (1999). Randomized Parallel Prefetching and Buffer Management. In: Pardalos, P.M., Rajasekaran, S. (eds) Advances in Randomized Parallel Computing. Combinatorial Optimization, vol 5. Springer, Boston, MA. https://doi.org/10.1007/978-1-4613-3282-4_9


  • DOI: https://doi.org/10.1007/978-1-4613-3282-4_9

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-3284-8

  • Online ISBN: 978-1-4613-3282-4

