Abstract
Parallel processing is a technique for designing the high-performance computers required to solve “grand challenge” problems. There are also several other reasons for the increased interest in parallel processing; the first section examines these reasons. Unlike uniprocessor systems, parallel systems can be designed using a variety of architectures; Section 1.2 gives a brief overview of parallel architectures. Efficient resource management is essential if parallel systems are to realize the benefits of parallel processing. In this book, we are concerned with one aspect of resource management: job scheduling, which we introduce in Section 1.3. Starting with Chapter 3, we provide a detailed discussion of various job scheduling policies. The performance of parallel job scheduling policies depends on the software architecture employed; Section 1.4 briefly describes some of the basic software architectures. We conclude the chapter with an overview of the book.
© 2003 Springer Science+Business Media New York
Cite this chapter
Dandamudi, S. (2003). Introduction. In: Hierarchical Scheduling in Parallel and Cluster Systems. Series in Computer Science. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0133-6_1
Print ISBN: 978-1-4613-4938-9
Online ISBN: 978-1-4615-0133-6