Abstract
When you’re working with big data in a distributed, parallel-processing environment like Hadoop, job scheduling and workflow management are vital for efficient operation. Schedulers let you share cluster resources among jobs within Hadoop; in the first half of this chapter, I use practical examples to guide you through installing, configuring, and using the Fair and Capacity schedulers for Hadoop V1 and V2. At a higher level, workflow tools manage the relationships between jobs. For instance, a workflow might include jobs that source, clean, process, and output a data set, with each job running in sequence and the output of one forming the input of the next. In the second half of this chapter, I demonstrate how workflow tools such as Oozie manage these relationships.
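As a taste of the scheduler configuration covered in the first half, the snippet below sketches how the Fair Scheduler might be enabled and tuned on Hadoop V2 (YARN). This is a minimal illustration, not the chapter's exact setup: the queue names, weights, and resource figures are hypothetical. The scheduler is selected in yarn-site.xml, and queues are defined in an allocations file (fair-scheduler.xml by default):

    <!-- yarn-site.xml: tell the ResourceManager to use the Fair Scheduler -->
    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>

    <!-- fair-scheduler.xml: two queues sharing the cluster; "etl" gets twice
         the share of "adhoc" plus a guaranteed minimum (hypothetical values) -->
    <allocations>
      <queue name="etl">
        <minResources>2048 mb, 2 vcores</minResources>
        <weight>2.0</weight>
      </queue>
      <queue name="adhoc">
        <weight>1.0</weight>
      </queue>
    </allocations>

Similarly, the source-clean-process chain described above maps naturally onto an Oozie workflow definition, in which each action names its successor. The sketch below assumes hypothetical HDFS paths, relies on jobTracker and nameNode values supplied in a job.properties file, and omits the mapper and reducer settings for each MapReduce action for brevity:

    <workflow-app name="etl-flow" xmlns="uri:oozie:workflow:0.4">
      <start to="clean-data"/>
      <!-- first job: clean the raw input -->
      <action name="clean-data">
        <map-reduce>
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <configuration>
            <!-- mapper/reducer class properties omitted for brevity -->
            <property><name>mapred.input.dir</name><value>/data/raw</value></property>
            <property><name>mapred.output.dir</name><value>/data/clean</value></property>
          </configuration>
        </map-reduce>
        <ok to="process-data"/>
        <error to="fail"/>
      </action>
      <!-- second job: consume the first job's output -->
      <action name="process-data">
        <map-reduce>
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <configuration>
            <property><name>mapred.input.dir</name><value>/data/clean</value></property>
            <property><name>mapred.output.dir</name><value>/data/processed</value></property>
          </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail">
        <message>Workflow failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
      </kill>
      <end name="end"/>
    </workflow-app>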
Copyright information
© 2015 Michael Frampton
About this chapter
Cite this chapter
Frampton, M. (2015). Scheduling and Workflow. In: Big Data Made Easy. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-0094-0_5
DOI: https://doi.org/10.1007/978-1-4842-0094-0_5
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-0095-7
Online ISBN: 978-1-4842-0094-0