Abstract
About five years ago, when the Apache Hadoop ecosystem was maturing to meet its data processing challenges, it introduced its own tools for data ingestion, including Sqoop and Flume. These tools were initially unfamiliar, as was the rest of the Hadoop ecosystem. I was assisting an IBM client (a large health insurance company) with its data warehousing needs, and its RDBMS-based solution was not performing well. The company also had a lot of historical data on mainframes, and the sheer volume of that data (about 10 TB) was an issue. Though Hadoop was new, I convinced the client of the need for a ten-node Hadoop pilot and used Sqoop to pull the data into HDFS (Hadoop Distributed File System). We tested with only 2 TB, but the response time was about 1/50th of the mainframe response time, even with old hardware and slow disks. This encouraged us (as well as the client), and we ultimately deployed the solution to production.
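A transfer like the one described is typically driven by Sqoop's `import` tool. The sketch below is a minimal, hypothetical example of pulling one relational table into HDFS; the JDBC URL, database, credentials, table name, target directory, and mapper count are all illustrative placeholders, not values from the project above, and the command assumes a configured Hadoop cluster with the appropriate JDBC driver on Sqoop's classpath:

```shell
# Hypothetical Sqoop import: copy one RDBMS table into HDFS.
# Connection string, user, table, and paths are illustrative only.
sqoop import \
  --connect jdbc:db2://mainframe-host:50000/CLAIMSDB \
  --username dwh_user -P \
  --table CLAIMS_HISTORY \
  --target-dir /data/warehouse/claims_history \
  --num-mappers 8 \
  --as-textfile
```

Here `-P` prompts for the password at runtime (safer than putting it on the command line), and `--num-mappers` sets how many parallel map tasks split the transfer; Sqoop parallelizes on the table's primary key unless a `--split-by` column is given.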
Copyright information
© 2016 Bhushan Lakhe
About this chapter
Cite this chapter
Lakhe, B. (2016). Implementing SQOOP and Flume-based Data Transfers. In: Practical Hadoop Migration. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-1287-5_8
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-1288-2
Online ISBN: 978-1-4842-1287-5
eBook Packages: Professional and Applied Computing; Apress Access Books