Implementing SQOOP and Flume-based Data Transfers

  • Bhushan Lakhe
Chapter

Abstract

About five years ago, when the Apache Hadoop ecosystem was maturing to meet enterprise data processing challenges, it introduced its own tools for data ingestion, including Sqoop and Flume. These tools were initially unfamiliar, as was the rest of the Hadoop ecosystem. I was assisting an IBM client (a large health insurance company) with its data warehousing needs, and its RDBMS-based solution was not performing well. The company also had a lot of historical data on mainframes, and the sheer volume of that data (about 10 TB) was an issue. Though Hadoop was new, I convinced the client of the need for a ten-node Hadoop pilot and used Sqoop to pull the data into HDFS (Hadoop Distributed File System). We tried with only 2 TB, but the response time was about 1/50th of the mainframe's, even with old hardware and slow disks. This encouraged us (as well as the client), and we eventually deployed the solution to production.
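For context, a Sqoop-based pull of relational data into HDFS of the kind described above typically comes down to one import command per source table. The sketch below is illustrative only; the JDBC URL, credentials, table name, target directory, and mapper count are hypothetical placeholders, not the configuration used on that engagement.

    # Illustrative Sqoop import into HDFS (all connection details are hypothetical)
    sqoop import \
      --connect jdbc:db2://warehouse-host:50000/SALESDB \
      --username etl_user -P \
      --table CLAIMS_HISTORY \
      --target-dir /data/staging/claims_history \
      --num-mappers 8

Sqoop splits the import across the requested mappers, so each mapper reads a slice of the source table in parallel; that parallelism is what lets an import like this scale with cluster size.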

Keywords

Configuration File, Incremental Load, Streaming Data, Hadoop Distributed File System, Hadoop Cluster

Copyright information

© Bhushan Lakhe 2016

Authors and Affiliations

  • Bhushan Lakhe
    1. Darien, USA