Implementing Sqoop- and Flume-based Data Transfers
About five years ago, as the Apache Hadoop ecosystem was maturing to meet its data processing challenges, it introduced its own data ingestion tools, including Sqoop and Flume. These tools were initially unfamiliar, as was the rest of the Hadoop ecosystem. At the time, I was assisting an IBM client (a large health insurance company) with its data warehousing needs; its RDBMS-based solution was performing poorly, and the sheer volume of its historical mainframe data (about 10 TB) was a problem in itself. Though Hadoop was new, I convinced the client to run a ten-node Hadoop pilot and used Sqoop to pull the data into HDFS (the Hadoop Distributed File System). We started with only 2 TB, but the response time was roughly one-fiftieth of the mainframe's, even on old hardware with slow disks. Encouraged by these results, we (and the client) eventually deployed the solution to production.
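For context, a Sqoop import of the kind described above boils down to a single command that spawns parallel map tasks to copy the data. The sketch below is illustrative only; the hostname, database, credentials, table, and column names are hypothetical, not the client's actual setup:

```shell
#!/usr/bin/env bash
# Hedged sketch of a Sqoop 1 import from an RDBMS into HDFS.
# All connection details and identifiers below are hypothetical.
sqoop import \
  --connect jdbc:db2://dbhost:50000/CLAIMSDB \
  --username etl_user \
  --password-file hdfs:///user/etl/.db2_password \
  --table CLAIMS_HISTORY \
  --split-by CLAIM_ID \
  --num-mappers 8 \
  --target-dir /data/raw/claims_history
```

The `--split-by` column lets Sqoop partition the table into ranges, and `--num-mappers` controls how many map tasks copy those ranges in parallel; tuning these two settings is what makes bulk transfers of multi-terabyte tables practical.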