Parallel Implementation of Ant-Based Clustering Algorithm Based on Hadoop
Hadoop is a distributed system infrastructure for cloud computing. Based on the characteristics of the ant-based clustering algorithm, this paper implements a parallelization of the algorithm using MapReduce on Hadoop. The Map function calculates the average similarity of each object with the objects in its neighborhood. The Reduce function processes the Map outputs and updates the state of both the ants and the objects in preparation for the next job. Results on Hadoop clusters show that the method significantly improves computational efficiency while maintaining clustering accuracy.
Keywords: Ant-based clustering, Parallelization, Hadoop, MapReduce model
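The abstract describes a Map phase that scores each object by its average similarity to its grid neighborhood and a Reduce phase that applies the ants' pick-up/drop decisions. A production version would be written as Java MapReduce jobs on Hadoop; the following is only an illustrative Python sketch of the per-phase logic, using the classic Lumer–Faieta style probability functions. The constants and function names are hypothetical, not taken from the paper.

```python
import math
import random

# Hypothetical model parameters (illustrative values, not from the paper)
ALPHA = 0.5    # similarity scaling factor
K_PICK = 0.1   # pick-up probability constant
K_DROP = 0.15  # drop probability constant

def similarity(a, b):
    """Distance-based similarity between two data objects (vectors)."""
    return 1.0 - math.dist(a, b) / ALPHA

def map_phase(obj, neighborhood):
    """Map: average similarity f(obj) of an object with the objects in
    its neighborhood, clipped at zero (as in Lumer-Faieta)."""
    if not neighborhood:
        return 0.0
    f = sum(similarity(obj, n) for n in neighborhood) / len(neighborhood)
    return max(f, 0.0)

def reduce_phase(obj_id, f, carrying):
    """Reduce: decide whether an ant picks up or drops the object,
    using the standard pick/drop probability functions."""
    if carrying:
        p_drop = (f / (K_DROP + f)) ** 2
        action = "drop" if random.random() < p_drop else "keep"
    else:
        p_pick = (K_PICK / (K_PICK + f)) ** 2
        action = "pick" if random.random() < p_pick else "leave"
    return obj_id, f, action
```

In the actual Hadoop jobs these functions would emit key-value pairs so that per-object state (position, carrying ant, similarity score) can be rewritten to HDFS between iterations, with each MapReduce job corresponding to one round of ant movement.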