Towards Truly Elastic Distributed Graph Computing in the Cloud

  • Lu Lu
  • Xuanhua Shi
  • Hai Jin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9464)


Elasticity is critical to the scale-out distributed systems running on today’s large-scale multi-tenant clouds, whether public or private. An elastic distributed data processing system must be capable of: (1) dynamically balancing the computing load among workers in response to their performance heterogeneity and dynamicity; (2) quickly recovering the lost in-memory state of failed workers with acceptable overhead during regular execution.

Unfortunately, we found that the designs of state-of-the-art distributed graph computing systems only work well in small, dedicated clusters. We implement a distributed graph computing prototype, X-Graph, and demonstrate its elasticity in three ways. First, we present menger, a novel two-level graph partitioning framework, which further splits each worker-level partition into several sub-partitions as the basic migration units, each having a “migration affinity” with one of the other workers. Second, we implement a dynamic load balancer based on menger, which prefers as the migration destination the worker that has affinity with the sub-partition to be migrated, completely avoiding costly and sophisticated graph re-partitioning algorithms. Third, we implement a differentiated replication framework, which supports parallel recovery of lost partitions, just like general-purpose dataflow systems.
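The two-level partitioning and affinity-based migration described above can be sketched as follows. This is a minimal illustrative sketch, not the X-Graph implementation: the function names, the hash-based partitioning scheme, and the round-robin affinity assignment are all assumptions made for the example.

```python
from collections import defaultdict

def two_level_partition(vertices, num_workers, subs_per_worker):
    """Hash vertices into worker-level partitions, then split each
    partition into sub-partitions that serve as the migration units."""
    partitions = defaultdict(lambda: defaultdict(list))
    for v in vertices:
        w = hash(v) % num_workers                       # worker-level partition
        s = (hash(v) // num_workers) % subs_per_worker  # sub-partition within it
        partitions[w][s].append(v)
    return partitions

def assign_affinity(partitions, num_workers):
    """Give each sub-partition a 'migration affinity' with one other
    worker, so migrating it there needs no graph re-partitioning."""
    affinity = {}
    for w, subs in partitions.items():
        others = [x for x in range(num_workers) if x != w]
        for i, s in enumerate(subs):
            affinity[(w, s)] = others[i % len(others)]  # round-robin assignment
    return affinity

def pick_migration(overloaded_worker, partitions, affinity):
    """Choose a sub-partition to migrate off an overloaded worker,
    sending it to the worker it has affinity with (heaviest unit first)."""
    subs = partitions[overloaded_worker]
    s = max(subs, key=lambda k: len(subs[k]))
    return s, affinity[(overloaded_worker, s)]
```

Because the destination is fixed in advance by the affinity assignment, the balancer only has to decide *which* sub-partition to move, never *where* to move it, which is what lets it skip online re-partitioning.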


Keywords: Fault Tolerance · Input Graph · Execution Engine · Distributed File System · Dynamic Load Balancing



This paper is partly supported by the NSFC under grant No. 61433019 and No. 61370104, the International Science and Technology Cooperation Program of China under grant No. 2015DFE12860, the MOE-Intel Special Research Fund of Information Technology under grant MOE-INTEL-2012-01, and the Chinese Universities Scientific Fund under grant No. 2014TS008.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
