Shared Memory Abstraction

Chapter in Systems for Big Graph Analytics, part of the book series SpringerBriefs in Computer Science.

Abstract

The systems we have presented so far all adopt the Bulk Synchronous Parallel (BSP) execution model and expose a message-passing API to end users. In this chapter, we review another important class of vertex-centric systems that adopt a shared-memory programming abstraction and support asynchronous execution. We remark that although the programming interface of these systems simulates a shared-memory environment, the underlying execution engine is not a shared-memory one. We focus on the various models that implement the shared-memory abstraction, and leave readers to explore system usage through the respective system websites.
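
To make the contrast concrete, below is a minimal C++ sketch of the two programming styles using a PageRank-style update. The vertex structure and function names are illustrative assumptions only; they do not correspond to the actual APIs of GraphLab, PowerGraph, Pregel, or any other system covered in this book.

    #include <vector>

    // Hypothetical, simplified vertex type; the fields below are illustrative
    // and do not match any real system's API.
    struct Vertex {
        double rank = 1.0;
        int out_degree = 1;
        std::vector<Vertex*> in_neighbors;   // visible under the shared-memory view
        std::vector<Vertex*> out_neighbors;
        std::vector<double> inbox;           // used only by the message-passing style
    };

    // Shared-memory abstraction: the update function reads neighbor state
    // directly, as if the whole graph lived in one address space; the engine
    // is then free to schedule such updates asynchronously, in any order.
    void update_shared_memory(Vertex& v) {
        double sum = 0.0;
        for (const Vertex* u : v.in_neighbors)
            sum += u->rank / u->out_degree;  // direct read of neighbor state
        v.rank = 0.15 + 0.85 * sum;
    }

    // BSP message passing, shown for contrast: a vertex only sees the messages
    // it received and communicates by sending new messages. Here the sends are
    // written straight into neighbor inboxes; a real BSP engine would buffer
    // them and deliver them in the next superstep.
    void compute_message_passing(Vertex& v) {
        double sum = 0.0;
        for (double m : v.inbox) sum += m;
        v.inbox.clear();
        v.rank = 0.15 + 0.85 * sum;
        for (Vertex* u : v.out_neighbors)
            u->inbox.push_back(v.rank / v.out_degree);
    }

In the shared-memory style, the engine hides where neighbor data actually resides (another machine, disk, or GPU memory); this is what we mean by the programming abstraction being shared memory while the underlying execution engine is not.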

Copyright information

© 2017 The Author(s)

About this chapter

Cite this chapter

Yan, D., Tian, Y., Cheng, J. (2017). Shared Memory Abstraction. In: Systems for Big Graph Analytics. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-58217-7_4

  • DOI: https://doi.org/10.1007/978-3-319-58217-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-58216-0

  • Online ISBN: 978-3-319-58217-7
