Memory Management Strategies in CPU/GPU Database Systems: A Survey

  • Iya Arefyeva
  • David Broneske
  • Gabriel Campero
  • Marcus Pinnecke
  • Gunter Saake
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 928)

Abstract

GPU-accelerated in-memory database systems have gained considerable popularity over the last several years. However, GPUs have limited memory capacity, and the data to be processed might not fit into GPU memory entirely, causing a memory overflow. Fortunately, this problem has several possible solutions, such as splitting the data and processing each portion separately, or keeping the data in main memory and transferring it to the GPU on demand. This paper provides a survey of four main techniques for managing GPU memory and their applications to query processing in cross-device powered database systems.
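
The divide-and-conquer strategy mentioned above can be made concrete with a short CUDA sketch. The following is a minimal illustration, not code from the paper or from any surveyed system: the chunk budget (CHUNK_ELEMS) and the operator (process_chunk, which merely doubles each value) are assumptions chosen for brevity. A dataset larger than the assumed GPU budget is staged to the device chunk by chunk, processed, and copied back.

    // Divide-and-conquer sketch: process a host-resident array in GPU-sized chunks.
    // CHUNK_ELEMS and process_chunk are illustrative assumptions, not a real DBMS API.
    #include <cuda_runtime.h>
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    __global__ void process_chunk(const int *in, int *out, size_t n) {
        size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * 2;  // stand-in for a real query operator
    }

    int main() {
        const size_t total = 1 << 26;        // 64M integers: assumed too large for the GPU budget
        const size_t CHUNK_ELEMS = 1 << 22;  // 4M integers per chunk (assumed device budget)

        std::vector<int> host_in(total, 1), host_out(total);

        int *d_in = nullptr, *d_out = nullptr;
        cudaMalloc(&d_in,  CHUNK_ELEMS * sizeof(int));
        cudaMalloc(&d_out, CHUNK_ELEMS * sizeof(int));

        for (size_t off = 0; off < total; off += CHUNK_ELEMS) {
            size_t n = std::min(CHUNK_ELEMS, total - off);
            // Stage one portion, run the operator on it, copy the result back.
            cudaMemcpy(d_in, host_in.data() + off, n * sizeof(int), cudaMemcpyHostToDevice);
            process_chunk<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
            cudaMemcpy(host_out.data() + off, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
        }

        cudaFree(d_in);
        cudaFree(d_out);
        printf("first result: %d\n", host_out[0]);
        return 0;
    }

The on-demand alternatives covered by the survey (mapped memory, Unified Virtual Addressing, Unified Memory) replace the explicit staging loop with allocations such as cudaHostAlloc with the cudaHostAllocMapped flag or cudaMallocManaged, letting the GPU access host-resident data directly (mapped memory) or letting the driver migrate pages on demand (Unified Memory).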

Keywords

Cross-device query processing · GPU memory management · Divide-and-conquer · Mapped memory · Unified Virtual Addressing · Unified Memory

Notes

Acknowledgment

This work was partially funded by the DFG (grant no.: SA 465/50-1).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Iya Arefyeva (1)
  • David Broneske (1)
  • Gabriel Campero (1)
  • Marcus Pinnecke (1)
  • Gunter Saake (1)

  1. University of Magdeburg, Magdeburg, Germany
