
Data Layout in Main Memory

Chapter in: A Course in In-Memory Data Management

Abstract

In this chapter, we address the question of how data is organized in memory. Relational database tables have a two-dimensional structure, but main memory is organized one-dimensionally, providing memory addresses that start at zero and increase serially to the highest available location. The database storage layer therefore has to decide how to map the two-dimensional table structures onto the linear memory address space.
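The two standard mappings are the row layout (all fields of a tuple stored contiguously) and the column layout (all values of an attribute stored contiguously). A minimal sketch, using a hypothetical two-column table invented for illustration:

```python
# Sketch: mapping a 2-D table to a 1-D address space.
# The table contents below are a made-up example, not from the chapter.
table = [
    ["Jane", "Berlin"],
    ["John", "Potsdam"],
    ["Anna", "Hamburg"],
]

# Row layout: concatenate the tuples one after another.
row_layout = [value for row in table for value in row]

# Column layout: concatenate the attribute columns one after another.
column_layout = [row[col] for col in range(len(table[0])) for row in table]

print(row_layout)     # ['Jane', 'Berlin', 'John', 'Potsdam', 'Anna', 'Hamburg']
print(column_layout)  # ['Jane', 'John', 'Anna', 'Berlin', 'Potsdam', 'Hamburg']
```

Both layouts hold the same values; they differ only in which accesses become sequential. Scanning one attribute over all tuples is sequential in the column layout, while reconstructing one full tuple is sequential in the row layout.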




Correspondence to Hasso Plattner.

Self Test Questions


  1. If DRAM can be accessed randomly at the same cost per access, why are consecutive accesses usually faster than strided accesses?

    (a) With consecutive memory locations, the probability that the next requested location has already been loaded into the cache is higher than with random or strided access. Furthermore, the memory page for consecutive accesses is probably already in the TLB.

    (b) The bigger the stride, the higher the probability that two values fall into the same cache line.

    (c) Loading consecutive locations is not faster, since the CPU is better at prefetching random locations than at prefetching consecutive ones.

    (d) With modern CPU technologies such as TLBs, caches, and prefetching, all three access methods deliver the same performance.
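The cache-line effect behind this question can be sketched with a simple count of how many distinct cache lines a sequence of accesses touches. The line size and value size below are illustrative assumptions (64-byte lines, 4-byte values), not figures from the chapter:

```python
# Sketch: consecutive accesses touch far fewer cache lines than strided ones.
# 64-byte cache lines and 4-byte values are assumed for illustration.
CACHE_LINE_BYTES = 64
VALUE_BYTES = 4

def lines_touched(num_accesses, stride):
    """Count the distinct cache lines hit when reading num_accesses values
    spaced `stride` values apart, starting at address 0."""
    return len({(i * stride * VALUE_BYTES) // CACHE_LINE_BYTES
                for i in range(num_accesses)})

print(lines_touched(64, 1))   # consecutive: 64 values fit in 4 cache lines
print(lines_touched(64, 16))  # stride of one full line: 64 distinct lines
```

Each cache line loaded costs a full memory transfer, so fewer distinct lines per value read translates directly into faster scans; this is why answer (a) holds.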


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

Cite this chapter

Plattner, H. (2013). Data Layout in Main Memory. In: A Course in In-Memory Data Management. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36524-9_8
