
A Priority and Fairness Mixed Compaction Scheduling Mechanism for LSM-tree Based KV-Stores

  • Conference paper
  • First Online:
Algorithms and Architectures for Parallel Processing (ICA3PP 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11334)

Abstract

Key-value (KV) stores have become a backbone of large-scale applications in today’s data centers. Write-optimized data structures such as the Log-Structured Merge-tree (LSM-tree) and its variants are widely used in KV storage systems. A conventional LSM-tree organizes KV items into multiple, successively larger components and uses compaction to push KV items from one component to the adjacent larger one until they reach the largest component. Unfortunately, the LSM-tree suffers from severe file retention: too many SSTables accumulate in a single component, so a single compaction involves too many SSTables, runs for a long time, and frequently pauses or even stalls front-end writes. We propose a new compaction scheduling scheme called Slot and implement it on LevelDB. The main idea of Slot is to combine score-centric, priority-based compaction scheduling with time-slice-centric, fairness-based compaction scheduling, which alleviates file retention and thereby decreases the write amplification of LSM-tree based key-value stores. Slot prevents too many files from being involved in a single compaction and reduces the frequency of write pauses and write stops. We conduct extensive evaluations, and the experimental results demonstrate that Slot keeps the write path smoother and outperforms LevelDB by 20–210% in write throughput without sacrificing read latency.
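The abstract only names the two scheduling criteria, but the idea is concrete enough to sketch. The minimal C++ sketch below is our own illustration against LevelDB's level/score model, not the paper's implementation: the struct LevelState, the function PickCompactionLevel, and the time_slice parameter are all hypothetical names. It shows one plausible way to mix a score-based priority pick with a time-slice fairness override.

    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical types and names (LevelState, Score, PickCompactionLevel,
    // time_slice) invented for illustration; this is not LevelDB's or the
    // paper's actual API.
    struct LevelState {
      uint64_t bytes;      // total size of SSTables currently in this level
      uint64_t max_bytes;  // size threshold for this level
      std::chrono::steady_clock::time_point last_scheduled;  // last compaction
    };

    // Score-centric priority: how far a level exceeds its size threshold,
    // in the spirit of LevelDB's compaction score.
    double Score(const LevelState& l) {
      return static_cast<double>(l.bytes) / static_cast<double>(l.max_bytes);
    }

    // Pick the next level to compact. Over-threshold levels normally compete
    // by score (priority), but any over-threshold level that has waited longer
    // than its time slice is served first (fairness), so no level's SSTables
    // pile up unboundedly while another level monopolizes compaction.
    int PickCompactionLevel(const std::vector<LevelState>& levels,
                            std::chrono::milliseconds time_slice) {
      const auto now = std::chrono::steady_clock::now();
      int starved = -1;         // longest-waiting overdue level
      int best = -1;            // highest-score level
      double best_score = 1.0;  // only levels over threshold are candidates
      for (std::size_t i = 0; i < levels.size(); ++i) {
        const LevelState& l = levels[i];
        const double s = Score(l);
        if (s <= 1.0) continue;  // below threshold: no compaction needed
        if (now - l.last_scheduled > time_slice &&
            (starved < 0 || l.last_scheduled < levels[starved].last_scheduled)) {
          starved = static_cast<int>(i);  // fairness candidate
        }
        if (s > best_score) {
          best_score = s;
          best = static_cast<int>(i);  // priority candidate
        }
      }
      return starved >= 0 ? starved : best;  // -1 means nothing to compact
    }

In this sketch, fairness overrides priority only for levels already past their time slice. Capping the number of SSTables admitted into each compaction, as Slot does to keep individual compactions short, would be a natural extension of the same picker.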


Author information

Corresponding author

Correspondence to Yinliang Yue.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Chen, L., Yue, Y., Wang, H., Wu, J. (2018). A Priority and Fairness Mixed Compaction Scheduling Mechanism for LSM-tree Based KV-Stores. In: Vaidya, J., Li, J. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2018. Lecture Notes in Computer Science, vol 11334. Springer, Cham. https://doi.org/10.1007/978-3-030-05051-1_7

  • DOI: https://doi.org/10.1007/978-3-030-05051-1_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-05050-4

  • Online ISBN: 978-3-030-05051-1

  • eBook Packages: Computer Science, Computer Science (R0)
