A Priority and Fairness Mixed Compaction Scheduling Mechanism for LSM-tree Based KV-Stores

  • Lidong Chen
  • Yinliang Yue
  • Haobo Wang
  • Jianhua Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11334)

Abstract

Key-value (KV) stores have become a backbone of large-scale applications in today’s data centers. Write-optimized data structures such as the Log-Structured Merge-tree (LSM-tree) and its variants are widely used in KV storage systems. A conventional LSM-tree organizes KV items into multiple, successively larger components, and uses compaction to push KV items from a smaller component to the adjacent larger component until they reach the largest one. Unfortunately, the LSM-tree suffers from a severe file retention phenomenon: many SSTables accumulate in a single component, so too many SSTables become involved in one compaction, which makes that compaction run for a long time and frequently pauses or even stalls front-end writes. We propose a new compaction scheduling scheme called Slot and implement it on LevelDB. The main idea of Slot is to combine score-centric, priority-based compaction scheduling with time-slice-centric, fairness-based compaction scheduling, so as to alleviate file retention and thereby decrease the write amplification of LSM-tree based key-value stores. Slot avoids involving too many files in one compaction and reduces the frequency of write pauses and write stalls. We conduct extensive evaluations, and the experimental results demonstrate that Slot keeps the write path smoother and outperforms LevelDB by 20–210% on write throughput without sacrificing read latency.
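
To make the scheduling idea concrete, below is a minimal, hypothetical C++ sketch, not the paper's actual Slot implementation. The names LevelState, kTimeSliceCompactions, and PickCompactionLevel are invented for illustration. It prefers the level with the highest LevelDB-style score, but caps how many compactions any level may receive within a time slice, so that a single overloaded level cannot monopolize the compaction thread and starve the others.

// Illustrative sketch only: combines LevelDB-style score priority with a
// per-level time-slice quota. Names below are hypothetical, not from the
// paper or from LevelDB's source.
#include <cstdint>
#include <iostream>
#include <vector>

struct LevelState {
  double score = 0.0;       // level size / size limit, as LevelDB computes it
  uint64_t slice_used = 0;  // compactions consumed in the current time slice
};

constexpr uint64_t kTimeSliceCompactions = 4;  // fairness quota per level per round

// Pick the next level to compact: prefer the highest-score level, but skip
// levels that have already exhausted their fairness quota in this round.
int PickCompactionLevel(std::vector<LevelState>& levels) {
  int best = -1;
  double best_score = 1.0;  // only levels whose score exceeds 1.0 need compaction
  for (int i = 0; i < static_cast<int>(levels.size()); ++i) {
    if (levels[i].slice_used >= kTimeSliceCompactions) continue;  // quota spent
    if (levels[i].score > best_score) {
      best_score = levels[i].score;
      best = i;
    }
  }
  if (best < 0) {
    // Every level over threshold used its quota: start a new round and retry.
    for (auto& l : levels) l.slice_used = 0;
    for (int i = 0; i < static_cast<int>(levels.size()); ++i) {
      if (levels[i].score > best_score) { best_score = levels[i].score; best = i; }
    }
  }
  if (best >= 0) ++levels[best].slice_used;
  return best;  // -1 means no compaction is needed
}

int main() {
  std::vector<LevelState> levels{{3.2, 0}, {1.4, 0}, {0.6, 0}};
  for (int step = 0; step < 6; ++step) {
    std::cout << "compact level " << PickCompactionLevel(levels) << "\n";
  }
}

For contrast, stock LevelDB always picks the level with the highest score; the quota here only illustrates how Slot-style time slicing could bound the amount of consecutive compaction work any one level receives.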

Keywords

LSM-tree · KV-Stores · Compaction

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Lidong Chen (1, 2)
  • Yinliang Yue (1, 2)
  • Haobo Wang (1, 2)
  • Jianhua Wu (3)
  1. Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
  2. School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
  3. Tencent TEG, Shenzhen, China
