
Benchmarking for Transaction Processing Database Systems in Big Data Era

  • Conference paper in: Benchmarking, Measuring, and Optimizing (Bench 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11459)


Abstract

Benchmarking is an essential practice supporting the development of database management systems. A benchmark runs a well-defined set of data and workloads on a specific hardware configuration and gathers the results as measurements. It is widely used to evaluate new technology or to compare different systems, and thus promotes the progress of database systems. To date, driven by data management needs, new databases have been designed and released for different application requirements, and most state-of-the-art benchmarks are likewise designed for specific types of applications. Based on our experience, however, we argue that, given the characteristics of data and workloads in the big data era, benchmarking transaction processing (TP) databases must address domain-specific needs to reflect the 4V properties (i.e., volume, velocity, variety, and veracity). Driven by the critical transaction processing requirements of new applications, we see an explosion of innovative scalable databases and of new processing architectures built on traditional databases to handle highly intensive transaction workloads. Such workloads, which we call SecKill (flash-sale) workloads, can saturate traditional database systems; examples include Tmall's "11·11" sale, ticket booking during the China Spring Festival, and stock exchange applications.
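The defining stress of a SecKill workload is that nearly every transaction targets the same few inventory rows. A minimal sketch of that hotspot, using an assumed one-table schema and SQLite purely for illustration (this is not the paper's implementation), shows why such workloads saturate row-level concurrency control:

```python
import sqlite3

# Hypothetical flash-sale inventory: one hot row that every buyer contends on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 100)")  # 100 units on sale
conn.commit()

def try_buy(conn, item_id):
    """Attempt one purchase: a conditional decrement that only
    succeeds while stock remains on the hot row."""
    cur = conn.execute(
        "UPDATE items SET stock = stock - 1 WHERE id = ? AND stock > 0",
        (item_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # True iff the decrement took effect

# 10,000 buyers chase 100 units: exactly 100 succeed, the rest are rejected.
successes = sum(try_buy(conn, 1) for _ in range(10_000))
remaining = conn.execute("SELECT stock FROM items WHERE id = 1").fetchone()[0]
print(successes, remaining)  # 100 0
```

Under real concurrency, every one of those updates serializes on the same record, which is precisely the bottleneck a SecKill benchmark must reproduce.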

In this paper, we first analyze SecKill applications and their implementation logic, and summarize and abstract the business model in detail. We then propose a new benchmark, PeakBench, for simulating SecKill applications, covering workload characteristic definition, workload distribution simulation, and logic implementation. Additionally, we define new evaluation metrics for performance comparison among DBMSs with different implementation architectures, from both micro and macro points of view. Finally, we provide a package of tools for simulation and evaluation.
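To give a sense of what "workload distribution simulation" must capture, the sketch below models two signature traits of a flash sale: a sharp arrival burst at sale start that decays over time, and Zipf-skewed item popularity. All parameters and formulas here are our own illustrative assumptions, not PeakBench's actual generator:

```python
import math
import random

random.seed(42)

def burst_rate(t, peak=50_000, decay=0.5):
    """Assumed offered load (requests/second) t seconds after the
    sale opens: a spike that decays exponentially."""
    return peak * math.exp(-decay * t)

# Zipf-skewed item popularity: item 1 is hottest, weights fall as rank^-s.
N_ITEMS, SKEW = 1000, 1.2
weights = [1.0 / (rank ** SKEW) for rank in range(1, N_ITEMS + 1)]

def zipf_item():
    """Draw an item id (1 = hottest) under the skewed popularity."""
    return random.choices(range(1, N_ITEMS + 1), weights=weights)[0]

# Offered load over the first 10 seconds of the sale: starts at the
# peak and collapses quickly, unlike a steady TPC-C-style stream.
load = [round(burst_rate(t)) for t in range(10)]
print(load)

# Skew check: what share of 10,000 requests hit the 10 hottest items?
hits = sum(1 for _ in range(10_000) if zipf_item() <= 10)
print(hits / 10_000)  # roughly 0.57: the 1% hottest items take over half the load
```

A generator shaped like this is what distinguishes a SecKill simulation from uniform or steady-state benchmarks, and it is the kind of distribution a peak-oriented metric must be evaluated against.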



Acknowledgment

We are partially supported by the Key Program of National Natural Science Foundation of China (No. 2018YFB1003402) and National Science Foundation of China (No. 61432006).

Author information

Correspondence to Rong Zhang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, C., Li, Y., Zhang, R., Qian, W., Zhou, A. (2019). Benchmarking for Transaction Processing Database Systems in Big Data Era. In: Zheng, C., Zhan, J. (eds) Benchmarking, Measuring, and Optimizing. Bench 2018. Lecture Notes in Computer Science, vol 11459. Springer, Cham. https://doi.org/10.1007/978-3-030-32813-9_13


  • DOI: https://doi.org/10.1007/978-3-030-32813-9_13


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32812-2

  • Online ISBN: 978-3-030-32813-9

  • eBook Packages: Computer Science, Computer Science (R0)
