Abstract
Benchmarking is essential to the development of database management systems. A benchmark runs well-defined data and workloads on a specific hardware configuration and gathers the results as measurements. It is widely used to evaluate new technology or to compare different systems, and thus drives the progress of database systems. To date, driven by data management demands, new databases have been designed and released for different application requirements, and most state-of-the-art benchmarks are likewise designed for specific types of applications. Based on our experience, however, we argue that, given the characteristics of data and workloads in the big data era, benchmarking transaction processing (TP) databases must devote substantial effort to domain-specific needs in order to reflect the 4V properties (i.e., volume, velocity, variety, and veracity). Driven by the critical transaction processing requirements of new applications, we have seen an explosion of innovative scalable databases and of new processing architectures built on traditional databases to handle highly intensive transaction workloads. Such workloads, called SecKill workloads, can saturate traditional database systems; examples include Tmall's "11\(\cdot 11\)" shopping festival, ticket booking during the Chinese Spring Festival, and stock exchange applications.
In this paper, we first analyze SecKill applications and their implementation logic, and summarize and abstract the business model in detail. We then propose a new benchmark, PeakBench, for simulating SecKill applications, covering workload characteristic definition, workload distribution simulation, and logic implementation. Additionally, we define new evaluation metrics for performance comparison among DBMSs with different implementation architectures, from both micro and macro points of view. Finally, we provide a package of tools for simulation and evaluation.
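The defining feature of a SecKill workload is extreme skew: a handful of "hot" items (flash-sale products, popular train routes, heavily traded stocks) absorb most of the traffic. The sketch below is a minimal illustration of this kind of workload distribution simulation; it is not PeakBench's actual generator, and the parameter names (`hot_fraction`, `hot_items`) are hypothetical choices for this example.

```python
import random

def seckill_requests(n_items=1000, n_requests=10000,
                     hot_fraction=0.9, hot_items=5, seed=42):
    """Generate item ids for a SecKill-style skewed workload:
    a small set of 'hot' items receives most of the traffic.

    Items 0..hot_items-1 are hot; the rest are cold.
    """
    rng = random.Random(seed)
    requests = []
    for _ in range(n_requests):
        if rng.random() < hot_fraction:
            # Route most requests to the few hot items.
            requests.append(rng.randrange(hot_items))
        else:
            # Remaining traffic is spread over the cold items.
            requests.append(rng.randrange(hot_items, n_items))
    return requests

reqs = seckill_requests()
hot_share = sum(1 for r in reqs if r < 5) / len(reqs)
```

Feeding such a trace to a database turns a tiny set of rows into contention hot spots, which is precisely the condition that saturates traditional TP systems.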
Acknowledgment
We are partially supported by the Key Program of National Natural Science Foundation of China (No. 2018YFB1003402) and National Science Foundation of China (No. 61432006).
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, C., Li, Y., Zhang, R., Qian, W., Zhou, A. (2019). Benchmarking for Transaction Processing Database Systems in Big Data Era. In: Zheng, C., Zhan, J. (eds) Benchmarking, Measuring, and Optimizing. Bench 2018. Lecture Notes in Computer Science(), vol 11459. Springer, Cham. https://doi.org/10.1007/978-3-030-32813-9_13
DOI: https://doi.org/10.1007/978-3-030-32813-9_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-32812-2
Online ISBN: 978-3-030-32813-9
eBook Packages: Computer Science (R0)