
Runtime and Memory Evaluation of Data Race Detection Tools

  • Pei-Hung Lin (email author)
  • Chunhua Liao
  • Markus Schordan
  • Ian Karlin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11245)

Abstract

An analysis tool’s usefulness depends on whether its runtime and memory consumption remain within reasonable bounds for a given program. In this paper we present an evaluation of the memory consumption and runtime of four data race detection tools: Archer, ThreadSanitizer, Helgrind, and Intel Inspector, using the 79 microbenchmarks of DataRaceBench version 1.1.1. Our evaluation consists of four analyses: (1) the runtime and memory consumption of the four data race detection tools on all DataRaceBench microbenchmarks, (2) a comparison of the analysis techniques implemented in the evaluated tools, (3) an in-depth analysis of the runtime behavior of selected benchmarks with a CPU profiler, including a discussion of the identified differences, and (4) a data analysis investigating correlations within the collected data. We also show the effectiveness of the tools using three quantitative metrics: precision, recall, and accuracy.
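As a minimal illustration of the three effectiveness metrics, the Python sketch below computes precision, recall, and accuracy from per-benchmark tool verdicts compared against the known ground truth. The benchmark names, the verdict dictionaries, and the evaluate helper are illustrative assumptions, not artifacts of the paper or of DataRaceBench.

    # Sketch: derive precision, recall, and accuracy from per-benchmark verdicts.
    # Benchmark names and verdicts below are made up for illustration only.
    from typing import Dict, Tuple

    def evaluate(ground_truth: Dict[str, bool], reported: Dict[str, bool]) -> Tuple[float, float, float]:
        """ground_truth[b]: benchmark b truly contains a data race.
        reported[b]: the tool reported a race for benchmark b."""
        tp = sum(1 for b, race in ground_truth.items() if race and reported.get(b, False))
        fp = sum(1 for b, race in ground_truth.items() if not race and reported.get(b, False))
        fn = sum(1 for b, race in ground_truth.items() if race and not reported.get(b, False))
        tn = sum(1 for b, race in ground_truth.items() if not race and not reported.get(b, False))
        precision = tp / (tp + fp) if (tp + fp) else 0.0   # reported races that are real
        recall    = tp / (tp + fn) if (tp + fn) else 0.0   # real races that were reported
        accuracy  = (tp + tn) / len(ground_truth)          # correct verdicts overall
        return precision, recall, accuracy

    # Hypothetical verdicts for two microbenchmarks (one racy, one race-free).
    truth    = {"racy-kernel.c": True,  "race-free-kernel.c": False}
    verdicts = {"racy-kernel.c": True,  "race-free-kernel.c": True}
    print(evaluate(truth, verdicts))   # -> (0.5, 1.0, 0.5)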

Keywords

Data race detection · DataRaceBench · Evaluation · Benchmark


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Pei-Hung Lin (1, email author)
  • Chunhua Liao (1)
  • Markus Schordan (1)
  • Ian Karlin (1)

  1. Lawrence Livermore National Laboratory, Livermore, USA
