A Method for Improving the Precision and Coverage of Atomicity Violation Predictions

  • Reng Zeng
  • Zhuo Sun
  • Su Liu
  • Xudong He
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9035)

Abstract

Atomicity violations are the most common non-deadlock concurrency bugs and have been extensively studied in recent years. Since detecting actual occurrences of atomicity violations is extremely hard and exhaustive testing of a multi-threaded program is in general impossible, many predictive methods have been proposed that predict errors based on a small number of instrumented interleaved executions. Predictive methods often trade off precision against coverage. An over-approximate predictive method ensures coverage but lacks precision, and thus may report a large number of false bugs. An under-approximate predictive method ensures precision but lacks coverage, and thus can miss significant real bugs. This paper presents a post-prediction analysis method that improves the precision of prediction results obtained through over-approximation while achieving better coverage than under-approximation. Our method analyzes and filters the prediction results of over-approximation by evaluating a subset of read-after-write relationships, rather than enforcing all of them as existing under-approximation methods do. Our post-prediction method is a static analysis over the traces predicted from dynamic instrumentation of C/C++ executables, and is faster than dynamic replaying methods for ensuring precision.
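The filtering idea in the abstract can be illustrated with a small sketch (an assumption-laden simplification, not the authors' implementation): record, for each read in the observed trace, which write it observed (its read-after-write source), then keep a predicted interleaving only if a chosen subset of those reads still observes the same writes. Checking only a subset, rather than all reads, is what distinguishes this from a full under-approximation. All names (`Event`, `raw_map`, `feasible`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Set

@dataclass(frozen=True)
class Event:
    tid: int   # thread id
    op: str    # "R" (read) or "W" (write)
    var: str   # shared variable name
    eid: int   # unique event id

def raw_map(trace: List[Event]) -> Dict[int, Optional[int]]:
    """For each read, record the id of the last write to the same
    variable preceding it in the trace (None if no prior write)."""
    last_write: Dict[str, int] = {}
    raws: Dict[int, Optional[int]] = {}
    for e in trace:
        if e.op == "W":
            last_write[e.var] = e.eid
        else:
            raws[e.eid] = last_write.get(e.var)
    return raws

def feasible(predicted: List[Event],
             observed_raws: Dict[int, Optional[int]],
             checked_reads: Set[int]) -> bool:
    """Post-prediction filter: keep the predicted interleaving only if
    every read in `checked_reads` still observes the same write as in
    the original trace. Reads outside the subset stay unconstrained,
    unlike an under-approximation that enforces all of them."""
    predicted_raws = raw_map(predicted)
    return all(predicted_raws[r] == observed_raws[r] for r in checked_reads)
```

For example, if thread 1 writes `x` and then reads it, a predicted interleaving that moves thread 2's write of `x` between the two changes the read's source write; checking that read filters the prediction out, while an empty checked subset keeps it.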

Keywords

Shared Variable · Concurrent Program · Predictive Method · Data Constraint · False Prediction
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. School of Computing and Information Sciences, Florida International University, Miami, USA
