
Trace-Context Sensitive Performance Profiling for Enterprise Software Applications

  • Conference paper
Performance Evaluation: Metrics, Models and Benchmarks (SIPEW 2008)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 5119)

Included in the following conference series: SIPEW: SPEC International Performance Evaluation Workshop

Abstract

Software response time distributions can exhibit high variance and be multi-modal. Such characteristics reduce the confidence in, and the applicability of, many statistical evaluation techniques.

We contribute an approach that correlates response times with their corresponding operation execution sequences, yielding calling-context sensitive timing behavior models. The approach is based on three equivalence relations: caller-context, stack-context, and trace-context equivalence. To prevent model size explosion, a tree-based hierarchy offers timing behavior models that trade off model size against the amount of calling-context information considered.
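To make the context-based grouping concrete, the following is a minimal, self-contained Java sketch. It is our own illustration, not the paper's implementation and not the Kieker API: the record, class, and method names (Execution, stackContextKey, traceContextKey) and the sample operation names are hypothetical. It partitions monitored response times under the two finer equivalence relations, stack-context and trace-context.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only -- all names are hypothetical, not the
// paper's or Kieker's API. It groups monitored response times by
// calling context at two granularities.
public class ContextProfiling {

    // One monitored operation execution from a trace.
    record Execution(String operation, List<String> callStack,
                     String traceShapeId, double responseTimeMs) {}

    // Stack-context equivalence: same operation reached via the same
    // full call stack.
    static String stackContextKey(Execution e) {
        return e.operation() + " | stack=" + String.join(">", e.callStack());
    }

    // Trace-context equivalence: same operation at the same position in
    // the same trace shape (strictly finer than stack-context).
    static String traceContextKey(Execution e) {
        return e.operation() + " | trace=" + e.traceShapeId();
    }

    public static void main(String[] args) {
        List<Execution> monitoringLog = List.of(
            new Execution("Catalog.search", List.of("Shop.doGet"), "T1", 12.1),
            new Execution("Catalog.search", List.of("Shop.doGet"), "T2", 47.9),
            new Execution("Catalog.search", List.of("Batch.run"),  "T3", 11.7));

        // Group response times per trace context; each group is expected
        // to be closer to unimodal than the pooled distribution.
        Map<String, List<Double>> byTraceContext = new HashMap<>();
        for (Execution e : monitoringLog) {
            byTraceContext
                .computeIfAbsent(traceContextKey(e), k -> new ArrayList<>())
                .add(e.responseTimeMs());
        }
        byTraceContext.forEach((ctx, times) ->
            System.out.println(ctx + " -> " + times));
    }
}
```

In this toy data, the two Catalog.search executions share a call stack but occur in different trace shapes, so they fall into the same stack-context group yet different trace-context groups; that separation is exactly what distinguishes the two equivalence relations.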

In our case study, the approach yields response time distributions with significantly lower standard deviations than models using less or no calling-context information. An example from the performance analysis of an industrial system demonstrates that a multi-modal distribution can be replaced by several unimodal distributions using trace-context analysis.
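The drop in standard deviation has a simple statistical reading, added here as an explanatory note (a standard identity, not a derivation from the paper): by the law of total variance, for response time T and calling context C,

```latex
% Law of total variance: the pooled variance splits into a
% within-context and a between-context component.
\[
  \operatorname{Var}(T)
    = \underbrace{\mathbb{E}\!\left[\operatorname{Var}(T \mid C)\right]}_{\text{within contexts}}
    + \underbrace{\operatorname{Var}\!\left(\mathbb{E}[T \mid C]\right)}_{\text{between contexts}}
\]
```

Conditioning on the calling context removes the between-context term, so the per-context distributions have, on average, no more variance than the pooled one; and when each mode of a multi-modal pooled distribution stems from a distinct context, the per-context distributions become unimodal.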

This work is supported by the German Research Foundation (DFG), grant GRK 1076/1.

Editor information

Samuel Kounev, Ian Gorton, Kai Sachs

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rohr, M., van Hoorn, A., Giesecke, S., Matevska, J., Hasselbring, W., Alekseev, S. (2008). Trace-Context Sensitive Performance Profiling for Enterprise Software Applications. In: Kounev, S., Gorton, I., Sachs, K. (eds) Performance Evaluation: Metrics, Models and Benchmarks. SIPEW 2008. Lecture Notes in Computer Science, vol 5119. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69814-2_18

  • DOI: https://doi.org/10.1007/978-3-540-69814-2_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-69813-5

  • Online ISBN: 978-3-540-69814-2

  • eBook Packages: Computer Science, Computer Science (R0)
