Specification of Parametric Monitors

Quantified Event Automata versus Rule Systems
  • Klaus Havelund
  • Giles Reger


Specification-based runtime verification is a technique for monitoring program executions against specifications formalized in a formal logic. Such logics are usually temporal in nature, capturing the relation between events occurring at different points in time. A particular challenge in runtime verification is the elegant specification and efficient monitoring of streams of events that carry data, also referred to as parametric monitoring. This paper presents two parametric runtime verification systems representing two quite different approaches to the problem. QEA (Quantified Event Automata) is a state-machine approach based on trace slicing, while LogFire is a rule-based approach built on the Rete algorithm, known from AI as the basis of many rule systems. The presentation focuses on how easy it is to specify properties in the two approaches, illustrated with a collection of properties gathered during the 1st International Competition of Software for Runtime Verification (CSRV 2014), affiliated with RV 2014 in Toronto, Canada.
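To make the trace-slicing idea concrete, here is a minimal sketch in the spirit of QEA (not the actual QEA or LogFire tooling): the trace is sliced per binding of a quantified variable (here, a file identifier), and each slice drives its own small automaton for the classic property "every opened file is eventually closed". The event names and the property are illustrative assumptions, not taken from the paper.

```python
# Sketch of parametric monitoring via trace slicing: one automaton
# state per binding of the quantified variable (the file id).
# Automaton per slice: CLOSED --open--> OPEN --close--> CLOSED.
from collections import defaultdict

def monitor_open_close(trace):
    """Return the set of file ids left open at the end of the trace.

    trace: iterable of (event_name, file_id) pairs, e.g. ("open", "f1").
    """
    state = defaultdict(lambda: "CLOSED")  # state of each trace slice
    for event, f in trace:
        if event == "open":
            state[f] = "OPEN"
        elif event == "close":
            state[f] = "CLOSED"
    # A slice still in state OPEN violates "eventually closed".
    return {f for f, s in state.items() if s == "OPEN"}

trace = [("open", "f1"), ("open", "f2"), ("close", "f1")]
print(monitor_open_close(trace))  # {'f2'}: f2 was never closed
```

A rule-based formulation of the same property would instead keep a fact base of `Open(f)` facts, retracting `Open(f)` on a `close(f)` event and reporting any facts remaining at end of trace; the Rete algorithm makes the repeated matching of such rules against the fact base efficient.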


Keywords: Model Checking · Temporal Logic · Linear Temporal Logic · Execution Trace · Rule Systems





Copyright information

© Springer Fachmedien Wiesbaden 2015

Authors and Affiliations

  1. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA
