A Proposal of an Example and Experiments Repository to Foster Industrial Adoption of Formal Methods

  • Rupert Schlick
  • Michael Felderer
  • Istvan Majzik
  • Roberto Nardone
  • Alexander Raschke
  • Colin Snook
  • Valeria Vittorini
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11247)

Abstract

Formal methods (in a broad sense) have been around almost since the beginning of computer science. Nonetheless, there is a perception in the formal methods community that industrial take-up is low considering the potential benefits. We examine possible reasons and offer candidate explanations for this effect. To address the issue, we propose a repository of industry-relevant example problems with accompanying open data storage for experiment results, in order to document, disseminate and compare exemplary solutions from formal model-based methods. This would allow potential users from industry to better understand the available solutions and to more easily select and adopt a formal method that fits their needs. At the same time, it would foster the adoption of open data and good scientific practice in this research field.

Keywords

Formal models · Formal methods · Benchmarks · Industrial adoption

Notes

Acknowledgments

The authors thank Wolfgang Herzner for his valuable feedback on the content of this paper.

The work of Istvan Majzik was supported by the BME - Artificial Intelligence FIKP grant of EMMI (BME FIKP-MI/SC). The work of Roberto Nardone and Valeria Vittorini has been partially supported by MIUR within the GAUSS project (CUP E52F16002700001) of the PRIN 2015 program, and by DIETI within the project MODAL (MOdel-Driven AnaLysis of Critical Industrial Systems). The work of Rupert Schlick has received funding from the EU (program H2020) and national Austrian funding from BMVIT (program ICT of the future) in the ECSEL project AutoDrive (Grant No. 737469).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Rupert Schlick (1)
  • Michael Felderer (2, 3)
  • Istvan Majzik (4)
  • Roberto Nardone (5)
  • Alexander Raschke (6)
  • Colin Snook (7)
  • Valeria Vittorini (5)

  1. Center for Digital Safety and Security, AIT Austrian Institute of Technology GmbH, Vienna, Austria
  2. Department of Computer Science, University of Innsbruck, Innsbruck, Austria
  3. Department of Software Engineering, Blekinge Institute of Technology, Karlskrona, Sweden
  4. Department of Measurement and Information Systems, Budapest University of Technology and Economics, Budapest, Hungary
  5. Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, Naples, Italy
  6. Institute of Software Engineering and Programming Languages, Ulm University, Ulm, Germany
  7. Electronics and Computer Science, University of Southampton, Southampton, UK