Risk Assessment and Security Testing of Large Scale Networked Systems with RACOMAT

  • Johannes Viehmann
  • Frank Werner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9488)


Risk management is an important part of software quality management because security issues can result in large economic losses and, even worse, legal consequences. While risk assessment, as the basis for any risk treatment, is widely regarded as important, performing a risk assessment remains a challenge, especially for complex, large-scale networked systems. This paper presents an ongoing case study in which such a system is assessed. To deal with the challenges arising from that case study, the RACOMAT method and the RACOMAT tool for compositional risk assessment, closely combined with security testing and incident simulation, have been developed with the goal of reaching a new level of automation in risk assessment.


Risk assessment · Security testing · Incident simulation



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Fraunhofer FOKUS, Berlin, Germany
  2. Software AG, Darmstadt, Germany
