Upper Bounds for Adversaries’ Utility in Attack Trees
Attack trees model the decision-making process of an adversary who plans to attack a given system. Attack trees help to visualize possible attacks as Boolean combinations of atomic attacks and to compute attack-related parameters such as cost, success probability, and likelihood. The known methods for estimating adversaries' utility have high complexity and impose many unnatural restrictions on adversaries' behavior. Hence, their estimates can be misleading: even if the computed utility is negative, there may still exist beneficial ways of attacking the system. To avoid unnatural restrictions, we study fully adaptive adversaries that are allowed to try atomic attacks in arbitrary order, depending on the results of the previous trials. At the same time, we want the algorithms to be efficient. To achieve both goals, we do not try to compute the exact utility of adversaries but only upper bounds on it. If adversaries' utility has a negative upper bound, it is safe to conclude that there are no beneficial ways of attacking the system, assuming that all reasonable atomic attacks are captured by the attack tree.
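To make the setting concrete, the following sketch shows one crude, conservative upper bound for an AND/OR attack tree, assuming independent atomic attacks. The `Leaf`, `Node`, `success_prob`, and `utility_upper_bound` names are illustrative, not the paper's algorithm: it bounds utility by the success probability of an adversary who tries every atomic attack, times the prize minus the cost of the cheapest satisfying set (any successful adaptive adversary must pay at least that much).

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:
    """Atomic attack with an execution cost and a success probability."""
    cost: float
    prob: float

@dataclass
class Node:
    op: str                      # "AND" or "OR"
    children: List["Tree"]

Tree = Union[Leaf, Node]

def success_prob(t: Tree) -> float:
    """Probability that the root succeeds if every atomic attack is tried,
    assuming the atomic attacks succeed independently."""
    if isinstance(t, Leaf):
        return t.prob
    ps = [success_prob(c) for c in t.children]
    if t.op == "AND":
        result = 1.0
        for p in ps:
            result *= p
        return result
    # OR: the node succeeds unless every child fails.
    fail = 1.0
    for p in ps:
        fail *= (1.0 - p)
    return 1.0 - fail

def min_cost(t: Tree) -> float:
    """Cost of the cheapest set of atomic attacks that satisfies the tree."""
    if isinstance(t, Leaf):
        return t.cost
    cs = [min_cost(c) for c in t.children]
    return sum(cs) if t.op == "AND" else min(cs)

def utility_upper_bound(t: Tree, prize: float) -> float:
    """Crude upper bound on any (even fully adaptive) adversary's expected
    utility: success probability is at most success_prob(t), and on every
    successful run the adversary pays at least min_cost(t)."""
    return success_prob(t) * (prize - min_cost(t))
```

If this bound is already negative, no strategy under the stated assumptions is beneficial; the paper's contribution is sharper bounds that remain efficiently computable.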
Keywords: Boolean function, success probability, Boolean formula, adaptive model, disjunctive normal form