Abstract
I want to start with some thoughts that popped into my mind after Babak’s talk.
The obvious approach – which I am going to claim is not the right one – is shown in Figure 1. In your model of the system, you have the set of all the possible states that you think it could get into, based on what people can actually do. Within those you have a subset of permissible states: the ones you are happy for the system to be in, and the auditors and the shareholders are happy with too. Then there are these other states that you might get into but don’t want to be in. You have an outer protection perimeter round the possible states – you’re saying the system can’t get outside that – but you might get outside the inner perimeter round the permissible states, and you want to protect yourself when you do; for that you have some sort of audit function.
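To make the picture concrete, here is a minimal sketch of that model in Python. The state names, the two sets, and the `audit` function are purely illustrative assumptions, not anything from the paper: the possible states form the outer perimeter, the permissible states the inner one, and the audit function simply reports whether the current state has strayed between the two.

```python
# Illustrative sketch only: state names and sets are hypothetical.
POSSIBLE = {"ok", "frozen", "overdrawn", "corrupt"}   # outer perimeter
PERMISSIBLE = {"ok", "frozen"}                        # inner perimeter

# The inner perimeter must lie within the outer one.
assert PERMISSIBLE <= POSSIBLE

def audit(state):
    """Return True if the state is permissible; raise if the model
    itself is violated (state outside the outer perimeter)."""
    if state not in POSSIBLE:
        raise RuntimeError("outside outer perimeter: the model is wrong")
    return state in PERMISSIBLE

print(audit("ok"))         # a permissible state -> True
print(audit("overdrawn"))  # possible but not permissible -> False
```

The point of the figure, on this reading, is that protection mechanisms keep you inside `POSSIBLE`, while the audit function deals with the gap between `POSSIBLE` and `PERMISSIBLE`.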
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Christianson, B. (2000). Auditing against Impossible Abstractions. In: Christianson, B., Crispo, B., Malcolm, J.A., Roe, M. (eds) Security Protocols. Security Protocols 1999. Lecture Notes in Computer Science, vol 1796. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10720107_8
Print ISBN: 978-3-540-67381-1
Online ISBN: 978-3-540-45570-7