Confining Adversary Actions via Measurement

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9987)


Systems designed with measurement and attestation in mind are often layered, with the lower layers measuring the layers above them. Attestations of such systems must report the results of a diverse set of application-specific measurements of various parts of the system. There is a pervasive intuition that measuring the system “bottom-up” (i.e. measuring lower layers before the layers above them) is more robust than other orders of measurement. This is the core idea behind trusted boot processes. In this paper we justify this intuition by characterizing the adversary actions required to escape detection by bottom-up measurement. In support of that goal, we introduce a formal framework with a natural and intuitive graphical representation for reasoning about layered measurement systems.
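The "bottom-up" intuition described above can be illustrated with a hypothetical trusted-boot-style measurement chain (this is an informal sketch, not the paper's formal framework): each layer measures the layer above it before handing off control, extending an accumulator much as a TPM PCR is extended. An adversary who corrupts a layer after it has been measured cannot escape detection without also subverting the measurements below it.

```python
import hashlib

def extend(accumulator: bytes, measurement: bytes) -> bytes:
    """PCR-style extend: the new value binds the old value to the new measurement."""
    return hashlib.sha256(accumulator + measurement).digest()

def measure_bottom_up(layers):
    """Measure layers lowest-first, returning the final accumulator value."""
    acc = b"\x00" * 32  # initial accumulator, as in a reset PCR
    for layer in layers:
        acc = extend(acc, hashlib.sha256(layer).digest())
    return acc

# Hypothetical layered system, lowest layer first.
boot_chain = [b"firmware", b"bootloader", b"kernel", b"vmm", b"guest-vm"]
good = measure_bottom_up(boot_chain)

# Corrupting any layer changes every subsequent accumulator value,
# so the final attestation evidence no longer matches the expected value.
tampered = measure_bottom_up(
    [b"firmware", b"bootloader", b"rootkit-kernel", b"vmm", b"guest-vm"])
assert good != tampered
```

Because each extend step depends on all earlier measurements, the accumulator also records the *order* of measurement, which is exactly what makes bottom-up ordering meaningful to a remote appraiser.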


Keywords: Virtual machine · Adversary action · Layered system · Trusted Platform Module · Virtual machine monitor



I would like to thank Pete Loscocco for suggesting and guiding the direction of this research. Many thanks also to Perry Alexander and Joshua Guttman; their feedback during the formation of these ideas was invaluable. Thanks also to Sarah Helble and Aaron Pendergrass for lively discussions about implementations of measurement and attestation systems. Finally, I would like to thank the anonymous reviewers as well as the GraMSec participants for their insightful comments and suggestions for improving the paper.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. The MITRE Corporation, Bedford, USA
