
Preserving Privacy versus Data Retention

  • Markus Hinkelmann
  • Andreas Jakoby
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5532)

Abstract

The retention of communication data has recently attracted much public interest, mostly because of the possibility of its misuse. In this paper, we present protocols that address the privacy concerns of the communication partners. Our data retention protocols store streams of encrypted data items, some of which may be flagged as critical (representing misbehavior). The frequent occurrence of critical data items justifies the self-decryption of all recently stored data items, critical or not. Our first protocol allows the party gathering the retained data to decrypt all data items collected within, say, the last half year whenever the number of critical data items reaches some threshold within, say, the last month. The protocol ensures that the senders of data remain anonymous but may reveal that different critical data items came from the same sender. Our second, computationally more complex scheme obscures this affiliation of critical data with high probability.
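
The abstract only outlines the mechanism; the threshold behaviour it describes can be illustrated with a Shamir-style secret-sharing sketch: the retaining party keeps every item encrypted under a per-window key, and each critical item additionally deposits one share of that key, so the key (and with it all items retained in the recent window) becomes reconstructible exactly when the number of critical items reaches the threshold. The sketch below is purely illustrative; the functions make_shares and reconstruct, the prime modulus, and the window and threshold parameters are assumptions made for exposition, not the authors' construction.

    # Illustrative sketch only: threshold-triggered "self-decryption" via Shamir-style
    # secret sharing. All names and parameters are assumptions, not the paper's scheme.
    import secrets

    PRIME = 2**127 - 1  # Mersenne prime; the field must exceed the key space


    def make_shares(secret: int, threshold: int, count: int) -> list[tuple[int, int]]:
        """Split `secret` into `count` shares; any `threshold` of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        shares = []
        for x in range(1, count + 1):
            y = 0
            for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares


    def reconstruct(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at 0 recovers the secret from `threshold` shares."""
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret


    # Hypothetical usage: one key per retention window encrypts all retained items;
    # every *critical* item carries one extra share of that key, so the retaining
    # party can decrypt the window only after `threshold` critical items appeared.
    window_key = secrets.randbelow(PRIME)
    shares = make_shares(window_key, threshold=3, count=10)
    assert reconstruct(shares[:3]) == window_key  # threshold reached: key recovers

With fewer than threshold shares the window key remains information-theoretically hidden, which mirrors the abstract's requirement that decryption be justified only by sufficiently frequent critical activity.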

Keywords

Data Item · Data Retention · Security Parameter · Critical Activity · Pseudorandom Number Generator



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Markus Hinkelmann (1)
  • Andreas Jakoby (1)
  1. Institut für Theoretische Informatik, Universität zu Lübeck, Germany
