
A Survey on Deep Learning Techniques for Privacy-Preserving

  • Harry Chandra Tanuwidjaja
  • Rakyong Choi
  • Kwangjo Kim
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11806)

Abstract

There are challenges and issues when a machine learning algorithm needs to access highly sensitive data for training. To address these issues, several privacy-preserving deep learning techniques have been developed, including Secure Multi-Party Computation and Homomorphic Encryption applied to neural networks. There are also several methods that modify a neural network so that it can operate in a privacy-preserving environment. However, the various techniques involve a trade-off between privacy and performance. In this paper, we discuss the state of the art of privacy-preserving deep learning, evaluate the methods, compare the pros and cons of each approach, and address the challenges and open issues in the field of privacy-preserving deep learning.
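As a concrete illustration of the Secure Multi-Party Computation idea mentioned above, the following minimal Python sketch (not taken from the paper; the modulus and input values are illustrative assumptions) shows two-party additive secret sharing, a primitive that many SMC-based systems build on: each party holds a random-looking share of every private value, computes on its shares locally, and only the final aggregate is reconstructed.

# Minimal sketch of two-party additive secret sharing (illustrative only).
import secrets

MODULUS = 2 ** 64  # all arithmetic takes place in the ring of integers mod 2^64


def share(value):
    """Split `value` into two additive shares that individually reveal nothing."""
    share0 = secrets.randbelow(MODULUS)
    share1 = (value - share0) % MODULUS
    return share0, share1


def reconstruct(share0, share1):
    """Recombine two shares to recover the shared value."""
    return (share0 + share1) % MODULUS


# Each party sums the shares it holds locally; only the aggregate is revealed.
private_inputs = [42, 17, 99]  # hypothetical sensitive training values
shares = [share(v) for v in private_inputs]
party0_total = sum(s0 for s0, _ in shares) % MODULUS
party1_total = sum(s1 for _, s1 in shares) % MODULUS
assert reconstruct(party0_total, party1_total) == sum(private_inputs)

Because addition commutes with sharing, the parties can jointly compute sums (and, with extra protocol steps, products) over private training data without either party seeing the other's inputs, which is the property the SMC-based approaches surveyed in this paper exploit.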

Keywords

Secure Multi-Party Computation · Homomorphic Encryption · Trade-off · Privacy-Preserving Deep Learning

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Harry Chandra Tanuwidjaja (1)
  • Rakyong Choi (1)
  • Kwangjo Kim (1)
  1. School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
