Perturbation Paradigms of Maintaining Privacy-Preserving Monotonicity for Differential Privacy
To preserve confidential information in numeric and character data, corresponding differential privacy mechanisms have been proposed. However, existing work lacks a uniform evaluation criterion for these mechanisms because the underlying data types differ. In this paper, we propose the privacy-preserving monotonicity principle as an evaluation criterion for differential privacy mechanisms. First, we summarize three perturbation paradigms in existing work: linear perturbation, non-linear perturbation, and randomized perturbation. Second, for numeric and character data respectively, we propose privacy-preserving monotonicity principles for differential privacy based on computational indistinguishability. Finally, by analyzing the privacy-preserving monotonicity of existing perturbation methods within each paradigm, we present constrained perturbation paradigms for numeric and character data that achieve privacy-preserving monotonicity. Our privacy-preserving monotonicity principle thus captures the tradeoff between privacy and utility and can serve as an evaluation criterion for differential privacy mechanisms. Furthermore, we show that constrained perturbation paradigms maintaining privacy-preserving monotonicity provide a useful guideline for the development of differential privacy.
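The two data types mentioned above are typically handled by different mechanisms: numeric queries by additive noise (e.g., the Laplace mechanism, a linear perturbation) and character/categorical outputs by randomized selection (e.g., the exponential mechanism). The sketch below illustrates both; it is a minimal illustration of these standard mechanisms, not code from the paper, and all function names are ours.

```python
import numpy as np


def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Linear perturbation for numeric data: add Laplace noise
    with scale sensitivity/epsilon to satisfy epsilon-DP."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)


def exponential_mechanism(candidates, score_fn, sensitivity, epsilon, rng=None):
    """Randomized perturbation for character/categorical data:
    sample a candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity))."""
    rng = rng or np.random.default_rng()
    scores = np.array([score_fn(c) for c in candidates], dtype=float)
    # Subtract the max score for numerical stability before exponentiating.
    weights = np.exp(epsilon * (scores - scores.max()) / (2.0 * sensitivity))
    probs = weights / weights.sum()
    idx = rng.choice(len(candidates), p=probs)
    return candidates[idx]
```

Intuitively, privacy-preserving monotonicity asks that as epsilon grows (weaker privacy), the perturbed output becomes statistically closer to the true answer; in this sketch, a larger epsilon shrinks the Laplace noise scale and concentrates the exponential mechanism's probability mass on high-scoring candidates.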
Keywords: Computational indistinguishability · Differential privacy · Perturbation paradigms · Privacy metrics · Privacy-preserving monotonicity
This work was supported by the National Natural Science Foundation of China under Grants No. 61173190, No. 61602290, and No. 61662009; the Natural Science Basic Research Program of Shaanxi Province under Grant No. 2017JQ6038; the Fundamental Research Funds for the Central Universities under Grants No. GK201704016, No. 2016CBY004, No. GK201603093, No. GK201501008, and No. GK201402004; the Program of Key Science and Technology Innovation Team in Shaanxi Province under Grant No. 2014KTC-18; and the Open Project Fund of the Guizhou Provincial Key Laboratory of Public Big Data under Grant No. 2017BDKFJJ026.