Privacy and Ethical Challenges in Big Data

  • Sébastien Gambs
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11358)

Abstract

The advent of Big Data, coupled with the profiling of users, has led to the development of services and decision-making processes that are highly personalized but that also raise fundamental privacy and ethical issues. In particular, the absence of transparency has led to a loss of control by individuals over the collection and use of their personal information, while making it impossible for an individual to question a decision taken by an algorithm or to hold the algorithm accountable for it. Nonetheless, transparency is only a prerequisite for analyzing the possible biases that personalized algorithms could have (e.g., discriminating against a particular group in the population) and then potentially correcting them. In this position paper, I review in a non-exhaustive manner some of the main privacy and ethical challenges associated with Big Data that have emerged in recent years, before highlighting a few approaches currently being investigated to address these challenges.
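To make the notion of bias analysis mentioned above concrete, the sketch below computes one simple fairness measure, the demographic parity gap: the difference in favorable-decision rates between two population groups. This is an illustrative example, not a method from the paper; the function name and the decision/group data are hypothetical.

```python
# Illustrative sketch (assumption: binary decisions, two groups).
# Measures demographic parity: do both groups receive favorable
# decisions (1) at similar rates?

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical decisions (1 = favorable) for individuals in groups "A" and "B".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A is favored at rate 0.75, group B at rate 0.25, so the gap is 0.5.
print(demographic_parity_gap(decisions, groups))
```

A gap of zero would indicate that group membership is statistically independent of the decision; in practice, many competing fairness definitions exist, and they cannot all be satisfied simultaneously.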

Keywords

Big Data · Privacy · Transparency · Interpretability · Fairness

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Université du Québec à Montréal (UQAM), Montreal, Canada