
In Praise of an Empowerment Disclosure Regulatory Approach to Algorithms

Fabiana Di Porto
Editorial

2018 has been the year of awakening for the supporters of the free flow of data (i.e. the proposal for a Regulation on a framework for the free flow of non-personal data in the European Union of 13 September 2017). The Russiagate and Cambridge Analytica chronicles have shown the world how fragile and problematic privately managed data protection and data accumulation by a small set of big firms can be.

In times of sudden change, some issues at the interface between competition law and data protection, which had been intensely debated and seemed largely settled in the doctrinal literature, have returned with urgency. The re-combination of personal and non-personal data acquired through the web by dominant digital firms is one of those issues. Re-combination means that data belonging to an individual are collected to form one or several (behavioural) profiles and, most importantly, that these profiles are commercially exploited to sell “targeted” products or services and/or possibly resold to other companies. Thus, although it may create economies of scale and scope, re-combination also works as leverage for big data firms’ (especially platforms’) market power, by allowing its accumulation and further exploitation. I will deal with those shortcomings and suggest possible solutions.

The European Parliament, in its Resolution of 14 March 2017, was prophetic in emphasising that, thanks to the algorithms of big data analytics, “sensitive information about persons can be inferred from non-sensitive data, which blurs the line between sensitive and non-sensitive data” (pt. 3). In other words, technical guarantees such as the anonymisation of personal data (foreseen in the General Data Protection Regulation, hereinafter GDPR, which will apply from 25 May 2018) may be nullified, given that non-personal data fragments, once re-combined through algorithms, increase the possibility of re-identification and therefore of profiling. Thus, the above-mentioned proposed Regulation on the free flow of data, by seeking the broadest liberalisation of the massive accumulation and processing of non-personal data by EU and foreign firms, would find only limited constraints in the GDPR and would therefore contribute to magnifying market power further.

Strictly connected is the well-documented limited awareness and understanding that consumers (and citizens more broadly) tend to have of both the data they transmit to platforms (and the various legal and social implications) and their rights of access to and control over the use of their data, including its processing. This is due to the incidence of well-known cognitive biases and heuristics, such as information overload, the accumulation problem, inertia and overconfidence, that affect individual decision-making.1 Although consumers tend to consider online privacy a high priority, evidence shows that they spend very little time reading privacy statements before downloading an “app” (the so-called “no-reading” bias). That, in turn, demonstrates that knowledge of the rights stemming from the GDPR (such as access, control and data portability) is far from widespread.

It is also commonplace to say that, online, “data” is the currency and that competition rules are therefore unfit to capture market power, even when it amounts to data super-dominance. However, online, even more than data, “profiles” are of value, and those can be obtained by leveraging other biases, such as the “free meal” heuristic or the “take-it-or-leave-it” offer.2 That, in turn, leads to what has been called “asymmetric collusion”3 between consumers and big data companies, inducing the former to accept highly granular and continuous surveillance in exchange for free personalised services (a short-term benefit) – with the long-run effect of eroding their capacity for self-determination. In other words, market power is acquired or increased by leveraging (exploiting) the mentioned biases, users’ unawareness of their rights of access to and control over their data, or both.

A further aspect concerns the intersection between the exercise of economic power and individual autonomy in digital markets. The advent of the fourth industrial revolution, indeed, worsens the problem of information asymmetry on the consumer’s side, thus reducing the extent of her contractual autonomy.4 By collecting a consumer’s digital “traces”, companies using big data analytics may anticipate much of her contractual will (e.g. willingness to pay, the acceptability of quality, etc.). Examples can be observed in targeted algorithmic prices, rankings, reviews and search results. Whenever firms are allowed to manipulate the algorithmic information they produce and distribute to consumers, the ultimate effect is to weaken the validity and genuineness of consumers’ choices on the markets. Whenever the ability to algorithmically condition the preferences and beliefs of consumers is exercised by super-dominant “tech giants” (the so-called GAFA(M)5), then what is at stake (and at risk) is economic democracy.

The responses provided by law and regulation are several. From a pro-competitive perspective, they range from severe divestiture of conglomerate digital titans – a plea that has timidly risen after the “datagates” – or heavy sanctioning (as happened in both the EC 2017 Google abuse case and the Facebook/WhatsApp post-merger case), to light-touch, evergreen, principle-based regulation and self-regulation (as set forth in the proposed data free flow Regulation, see e.g. Recital 7). From the privacy perspective, the rules in the GDPR also apply to extra-EU firms (that profile European individuals), whose home privacy standards may be looser than in the EU. Its norms are enforceable through sanctions, which may amount to a substantial – although proportionate – part of the infringer’s turnover (although still a bearable sum for those “titans” whose turnover is often higher than the GDP of some Member States).

The impression is that data accumulation through exploitation might require further efforts in two main directions: (i) empowerment, so as to restore the validity of consumers’ consent and autonomous choices; and (ii) increased transparency of algorithmic decisions.

(i) Informational cognitive empowerment (and nudge). On 16 February 2018, the Belgian Court of First Instance of Brussels condemned Facebook for infringing Belgian cookie and privacy laws. The judgment, inter alia, objected that the company had not informed its users (and non-users) “in a clear, concise and intelligible manner” about its activities (i.e. processing the personal data of users and non-users for tracking purposes). What the case made clear is that the infringement of data protection rules may quickly work as a multiplier for the market power of big data firms (a contention also made by the German Bundeskartellamt – although on different legal grounds – concerning Facebook in its preliminary assessment of 19 December 2017).

Disclosure regulation could, therefore, be a valuable answer. However, that would be the case only if disclosures on the profiling and reselling of personal data (and on other rights, such as data portability, access, etc.) are designed in a cognitive-based fashion6 to overcome biases (see Fig. 1) – that is, in a way that is salient and simplified enough to be quickly understood, so as to increase awareness of both the “if” and the “how much” of individual consent. Although more contested, informational nudge disclosures could also be used to overcome emotional responses and web-based addictions. Alternatively, or in addition, users could be allowed to choose not to release their personal data (e.g. opt-ins would be more effective than opt-outs in this regard) and instead to pay for the service or app they wish to download. Educational campaigns to raise awareness of privacy and data protection, especially among teenagers, should also be a priority for consumer, competition and data protection agencies alike.
Fig. 1 Ladder of disclosure intervention (based on proportionality)

(ii) Increased transparency. Greater transparency should also characterise the way algorithmic decisions affecting individual commercial choices are taken. The GDPR does contain some important rules that should be strengthened using a cognitive-based approach, such as data portability and the right to obtain human intervention in algorithmic decisions. However, as made clear by the Article 29 Working Party’s Guidelines on “Automated individual decision-making and profiling” of 6 February 2018, algorithms are subject to bias and “can result in assessments based on imprecise projections, [and] impact negatively on individuals” (p. 27). Therefore, it is important to “carry out frequent assessments on the data sets … to check for any bias, and develop ways to address any prejudicial elements, including any over-reliance on correlations”. Those checks and audits require “regular reviews of the accuracy and relevance of automated decision-making including profiling … not only at the design stage but also continuously, as the profiling is applied to individuals” (p. 28).

There are two strategies that could be adopted to empower consumers in this domain. The first is to implement “data vaults” as stores for personal big data (a proposal that the European Data Protection Supervisor made in the renowned Opinion No. 7/2015). For instance, consumers could be provided with their Internet use data through data portability (something that, for instance, Google already allows its users to do), to be reused in apps that compare commercial offers via algorithms (the so-called “make it easy” strategy). This would give consumers the benefit of their own consumption history, which they could choose how to reuse. The second is personalised disclosures. The message (or disclosure) to be shown would be selected via a co-regulatory process involving industry, the regulator and consumers. Here algorithms – following the afore-mentioned Guidelines – are tested regularly to avoid biases and discrimination. Personalisation of messages would ensure higher chances of reading and understanding, while laboratory pre-testing of algorithms would prevent exploitation, abuses and discrimination. It would also ensure the regular updating of disclosures. Finally, such a co-regulatory scheme, in times of disrupted confidence, could also help rebuild a climate of trust and reputation.

Footnotes

  1. Tversky and Kahneman (1974); Ben-Shahar and Schneider (2014); Di Porto and Rangone (2015).

  2. Sugden, Wang and Zizzo (2015).

  3. Dow Schüll (2012).

  4. Di Porto (2017).

  5. Google, Apple, Facebook, Amazon and Microsoft.

  6. Sunstein (2014).

References

  1. Article 29 Data Protection Working Party, Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, 17/EN WP251rev01, 6 February 2018, http://ec.europa.eu/newsroom/article29/document.cfm?doc_id=49826
  2. Ben-Shahar O, Schneider CE (2014) More than you wanted to know: the failure of mandated disclosure. Princeton University Press, Princeton
  3. Bundeskartellamt, Preliminary assessment in Facebook proceeding: Facebook's collection and use of data from third-party sources is abusive, 19 December 2017, http://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2017/19_12_2017_Facebook.html
  4. Di Porto F, Rangone N (2015) Behavioural sciences in practice: lessons for EU rulemakers. In: Sibony AL, Alemanno A (eds) Nudge and the law. A European perspective. Hart Publishing, Oxford
  5. Di Porto F (2017) Disclosure regulation. The challenges of cognitive sciences and big data (La regolazione degli obblighi informativi. Le sfide delle scienze cognitive e dei big data). ES, Napoli (in Italian)
  6. Dow Schüll N (2012) Addiction by design. Princeton University Press, Princeton
  7. European Data Protection Supervisor (EDPS), Opinion No. 7/2015, Meeting the challenges of big data: a call for transparency, user control, data protection by design and accountability, 19 November 2015
  8. European Parliament, Resolution “Fundamental rights implications of big data”, P8_TA(2017)0076, 14 March 2017, http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2017-0076+0+DOC+XML+V0//EN
  9. Proposal for a Regulation of the European Parliament and of the Council on a framework for the free flow of non-personal data in the European Union, COM(2017) 495 final, 13 September 2017
  10. Sugden R, Wang M, Zizzo DJ (2015) Take it or leave it: experimental evidence on the effect of time-limited offers on consumer behavior. CBESS Discussion Paper 15-19, https://www.uea.ac.uk/documents/166500/0/CBESS+15-19.pdf/e62168a1-c908-4a37-9b13-ee1b5d852839

Copyright information

© Max Planck Institute for Innovation and Competition, Munich 2018

Authors and Affiliations

  1. University of Salento, Lecce, Italy
