Part of the book series: Law, Governance and Technology Series (ISDP, volume 32)

Abstract

In Kranzberg’s well-known sentence “[t]echnology is neither good nor bad; nor is it neutral” (Kranzberg 1986), the first part is a valuable warning against both anti- and pro-accounts of technology, while the second part is the most intriguing. If good and evil are not the terms in which to evaluate technology, we need to give a good deal of attention to non-neutrality as such. The point is all the more important since we deal with AmI with a focus on human technological existence rather than on “human as subject, technology as object”. In other words, if it is nonsense to take shelter in anti- or pro-technological discourses, which are concerned with the object (McStay 2014, 70), it is a good idea to look for something that makes sense of human-technology relations. One of the sub-questions mentioned in the introduction to this work was precisely “Does algorithmic governmentality provide an advantageous explanation for the issue of power through technology?” Here I hold algorithmic governmentality as a hypothesis, i.e., a tentative and still unspecified explanation of power through technology, and put it specifically in relation to surveillance theories. In the first place I will outline the signs the literature points to as confirmations of Kranzberg’s non-neutrality. This outline will be followed by a review of the “surveillance” explanation of technology and that of algorithmic governmentality, the idea being to maintain an account of power through technology that will pave the way for the continuity of our study. The final paragraphs of this chapter will consider the sub-question “Is the philosophical concept of virtuality useful for the promotion of freedoms?”


Notes

  1.

    I take liberties with philosophy in order to ground reflections that simply would not fit in a strictly legal approach, in which a distinction between subject and object is inevitable. In doing so we somehow follow a Heideggerian path, since for Heidegger the distinction between subject and object is problematic and the essence of technology is not technological: “we shall never experience our relationship to the essence of technology so long as we merely conceive and push forward the technological, put up with it, or evade it. Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it” (Heidegger 1977, 1).

  2.

    See Sect. 1.2.

  3.

    See for instance Reed’s references to “technology indifference”, “implementation neutrality” and “potential neutrality” (Reed 2007).

  4.

    I also remark that in this study I do not deal with “net neutrality”, loosely understood as the principle that Internet traffic should not be discriminated against. See Wu for an early reference to the principle (Wu 2003).

  5.

    The sound of inevitability is loud in descriptions of the future of ICTs. According to Google executives Schmidt and Cohen, people are supposed to live in an age of permanent memory, as “the option to delete data is largely an illusion”. Moreover, an ever-increasing exposure of our identities to third parties is certain, as “the potential for someone else to access, share or manipulate parts of our online identities will increase, particularly due to our reliance on cloud-based data storage” (Schmidt and Cohen 2013, 33 and 54).

  6.

    First, medicine has abandoned the idea that the ANS functions in an exclusively autonomous manner, since it is influenced by the central nervous system; see in this sense Tortora and Derrickson (Tortora and Derrickson 2009, 545). Second, the analogy treats the systems of the human organism as if they were disconnected from one another, which also makes no sense from a medical perspective; on this point see Jänig’s extensive study of the integrative function of the autonomic nervous system (Jänig 2006, 3). And third, IBM’s low-key metaphor does not pass the common-sense test of self-awareness; as Hildebrandt remarks, we “not only have an autonomous nervous system but we actually are our autonomous nervous system” (Hildebrandt 2011, 142).

  7.

    This connects to what Chamayou points out with respect to the invisible manner through which power operates. Power, he says «est précisément partout où il travaille très activement à se faire oublier […] Tout un affairement subjectif, avec des investissements énormes, pour brouiller les pistes, effacer les traces, escamoter tout sujet repérable de l’action, afin de travestir celle-ci en pur fonctionnement, une sorte de phénomène naturel […]» [is precisely everywhere it works very actively to be forgotten […] A whole subjective occupation, with huge investments to cover tracks, erase traces, make disappear any recognizable subject of the action, in order to disguise it in pure operation, a sort of natural phenomenon […]] (Chamayou 2013).

  8.

    Petrović defines reification as “[t]he act (or result of the act) of transforming human properties, relations and actions into properties, relations and actions of man-produced things which have become independent (and which are imagined as originally independent) of man and govern his life” (Petrović 1983).

  9.

    Not to mention other relevant issues such as those related to the life trajectory of its inhabitants, to the public debate about the infrastructure design and urbanism or to the general impact of smart cities on the environment.

  10.

    As Rubinstein et al. observe about data mining: “there is human intervention in data mining even before the first automated search is run; humans will write the software, shape the database parameters, and decide on the kinds of matches that count. And the task of data mining itself is guided by some degree of human interaction” (Rubinstein et al. 2008). See also Rodotá about human presence as a fundamental component of any legitimate decision-making (Rodotá 2011, 191).

  11.

    “[…] I, throughout this book, have made a very basic mistake. Code is not law, any more than the design of an airplane is law. Code does not regulate, any more than buildings regulate. Code is not public, any more than a television is public. Being able to debate and decide is an opportunity we require of public regulation, not of private action” (Lessig 2006, 324).

  12.

    As Hildebrandt observes, “[s]uch a vision of law and technology would boil down to legal and technological instrumentalism (and neutralism), having no regard for the values incorporated into specific legal and technological devices. Legal instrumentalism cannot conceptualise the legal architecture of democracy and rule of law that safeguards a particular set of checks and balances between citizens, state and civil society. In a constitutional democracy law is not just instrumental for achieving policy goals, as it should always be instrumental for the protection of citizens against the state as well” (Hildebrandt 2008, 178).

  13.

    See Sect. 6.3.

  14.

    “From Sense and Respond to Predict and Act” is the tagline of an IBM solution for public safety. Predictive analytics is used to “anticipate what types of intervention will be needed, and where […] Front-line personnel don’t need to understand the technology to benefit from the results of predictive analytics. From a browser, they can access predictive information in a form that’s easy to understand and use […]” (IBM Corporation 2010). According to a representative of the British city of Lancaster, one of IBM’s clients: “[i]f we can start to get crime report data in closer to real time, we can start providing weekly, daily, or shift level crime forecasts. Thinking back to the weather forecast comparison […] monthly crime forecasts are useful for high-level planning, but we’d like to get to a stage where we can make decisions on law enforcement deployment right down to the level of individual shifts” (IBM Corporation 2011). While the politics of fear are a commonplace in public safety issues, it is reasonable to expect predictive and preemptive actions to be successful. As Kerr points out, “where anyone can be the bad man – there is a heightened interest in preemptive predictions” (Kerr 2013).

  15.

    See Sect. 2.2.2.

  16.

    Dataveillance involves the individual through (a) record integration, whereby organizations bring together all the data they hold about a person, identifying the unique person behind files with different names, for instance married and maiden names, or synchronizing name databases with addresses; (b) front-end verification, which involves the collection of data from other personal systems in order to facilitate transactions; and (c) front-end audit, which uses the occasion of an exceptional transaction to investigate new matters relating to the person, for example when someone is stopped for a traffic offense and officers initiate on-line inquiries (Clarke 1988).
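
    Clarke’s “record integration” step can be pictured with a minimal sketch. This is my own illustration, not Clarke’s: the records, field names and merging helper are all hypothetical, and a real system would be far more elaborate.

```python
# Hypothetical illustration of "record integration" in dataveillance:
# files held under different names (e.g. maiden and married names) are
# linked to the same person via a shared identifier and merged into one dossier.

records = [
    {"file": "A-101", "name": "Jane Smith",  "id": "123-45-6789", "address": "12 Elm St"},
    {"file": "B-774", "name": "Jane Miller", "id": "123-45-6789", "phone": "555-0101"},
    {"file": "C-009", "name": "John Doe",    "id": "987-65-4321", "address": "9 Oak Ave"},
]

def integrate(records, key="id"):
    """Group records by a shared identifier and merge them into one profile."""
    profiles = {}
    for rec in records:
        profile = profiles.setdefault(rec[key], {"names": set()})
        profile["names"].add(rec["name"])
        for field, value in rec.items():
            if field not in ("name", key):
                profile.setdefault(field, value)
    return profiles

for person_id, profile in integrate(records).items():
    print(person_id, profile)
# The dossier for 123-45-6789 now combines the address and the phone number
# collected under two different names.
```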

  17.

    It is enough to recall the spying capabilities of the US National Security Agency (NSA), recently brought to public attention.

  18.

    I thank Antoinette Rouvroy for suggesting these two last points.

  19.

    See Sect. 3.1.

  20.

    Note that the first US dataveillance system was meant to deal with fraud in the health system; see Sect. 3.2.

  21.

    I thank Antoinette Rouvroy for this insight and example.

  22.

    Knyrim and Trieb define and describe smart meters as: “[…] a new generation of advanced and intelligent metering devices which have the ability to record the energy consumption of a particular measuring point in intervals of fifteen minutes or even less. These so called ‘smart meters’ can also communicate and transfer the information recorded in real time or at least on a daily basis by means of any communications network to the utility company for purposes such as monitoring of the system load as well as for billing purposes (‘tele-metering’)” (Knyrim and Trieb 2011). Article 2(28) of Directive 2012/27/EU (hereafter “Energy Efficiency Directive”) defines “smart metering system” or “intelligent metering system” as “an electronic system that can measure energy consumption, providing more information than a conventional meter, and can transmit and receive data using a form of electronic communication”.
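
    To make the tele-metering scenario concrete, a minimal sketch follows. It is my own illustration, not drawn from Knyrim and Trieb or from the Directive; the class, the meter identifier and the figures are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Tuple

@dataclass
class SmartMeter:
    """Toy model of a smart meter: 15-minute readings, daily transmission."""
    meter_id: str
    readings: List[Tuple[datetime, float]] = field(default_factory=list)  # (timestamp, kWh)

    def record(self, timestamp: datetime, kwh: float) -> None:
        self.readings.append((timestamp, kwh))

    def daily_report(self) -> dict:
        """Data the meter would transmit to the utility for load monitoring and billing."""
        total = sum(kwh for _, kwh in self.readings)
        report = {"meter_id": self.meter_id,
                  "intervals": len(self.readings),
                  "total_kwh": round(total, 3)}
        self.readings.clear()
        return report

meter = SmartMeter("AT-000042")            # hypothetical meter identifier
start = datetime(2016, 1, 1)
for i in range(96):                        # 96 fifteen-minute intervals in one day
    meter.record(start + timedelta(minutes=15 * i), kwh=0.12)
print(meter.daily_report())
```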

  23.

    Similarly, McStay explores an example in the US, where the company Verizon filed an application for patent protection for a media system capable of “triggering tailored advertisements based on whether viewers are eating, playing, cuddling, laughing, singing, fighting, talking or gesturing in front of their sets” (McStay 2014, 81).

  24.

    Fortunately neither the mandatory introduction of the smart meter nor the serious sanctions were maintained in the bills.

  25.

    More precisely, I recall Van Otterlo’s definition: “any methodology or set of techniques that finds novel patterns and knowledge in data and generates models (i.e. profiles) that can be used for effective predictions about the data” (van Otterlo 2013).
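
    A toy sketch of what this definition covers may help; it is my own illustration with invented data, not van Otterlo’s: a pattern found in past records is turned into a model (a group profile) that is then used to predict an attribute of a new, never-seen record.

```python
from collections import defaultdict

# Invented "training data": past records with an observed outcome.
history = [
    {"age_band": "18-25", "postcode": "1000", "defaulted": True},
    {"age_band": "18-25", "postcode": "1000", "defaulted": True},
    {"age_band": "18-25", "postcode": "2000", "defaulted": False},
    {"age_band": "40-60", "postcode": "1000", "defaulted": False},
    {"age_band": "40-60", "postcode": "2000", "defaulted": False},
]

def build_profiles(records, keys=("age_band",)):
    """Find a pattern in the data: the default rate per group (the 'profile')."""
    counts = defaultdict(lambda: [0, 0])          # group -> [defaults, total]
    for rec in records:
        group = tuple(rec[k] for k in keys)
        counts[group][0] += int(rec["defaulted"])
        counts[group][1] += 1
    return {group: d / n for group, (d, n) in counts.items()}

def predict(profiles, record, keys=("age_band",)):
    """Apply the group profile to a new individual."""
    return profiles.get(tuple(record[k] for k in keys), 0.0)

profiles = build_profiles(history)
newcomer = {"age_band": "18-25", "postcode": "3000"}
print(predict(profiles, newcomer))  # ~0.67: the group profile, not the person, speaks
```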

  26.

    By 2004 the US federal government had already recognized the widespread use of data mining by its agencies. In a document referred to by Rouvroy, the United States General Accounting Office reports that “federal efforts cover a wide range of uses”, the top six being – from the most used to the least – “improving service or performance”, “detecting fraud, waste and abuse”, “analyzing scientific and research information”, “managing human resources”, “detecting criminal activities or patterns” and “analyzing intelligence and detecting terrorist activities” (United States General Accounting Office 2004).

  27.

    The immanent character of data behaviorism is also connected to a crisis of representation; after all, if “the data is sufficient”, there is nothing more to be represented. As pointed out by Rouvroy, «[…] nous n’avons plus rien à re-présenter, le “numérique” instaurant un régime d’actualité pure, absorbant dans le vortex du temps réel à la fois le passé et l’avenir, encore et déjà disponibles, sans restes, sous forme latent […] De même, nous n’aurions plus à faire rapport de nos activités, le rapport étant simultané à l’activité, cette dernière produisant d’elle-même les données qui servent à affiner le profil de performance, y compris les projections de nos performances futures, en temps réel» [We have nothing more to re-present, the “digital” establishing a regime of pure actuality, absorbing into the vortex of real time both the past and the future, still and already available, without remainder, in latent form […] Similarly, we would no longer have to report on our activities, the report being simultaneous with the activity, the latter itself producing the data used to refine the performance profile, including projections of our future performances, in real time] (Rouvroy 2014, 10).

  28.

    See Sect. 3.1.

  29.

    McStay makes a similar point in relation to the ahistorical character of data mining, characterized by “(1) a transparency of history in terms of chronology of what happened when; (2) the development of a flat history where both recent and distant past are equally readily recallable (an important tool for data miners); and (3) where history (in terms of memory), along with human beings, is industrialized by means of transparency and conversion into standing-reserve” (McStay 2014, 69).

  30.

    “Statistics investigates and develops specific methods for evaluating hypotheses in the light of empirical facts. A method is called statistical, and thus the subject of study in statistics, if it relates facts and hypotheses of a particular kind: the empirical facts must be codified and structured into data sets, and the hypotheses must be formulated in terms of probability distributions over possible data sets” (Romeijn 2014).
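
    As a textbook illustration of this formulation (mine, not Romeijn’s): a hypothesis about the bias of a coin just is a probability distribution over the possible data sets of n tosses.

```latex
% Illustrative example, not taken from Romeijn: the hypothesis H_\theta that a
% coin lands heads with probability \theta assigns to every possible data set D
% consisting of k heads in n tosses the probability
\[
  P(D \mid H_\theta) = \binom{n}{k}\,\theta^{k}(1-\theta)^{\,n-k},
\]
% so that competing hypotheses (values of \theta) can be evaluated in the light
% of the codified empirical facts, namely the observed k and n.
```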

  31.

    As did Anderson in announcing that the data deluge would bring about the “end of theory” (Anderson 2008). I thank Hildebrandt for calling my attention to the nuance referred to above. For a critique of “data as knowledge” in Big Data see Boyd and Crawford (Boyd and Crawford 2012). See also Hildebrandt, who describes the Big Data problem as “n = all”, meaning that the sample is taken for the entire population (Hildebrandt 2013b).

  32.

    As Van Otterlo remarks, machine learning is subject to the general problems of statistics related to knowledge representation – for instance the adequacy of the sample size. Biases are also to be taken into account, such as those concerning search and language. Moreover, models are judged on the average of what they predict, meaning that predictions for single individuals may be wrong. Finally, the feedback loop means that once knowledge is produced the following step is to do something with it; this action, however, can prompt people to change their behavior, and rebuilding the model becomes necessary (van Otterlo 2013, 56–58).
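
    Two of these caveats can be made tangible in a short sketch (my own illustration, with invented numbers): a model that is right on average may still be badly wrong for a given individual, and acting on the model changes the behaviour it was built on, forcing the profile to be rebuilt.

```python
# Invented data: 1 = repaid, 0 = defaulted, for one group of people.
group_outcomes = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]

# The "model": predict the group average for every member.
group_rate = sum(group_outcomes) / len(group_outcomes)          # 0.8
predictions = [group_rate] * len(group_outcomes)

# On average the model looks reasonable...
average_error = sum(abs(p - y) for p, y in zip(predictions, group_outcomes)) / len(group_outcomes)
print(f"average error: {average_error:.2f}")                    # 0.32

# ...but for the two individuals who defaulted it is badly wrong.
print(f"error for a defaulter: {abs(group_rate - 0):.2f}")      # 0.80

# Feedback loop: once decisions are taken on the profile (say, credit is
# tightened), behaviour shifts, the old profile no longer fits, and the
# model has to be rebuilt on the new data.
new_outcomes = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1]                   # behaviour after the intervention
rebuilt_rate = sum(new_outcomes) / len(new_outcomes)            # 0.6
print(f"old profile: {group_rate:.1f}, rebuilt profile: {rebuilt_rate:.1f}")
```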

  33.

    Austin defines performative sentences as those that indicate “that the issuing of the utterance is the performing of an action – it is not normally thought of as just saying something”. Performative utterances do not describe or report anything at all and they are neither true nor false. The uttering of the sentence “is, or is a part of, the doing of an action, which again would not normally be described as saying something [For example:] I do (sc. take this woman to be my lawful wedded wife) – as uttered in the course of the marriage ceremony […] ‘I name this ship the Queen Elizabeth’ – as uttered when smashing the bottle against the stem […]” (Austin 1962, 5–7).

  34.

    See Sect. 2.2.2.

  35.

    A point of clarification is necessary here. Though in early works Rouvroy and Berns referred to «gouvernementalité statistique» (“statistical governmentality”), in later works there is a shift to «gouvernementalité algorithmique» (“algorithmic governmentality”), which is explained by the fact that “algorithmic governmentality” departs from traditional statistics, as seen above. For the continuity of my work, I will refer several times to their early texts where «gouvernementalité statistique» is mentioned in footnotes, which does not prejudice the general understanding of algorithmic governmentality.

  36.

    «La mesure de toute chose est «dividuelle», à la fois infra- et supra-personnelle, rhizomatique, constituée d’une multitude de représentations numérisées, potentiellement contradictoires entre elles et en tout cas hétérogènes les unes aux autres» [The measure of all things is “dividual”, both infra- and supra-personal, rhizomatic, consisting of a multitude of digitized representations, potentially mutually contradictory and in any case heterogeneous to each other] (Rouvroy and Berns 2010, 94).

  37.

    Meaning that technology is meant to adapt to personal preferences, and that environments and objects are personalized. See Sect. 2.2.3.

  38.

    As observed by Rouvroy and Berns, algorithmic governmentality engages the debate around the question of individualization, such debate being divided between a positive hypothesis – meaning that individuals “win”, since personalized services improve the identification of individual needs – and the désubjectivation hypothesis, meaning that individuality loses, since individuals would become “diluted” in networks. While acknowledging the relevance of such a debate, it clearly exceeds my purposes here. With Rouvroy and Berns, I maintain that algorithmic governmentality, far from engaging with individualization in a positive or negative manner, is indifferent vis-à-vis the individual in the sense that it is rather concerned with the governance of our digital doubles (Rouvroy and Berns 2013).

  39.

    Creative futurists may provide some help here. For instance, for Tucker “[t]he big data present is one where companies use our data against us, to trick, coerce, and make inferences that benefit them at our expense. That behavior won’t change in the future, but with a better awareness of what’s going on and a willingness to experiment with the right tools we can make the fight a bit fairer […] You have all the information that you need to help you resist ever more coercive mobile messaging; you give it away to your phone all the time. The next step is to start using it, to become smarter about you. Imagine answering a push notification on your mobile device and seeing the following message: There is an 80 percent probability you will regret this purchase […]” (Tucker 2014, 127–128).

  40.

    Such a promise may resonate, says Rouvroy, in “a time where narratives have become more than ever suspicious due to the experienced difficulty, in a multicultural, globalized society […] [r]ather than understanding the biographical trajectory and exotic world view of their foreign neighbor just moving in next door, Mister and Miss Anybody are interested in knowing in advance what risk the newcomer represents for their safety and tranquility” (Rouvroy 2011, 126).

  41.

    Two other examples of virtualization are when software is designed to “perform the functions of a particular hardware platform or operating system” and when a “physical device such as a disk drive can be made to appear as several separate devices to the operating system” (Henderson 2009, 494).
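
    The second of Henderson’s examples, one physical device made to appear as several separate ones, can be pictured with a small sketch of my own; the class names and sizes are hypothetical, and no real virtualization layer works this simply.

```python
class PhysicalDisk:
    """One real device: a single block of storage."""
    def __init__(self, size_gb: int):
        self.size_gb = size_gb

class VirtualDisk:
    """What the operating system sees: an apparently independent drive."""
    def __init__(self, name: str, size_gb: int, backing: PhysicalDisk):
        self.name, self.size_gb, self._backing = name, size_gb, backing

def virtualize(disk: PhysicalDisk, sizes_gb):
    """Carve one physical device into several virtual ones."""
    if sum(sizes_gb) > disk.size_gb:
        raise ValueError("partitions exceed the physical capacity")
    return [VirtualDisk(f"vd{i}", size, disk) for i, size in enumerate(sizes_gb)]

physical = PhysicalDisk(size_gb=1000)
for vd in virtualize(physical, [250, 250, 500]):
    print(vd.name, vd.size_gb, "GB")   # three "separate" drives, one device
```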

  42.

    For a similar approach see Hildebrandt and Lévy’s commentaries on the relation between the possible and the real (Lévy 1998, 23–24; Hildebrandt 2013a, 224–225).

  43.

    See Nöel for an analysis of the different accounts of the “modern virtuals” taken by Quéau, Lévy and Granger (Nöel 2007).

  44.

    See in this sense Lévy who defines the virtual as something having potential rather than actual existence (Lévy 1998, 23–24).

  45.

    See Sect. 1.3.1.

  46.

    “[T]he virtual dimension of individual human personality, which is constitutive of subjectivity itself, is incompatible with the actualisation – through technological or other means – of a depoliticised, statistical governmental rationality indifferent to the causes of phenomena and chiefly oriented towards the annihilation of contingency” (Rouvroy 2011, 135).

  47.

    I thank Hildebrandt for her precious comments about the meaning of virtuality and the nuances of its relation with algorithmic governmentality.

  48.

    See in particular Sect. 3.2.

  49.

    See Sect. 3.3.1.

  50.

    See Sect. 3.2.

  51.

    Speaking of the impossibility of contesting the predictions of algorithmic governmentality, Rouvroy observes that «Le «gouvernement statistique» anticipe l’avenir, sans plus prêter attention à l’actuel, sauf en vue d’en prédire les débordements possibles. L’aura d’impartialité entourant la gestion statistique du «réel» pourrait donc bien enfoncer un peu plus dans l’invisibilité les injustices structurelles contemporaines, tout en mettant hors de portée du débat public les critères d’accès aux ressources et opportunités» [The “statistical government” anticipates the future without paying attention to the actual, except for the purpose of predicting its possible excesses. The aura of impartiality surrounding the statistical management of “reality” could thus well push contemporary structural injustices a little further into invisibility, while putting the criteria for access to resources and opportunities beyond the reach of public debate] (Rouvroy 2010, 15).

References

  • Anderson, Chris. 2008. ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete’. Wired. May 23. http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory.

  • Anker, M. 2006. ‘The Ethics of Uncertainty: Aporetic Openings’. Switzerland: European Graduate School.

  • Armstrong, Timothy K. 2006. ‘Digital Rights Management and the Process of Fair Use’. Harvard Journal of Law & Technology 20 (1). http://papers.ssrn.com/abstract=885371.

  • Austin, J. L. 1962. How to Do Things with Words. Cambridge: Harvard University Press.

  • Boyd, Danah, and Kate Crawford. 2012. ‘Critical Questions for Big Data’. Information, Communication & Society 15 (5): 662–79.

  • Chamayou, G. 2013. Théorie du drone. Paris: la Fabrique éd.

  • Clarke, R. 1988. ‘Information Technology and Dataveillance’. Communications of the ACM 31 (5): 498–512.

  • Cohen, J. E. 2012. Configuring the Networked Self. New Haven: Yale University Press.

  • Cuijpers, C., and B.-J. Koops. 2013. ‘Smart Metering and Privacy in Europe’. In European Data Protection Coming of Age. Dordrecht; New York: Springer.

  • Deleuze, G. 1988. Bergsonism. New York: Zone Books.

  • Deleuze, G. 1993. Différence et répétition. Paris: PUF.

  • Deleuze, G., and C. Parnet. 1977. Dialogues. Paris: Flammarion.

  • Gilliom, J. 2001. Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy. Chicago: University of Chicago Press.

  • Greenfield, A. 2006. Everyware – The Dawning Age of Ubiquitous Computing. Berkeley: New Riders.

  • Heidegger, M. 1977. ‘The Question Concerning Technology’. In The Question Concerning Technology, and Other Essays. New York: Harper & Row.

  • Henderson, Harry. 2009. Encyclopedia of Computer Science and Technology. New York, NY: Facts On File.

  • Hildebrandt, M. 2008. ‘A Vision of Ambient Law’. In Regulating Technologies, from Regulating Technologies, 175–91.

  • Hildebrandt, M. 2011. ‘Autonomic and Autonomous Thinking: Preconditions for Criminal Accountability’. In Law, Human Agency and Autonomic Computing. Routledge.

  • Hildebrandt, M. 2013a. ‘Profile Transparency by Design? : Re-Enabling Double Contingency’. In Privacy, Due Process and the Computational Turn : The Philosophy of Law Meets the Philosophy of Technology, 221–46.

  • Hildebrandt, M. 2013b. ‘Slaves to Big Data. Or Are We?’ IDP. Revista de Internet, Derecho Y Politica 16.

  • IBM Corporation. 2010. ‘Public Safety: From “Sense and Respond” to “Predict and Act”’. IBM Corporation.

  • IBM Corporation. 2011. ‘City of Lancaster Takes a Predictive Approach to Policing’. IBM Corporation.

  • Jänig, W. 2006. The Integrative Action of the Autonomic Nervous System: Neurobiology of Homeostasis. Cambridge University Press.

  • Kerr, I. 2013. ‘Prediction, Pre-Emption, Presumption: The Path of Law after the Computational Turn’. In Privacy, Due Process and the Computational Turn : The Philosophy of Law Meets the Philosophy of Technology, 91–120.

  • Knyrim, R., and G. Trieb. 2011. ‘Smart Metering under EU Data Protection Law’. International Data Privacy Law, March.

  • Kranzberg, M. 1986. ‘Technology and History: “Kranzberg’s Laws”’. Technology and Culture 27 (3): 544.

  • Lacroix, Dominique. 2013. Le blues du Net, par Bernard Stiegler. http://reseaux.blog.lemonde.fr/2013/09/29/blues-net-bernard-stiegler/.

  • Lessig, L. 2006. Code and Other Laws of Cyberspace: Version 2.0. New York: Basic Books.

  • Lévy, P. 1998. Becoming Virtual: Reality in the Digital Age. New York: Plenum Trade.

  • Mann, S., J. Nolan, and B. Wellman. 2003. ‘Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments’. Surveillance & Society 1 (3): 331–55.

  • Marx, G. T. 2004. ‘What’s New About the “New Surveillance”?: Classifying for Change and Continuity’. Knowledge Technology and Policy 17 (1): 18–37.

  • Marx, G. T. 2013. ‘Technology and Social Control – The Search for the Illusive Silver Bullet Continues’. August 20. http://web.mit.edu/gtmarx/www/techsoccon.html.

  • McStay, A. 2014. Privacy and Philosophy: New Media and Affective Protocol.

  • Nöel, D. 2007. ‘Le Virtuel Selon Deleuze’. Intellectica 1 (45): 109–27.

  • Petrović, G. 1983. ‘Reification’. A Dictionary of Marxist Thought. Cambridge: Harvard University Press.

  • Reed, C. 2007. ‘Taking Sides on Technology Neutrality’. SCRIPT-ED 4 (3): 263–84.

  • Rodotá, S. 2011. ‘Of Machines and Men: The Road to Identity: Scenes for a Discussion’. In The Philosophy of Law Meets the Philosophy of Technology: Autonomic Computing and Transformations of Human Agency. Routledge.

  • Rojas, Raúl. 2001. Encyclopedia of Computers and Computer History. Chicago: Fitzroy Dearborn.

  • Romeijn, J.-W. 2014. ‘Philosophy of Statistics’. In The Stanford Encyclopedia of Philosophy, edited by E. N. Zalta, Fall 2014. http://plato.stanford.edu/archives/fall2014/entries/statistics/.

  • Rouvroy, A. 2010. ‘Détecter et prévenir : les symptômes technologiques d’une nouvelle manière de gouverner’. In L’état des droits de l’homme en Belgique : rapport 2009–2010, 9–16. Bruxelles: Aden.

  • Rouvroy, A. 2011. ‘Technology, Virtuality and Utopia: Governmentality in an Age of Autonomic Computing’. In Law, Human Agency and Autonomic Computing: The Philosophy of Law Meets the Philosophy of Technology.

  • Rouvroy, A. 2013. ‘The End(s) of Critique: Data Behaviourism versus Due Process’. In Privacy, Due Process and the Computational Turn : The Philosophy of Law Meets the Philosophy of Technology, 143–68.

  • Rouvroy, A. 2014. ‘Des données sans personne : le fétichisme de la donnée à caractère personnel à l’épreuve de l’idéologie des Big data’. Conseil d’État.

  • Rouvroy, A., and T. Berns. 2010. ‘Le nouveau pouvoir statistique’. Multitudes 40 (1): 88–103.

  • Rouvroy, A., and T. Berns. 2013. ‘Gouvernementalité algorithmique et perspectives d’émancipation : le disparate comme condition d’Individuation par la relation?’ Réseaux.

  • Rubinstein, I., R. D. Lee, and P. M. Schwartz. 2008. ‘Data Mining and Internet Profiling: Emerging Regulatory and Technological Approaches’. The University of Chicago Law Review.

  • Schmidt, E., and J. Cohen. 2013. The New Digital Age: Reshaping the Future of People, Nations and Business.

  • Smith, Daniel, and John Protevi. 2013. ‘Gilles Deleuze’. In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. http://plato.stanford.edu/archives/spr2013/entries/deleuze/.

  • Sunstein, C. R. 2013. ‘Impersonal Default Rules vs. Active Choices vs. Personalized Default Rules: A Triptych’.

  • Tortora, G. J., and B. Derrickson. 2009. Principles of Anatomy and Physiology. Hoboken, N.J.: Wiley.

  • Tucker, P. 2014. The Naked Future: What Happens in a World That Anticipates Your Every Move.

  • United States General Accounting Office. 2004. ‘Report to the Ranking Minority Member, Subcommittee on Financial Management, the Budget, and International Security, Committee on Governmental Affairs, U.S. Senate’. United States General Accounting Office.

  • United States National Research Council. 2001. Embedded, Everywhere a Research Agenda for Networked Systems of Embedded Computers. Washington, D.C.: National Academy Press.

  • van Otterlo, M. 2013. ‘A Machine Learning View on Profiling’. In Privacy, Due Process and the Computational Turn : The Philosophy of Law Meets the Philosophy of Technology, 41–64.

  • Wu, Tim. 2003. ‘Network Neutrality, Broadband Discrimination’. Journal of Telecommunications and High Technology Law 2: 141.

Legal Documents

  • European Union

  • Directive 2012/27/EU of the European Parliament and of the Council of 25 October 2012 on energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and repealing Directives 2004/8/EC and 2006/32/EC [2012] L315/1

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Costa, L. (2016). The Power Through Technology. In: Virtuality and Capabilities in a World of Ambient Intelligence. Law, Governance and Technology Series, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-319-39198-4_3

  • DOI: https://doi.org/10.1007/978-3-319-39198-4_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-39197-7

  • Online ISBN: 978-3-319-39198-4

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
