Part of the book series: Law, Governance and Technology Series (volume 32)

Abstract

Ambient Intelligence (AmI) describes a world where objects are “smart”, environments are “sensitive” and technology anticipates and satisfies our needs and desires. AmI and similar technological visions have received a good deal of attention from researchers in various fields. In this chapter I briefly review the profusion of technical literature that names and describes the ensemble of technologies that make this world possible. The focus on AmI’s technical features and machine learning will be followed by a general discussion of societal issues, of which I will highlight two: “power through technology” and “freedoms”. The purpose of this chapter is to introduce these issues, which are further developed in the next two chapters.


Notes

1. Also known as the Commission of the European Communities.

2. Other technology descriptions are similar to those mentioned above, such as Ahonen’s “ubiquitous networked society” (Ahonen et al. 2008) and Greenfield’s “everyware” (Greenfield 2006). Each of these visions has a particular approach to ICT and focuses on different aspects of a technology continuum involving networks, communications and terminals. While acknowledging the value of these visions, I do not wish to explore the distinctions between them any further.

3. For instance, the EC, while describing AmI systems, refers to attributes that are embraced by autonomic computing, namely systems that know themselves, are dynamic, self-optimizing, resilient and so on (European Commission 2004).

4. The EC defines RFID technology as “the use of electromagnetic radiating waves or reactive field coupling in the radio frequency portion of the spectrum to communicate to or from a tag through a variety of modulation and encoding schemes to uniquely read the identity of a radio frequency tag or other data stored on it” (Article 3, a, of the Commission Recommendation of 12.5.2009 on the Implementation of Privacy and Data Protection Principles in Applications Supported by Radio-Frequency Identification, hereafter the “RFID Recommendation”).

5. An RFID tag or chip is a device that produces a radio signal or responds to and modulates a carrier signal received from a reader. A reader is a device that captures and identifies electromagnetic waves.

6. RFID’s initial uses were military, as in the case of the Identification Friend or Foe (IFF) application created by the British Royal Air Force to identify enemy airplanes during World War II (Avoine 2009, 17).

7. I would also mention applications such as supply chain management, package identification, hands-free car ignition, road-toll collection, building access control, airport baggage handling, contactless credit cards, identity cards, passports and medical records.

8. Cloud computing “refers to applications delivered as services over the Internet as well as to the actual Cloud infrastructure—namely, the hardware and systems software in data centers that provide these services [moving] computing and data away from desktop and portable PCs into large data centers”. “The main technical underpinnings of cloud computing infrastructures and services”, say Dikaiakos et al., “include virtualization, service-oriented software, grid computing technologies, management of large facilities, and power efficiency” (Dikaiakos et al. 2009, 10). Cloud computing delivers services through the Internet from resource clouds where information is preserved. This means that computing resources are used with great efficiency and are easily accessible. Cloud computing is therefore a powerful instrument for enabling connectivity in an AmI world without regard to how or where data is stored (Veikko et al. 2010, 78; Rader et al. 2011, 47).

9. P2P is a network architecture based on the partition of tasks between peers. Compared to client-server architectures, P2P offers more scalability – the ability to handle growing amounts of tasks – and robustness – maintaining the availability of the system despite the malfunction or failure of one or more peers; P2P also has the advantage of distributing the costs of the network among peers. Because P2P architecture is dynamic and distributed, it can be very effective in the deployment of AmI (Gasson and Warwick 2007, 42).

10. BCIs are pathways between the brain and external devices: the electrical activity of neurons is captured by electroencephalograms, which distinguish frequency spectra; then, neural activity is encoded and translated into commands that may operate devices such as high-tech prostheses, exoskeletons and typewriters (Schütz and Friedewald 2011, 183).
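To make this pipeline concrete, here is a minimal Python sketch of the band-power step, assuming only NumPy; the sampling rate, band boundaries and two-command rule are illustrative assumptions, not a description of any real BCI.

```python
import numpy as np

def band_power(eeg, fs, low, high):
    """Average spectral power of one EEG channel in a frequency band (Hz)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

# Toy decision rule: strong alpha (8-13 Hz) relative to beta (13-30 Hz) is
# often associated with a relaxed state; a real BCI would train a classifier.
def decode_command(eeg, fs=256):
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return "rest" if alpha > beta else "move"

signal = np.random.randn(2 * 256)  # two seconds of fake EEG at 256 Hz
print(decode_command(signal))
```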

11. The iPhone 6s, for instance, features two cameras, a gyroscope, an accelerometer, a barometer, a proximity sensor, an ambient light sensor and a fingerprint identity sensor (Apple 2015).

12. A report commissioned by the Council of Europe (CoE) has defined biometrics as “measurable, physiological or behavioral characteristics that can be used to determine or verify identity” (De Hert and Christianen 2013). Biometrics is therefore suited to human identification through the use of body features such as the fingerprint, hand, iris and face. Traditional or first-generation biometrics refers to mature technologies that are widely deployed for law enforcement or civil purposes, such as fingerprint- and iris-based recognition. More recently, second-generation biometrics has been marked by two major trends: the processing of other types of traces and multimodality. The new processing of traces involves, for instance, analysis of motor skills such as walking, voice and signature patterns; body signals such as the electromagnetic signals produced by the heart and the brain; body odor and vein patterns; human-computer interaction such as keystroke dynamics; facial recognition; and soft biometrics, meaning the use of general traits such as gender, height and weight. Second-generation biometrics is also multimodal: new systems take different biometrics into account simultaneously, unlike traditional biometrics, where a single modality is deployed (Venier and Mordini 2011, 116–121).
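As an illustration of one second-generation modality listed above, the sketch below computes the two classic keystroke-dynamics timing features from invented key events; the event data and field layout are hypothetical.

```python
# Illustrative keystroke-dynamics features: dwell and flight times
# derived from hypothetical (key, press_time_ms, release_time_ms) events.
key_events = [
    ("a", 0, 95), ("m", 180, 260), ("i", 420, 505),
]

dwell = [up - down for _, down, up in key_events]    # how long each key is held
flight = [key_events[i + 1][1] - key_events[i][2]    # gap between release and next press
          for i in range(len(key_events) - 1)]

print("dwell times (ms):", dwell)    # per-user typing-rhythm features
print("flight times (ms):", flight)  # used to verify or identify a typist
```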

13. It is worth noting that more than 10 years ago in “Minority Report” – the sci-fi film adaptation of Dick’s short story, set in the year 2054, where a specialized police department arrests people based on foreknowledge – there is a scene in which the main character is recognized through his iris by a marketing computer system that offers him a beer; it seems we now play harder in reality than in fiction.

14. For instance, the idea of embedded intelligence is connected to artificial intelligence (AI). Intelligent agents emulate human intelligence: they are entities equipped with sensors and actuators that face challenges such as learning a model for prediction, creating a representation of the world, interacting in real time, and competing and cooperating with other agents (Kleiner 2005, 144). AI can help AmI accomplish tasks such as interpreting the state of the environment; representing information and knowledge associated with the environment; modeling, simulating and representing entities in the environment; making decisions and planning actions; learning about the environment; interacting with human beings; and acting on the environment (Ramos, Augusto, and Shapiro 2008, 16–17).

15. The US National Science Foundation provides a comprehensive list of techniques and technologies around the themes of data and knowledge management (DKM), data and knowledge analytics (DKA) and computational scientific discovery (CSD) (National Science Foundation 2014); such techniques and technologies are at the core of multiple and varied definitions of the Big Data hype (Podesta et al. 2014).

16. Fayyad et al., for instance, point out that fields such as machine learning – with which we deal below – and pattern recognition provide data-mining techniques that belong within the overall process of knowledge discovery (Fayyad, Piatetsky-Shapiro, and Smyth 1996).

17. Hildebrandt observes that automated profiling differs from non-automated profiling in three respects: it is done not by organisms but by machines programmed to reveal correlations in great amounts of data; it is not a simple query among predefined categories but rather the discovery of new knowledge; and it cannot be verified by human beings, as we have no access to their logic of production and use (Hildebrandt 2008b, 58).

18. Automated profiling is grounded in data processing related to both facts – “e.g. network account details or some facts filled on a web page” – and actions – “for example the path walked in front of a surveillance camera or click behaviour on a web page”. Often it is not necessary to distinguish actions from facts, all data being combined in one record (van Otterlo 2013, 43).
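A minimal sketch, assuming scikit-learn, of how facts and actions can be merged into one record and mined for groupings that were not predefined; the fields, values and choice of k-means are invented for illustration, not a method attributed to van Otterlo or Hildebrandt.

```python
from sklearn.cluster import KMeans
import numpy as np

# Each row combines "facts" (age, account tier) with "actions"
# (pages clicked per visit, seconds spent in a camera-monitored zone).
records = np.array([
    [34, 1, 12, 5.0],
    [29, 2, 40, 1.2],
    [61, 1, 3, 14.8],
    [45, 2, 38, 0.9],
])

# Unsupervised clustering: the groups are not predefined categories but
# patterns the machine "discovers", as note 17 describes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records)
print(labels)
```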

19. See Sect. 3.3.1.

20. As Hildebrandt observes, “smart things that know about your habits, life style, desires and preferences, about the risks you may run and about the opportunities you may nourish on. [These] [s]mart things require real time sophisticated profiling, based on data-mining processes that generate new knowledge by detecting unexpected patterns in data bases” (Hildebrandt 2008a, 187–188).

21. Talking about autonomic computing, Hildebrandt observes that it requires modelling of human beings themselves, i.e., “our every movement, biological state, interactions, moods and responses to what happens in the environment. Instead of waiting for our deliberate input on how we want our coffee, what room temperature we like, which music fits our mood, which is the best moment to wake up, how we prioritise and respond to incoming information, the environment profiles our keystroke behavior and correlates the patterns it exhibits to our health, our mood, our productivity and – for instance – to the moment we need a shot of strong black or mild and milky coffee” (Hildebrandt 2011, 143).

22. As far as AmI environments are concerned, the modeling of contexts is necessary; in order to “make the environment adaptive to the inferred preferences of the subject, the context itself will have to be profiled. This concerns data like room temperature, volume of the audio-set, amount of light and/or the presence of certain objects and even, to complicate matters, other subjects that have an equal ‘right’ to personalised services in the particular environment” (Gasson et al. 2005, 22).
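A toy sketch of such a context profile, using only the Python standard library; the fields follow the quotation, while the adaptation rules are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Context:
    room_temp_c: float   # room temperature
    audio_volume: int    # volume of the audio set (0-100)
    light_level: float   # amount of light (0.0-1.0)
    occupants: list      # other subjects with an equal 'right' to services

def adapt(ctx: Context) -> dict:
    """Derive environment settings from the profiled context."""
    return {
        "heating_on": ctx.room_temp_c < 20.0,
        # With several occupants, personalised volume must be negotiated;
        # capping it is a crude stand-in for real conflict resolution.
        "volume": min(ctx.audio_volume, 40) if len(ctx.occupants) > 1 else ctx.audio_volume,
        "lights_on": ctx.light_level < 0.3,
    }

print(adapt(Context(18.5, 65, 0.2, ["Ana", "Luis"])))
```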

23. Emphasis on miniaturization is an AmI trend. “Smart dust”, one of the many RFID applications, uses tags smaller than a grain of sand that may be spread on the floor for surveillance purposes on the battlefield or at home – for instance, gluing RFID tags on fingertips to use them as a virtual keyboard (Ducatel et al. 2001, 9). Gasson and Warwick point out that today structures smaller than 100 nm have already been created; “for comparison: the visible light has wavelengths from 400 to 700 nm” (Gasson and Warwick 2007, 23).

24. In IBM’s promotional words, “[i]t’s as if the autonomic nervous system says to you, Don’t think about it—no need to. I’ve got it all covered. That’s precisely how we need to build computing systems—an approach we propose as autonomic computing. It’s time to design and build computing systems capable of running themselves, adjusting to varying circumstances, and preparing their resources to handle most efficiently the workloads we put upon them. These autonomic systems must anticipate needs and allow users to concentrate on what they want to accomplish rather than figuring how to rig the computing systems to get them there” (Horn 2001, 7–8).

25. As van den Berg notes: “[…] context: where is the user and who else is there? Is there interaction with other people or not? […] activity: what is the user doing and how can the technology provide him with support in the activity (if that is what the user would want)? […] circumstances: what other experiences has the user just had or is he anticipating? What mood is he in – is he tired or energetic, does he want to be entertained or left alone […] history: what did the user want in similar previous situations and how did he respond to what the technology offered? […]” (van den Berg 2010, 53).

26. As Greenfield illustrates, “[y]ou walk in a room, and something happens in response: The lights come on, your e-mails are routed to a wall screen, a menu of options corresponding to your new locations appears on the display sewn into your left sleeve […] Whether or not you walk into the room in pursuance of a particular goal, the system’s reaction to your arrival is probably tangential to that goal […] nor does anything in this interplay between user and system even correspond with the other main mode we see in human interaction with conventional computing systems: information seeking”. Interaction is “largely a matter of voice, touch and gesture, interwoven with the existing rituals of everyday life”. The very idea of the user is at stake, since ubiquitous systems are designed to be “ambient, peripheral and not focally attended to in the way that something actively ‘used’ must be” (Greenfield 2006, 27, 32 and 70).

27. See Sect. 3.3.1.1.

28. As Ahonen et al. note, “[d]ependency grows when a technology is widely used. Users become dependent when they do not remain indifferent to the consequences of technology use. This can be joyful when the technology works or frustrating when it does not” (Ahonen et al. 2008, 136).

29. I especially thank André Silva for calling my attention to this point and these examples.

30. In this context see also Greenfield, who suggests that ubiquitous technologies may be triggered inadvertently – for instance, unintentional engagement with a system, such as a person communicating their location to everybody when they only wanted to make it available to close relatives – or unwillingly – as in the frustrated refusal to submit to a system (Greenfield 2006, 66). The latter case could be illustrated by life-logging devices that “decide” to take a picture of someone based on the information captured by their sensors, disregarding the will of the person.

31. In this sense, Gutwirth suggests that “any behavior will be easier to check because it leaves more and more electronic traces. The result is that freedom suffers: individuals adapt themselves much easier if they know their actions can be checked or retraced” (Gutwirth 2001, 85).

32. See Koskela for the idea of surveillance as an emotional experience (Koskela 2002).

33. See Sect. 1.2.

34. In this context the increasing availability of technologies that will enable AmI and similar visions is a determinant factor. Take the example of RFID passive tags, whose cost has dropped significantly in recent years, making it easier to tag objects: Greenfield noted that in 2006 the price of a standard passive tag stood at about fifty US cents (Greenfield 2006, 98); by 2013 it had dropped to 7–15 US cents. A further example is IPv6, the latest version of the Internet Protocol, which offers an exceptional increase in numbering capacity, making it possible to connect an enormous quantity of objects. Addresses grew from 32 bits in the preceding version of the protocol, IPv4, to 128 bits in IPv6. While IPv4 numbers about 4.3 × 10⁹ addresses, IPv6 numbers about 3.4 × 10³⁸ addresses (IETF 1998). By comparison, this means that for each of the roughly 7.3 billion people alive in 2015 (United Nations 2015) there are about 4.7 × 10²⁸ addresses.
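The address arithmetic is easy to verify; the short Python check below reproduces the figures cited in this note, using the UN’s 7.3 billion population estimate.

```python
# Verifying the address-space figures cited in this note.
ipv4 = 2 ** 32            # IPv4: 32-bit addresses
ipv6 = 2 ** 128           # IPv6: 128-bit addresses
population_2015 = 7.3e9   # UN 2015 world population estimate

print(f"IPv4 addresses: {ipv4:.1e}")                    # ~4.3e+09
print(f"IPv6 addresses: {ipv6:.1e}")                    # ~3.4e+38
print(f"Per person:     {ipv6 / population_2015:.1e}")  # ~4.7e+28
```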

35. In the EU, “smart growth” is the buzzword used to refer to “strengthening knowledge and innovation” as drivers of future growth through the use of ICTs (Communication from the Commission of the European Communities, “Europe 2020”). Technology markets for “smart cities” such as Songdo are expected to grow globally from US$8 billion in 2010 to more than US$39 billion in 2016, according to a 2011 study that examined more than 50 smart-city projects (ABI Research 2011).

36. The Guardian has recently reported on two of the latest acquisitions by Facebook and Google, revealing that the latter is prone to investing in advanced robotics, machine learning, distributed sensors and digital mapping. This circumstance, coupled with Google’s obvious dominance of the market for Internet services and products and Facebook’s dominance of social networking, demands a good deal of attention: “[i]n the last 18 months, for example, Google has bought at least eight significant robotics companies, and laid out £400 m to buy the London-based artificial intelligence firm Deepmind. Facebook, for its part, bought Instagram, a photo-sharing network, for $1bn and paid […] $19bn in cash and shares for Whatsapp, a messaging company […] And in the last few weeks, both companies have got into the pilotless-drones business. Google acquired Titan Aerospace, a US-based startup that makes high-altitude drones, which cruise near the edge of the Earth’s atmosphere, while Facebook bought a UK-based company, Ascenta, which is designing high-altitude, solar-powered drones that can fly for weeks – or perhaps longer – at a time” (Naughton 2014).

37. In a report that established a research agenda for Networked Systems of Embedded Computers (EmNets) – a paradigm vision where ICTs are embedded in a wide range of devices linked together through networks – the US National Research Council argues that “[t]here are few, if any, ethically neutral technologies. Powerful technologies such as computing […] have the potential to be utterly pervasive in people’s lives [and] bring a corresponding array of ethical, legal, and policy issues” (United States National Research Council 2001, 34).

38. Foucault argues, “I don’t want to say that the State isn’t important; what I want to say is that relations of power, and hence the analysis that must be made of them, necessarily extend beyond the limits of the State. In two senses: first of all because the State, for all the omnipotence of its apparatuses, is far from being able to occupy the whole field of power relations, and further because the State can only operate on the basis of other, already existing, power relations” (Foucault 1980, 122).

39. I refer here to one of the meanings of autonomy pointed out by Honneth: “[a]utonomy […] means a right to self-determination which is guaranteed to human subjects insofar as they can be obstructed in their individual decision making by either physical or psychical influences” (Honneth 1995, 265).

40. Other perspectives on trust exist, such as that of Ahonen et al., who are concerned with trust in technology. For Ahonen et al., trust is an issue in the sense that people’s confidence in technology may somehow be downgraded in intelligent, unpredictable environments: “[i]f AmI technologies are perceived as unpredictable and non-transparent at the same time, achieving trust in them will be much more difficult than achieving trust in current technologies which are not transparent either, but are sufficiently predictable” (Ahonen et al. 2008, 148–149). Nevertheless, distrust of technology suggests a mere usability or reliance issue, meaning that once people become aware of how a certain technology functions they take the risk of using it. The point is nevertheless disputable, and authors such as Durante contend that trust relates only to human interactions (Durante 2010; Durante 2011).

41. Location-based social networking (LBSN) applications allow users to view the location of their friends or of other, unknown users in proximity. This service model has a wide array of variations, from social networks to specific-interest targets like food outlets or pubs, and from leisure to professional, commercial and governmental uses.

42. As Michael and Michael observe, the vision of technology as a means to enhance human control misses a few points: for example, technology does not necessarily provide people with control over their environment. The vision also disregards the fact that these technologies, as commonly designed, allow others to control what the user experiences. Finally, it completely overlooks the link between trust and freedom in the moral and metaphysical senses (Michael and Michael 2010).

43. It is Sen who makes the point, deconstructing the idea of the self-interested rational person – an idea that echoes in a certain line of thought in economics but also in political and legal thinking. In rational choice theory (RCT), doing things that do not favor one’s own well-being is irrational, except to the extent that doing good to others enhances one’s own well-being. Sen explores the contradictions of this theory; I highlight two of his arguments that reasonably rebut RCT. First, self-love is not the sole driver of human action; people’s behavior is also motivated by sympathy, generosity and public spiritedness. Second, refusing the egoistic paradigm – i.e. the notion that rationality demands that one act single-mindedly according to one’s own goals – does not mean embracing the idea that one must promote the goals of others, no matter what they are. The point here is that being considerate of the desires and pursuits of other people does not necessarily mean being irrational (Sen 2009, 32, 189 and 193).

44. The economically rooted privacy divide refers particularly to a certain stratification of rights. Angwin’s inquiry into the costs of privacy in the digital age was recently reported in The New York Times, from which I quote: “Last year, I spent more than $2200 and countless hours trying to protect my privacy. Some of the items I bought—a $230 service that encrypted my data in the Internet cloud; a $35 privacy filter to shield my laptop screen from coffee-shop voyeurs; and a $420 subscription to a portable Internet service to bypass untrusted connections—protect me from criminals and hackers. Other products, like a $5-a-month service that provides me with disposable email addresses and phone numbers, protect me against the legal (but, to me, unfair) mining and sale of my personal data” (Angwin 2014).

45. For a comprehensive account of the debates and a critical view on genetic engineering, see Sandel (Sandel 2007).

46. To Ahonen et al., “[i]n general, it seems that AmI will narrow some gaps, widen others and create new ones. Physical access to AmI equipment and infrastructure is likely to improve, since AmI applications will form an intrinsic part of our everyday lives and at least the basic infrastructure is bound to be available to the majority of people […]. On the other hand, there will still be a percentage of the population that will not have access to AmI applications and an even greater percentage that will have access only to basic infrastructure and not to more sophisticated technologies, thus excluding them from the full benefits of the AmI environment. [Also] the digital divide in an AmI environment can arise from profiling: profiling is a prerequisite for many applications, which will provide more opportunities for companies and other organisations to target specific groups, while excluding and discriminating against other people on the basis of their profiles. Digital divides will persist as a function of income, education and age as well as gender and race/ethnicity. […] As long as the gap between developing and developed nations in general does not close, the digital divide will also widen, especially as new technologies emerge to which the underdeveloped societies will not have access or will not be able to use” (Ahonen et al. 2008, 154–155).

47. We cannot but remark a similarity with the expression deus ex machina – literally, the “god from the machine” – which has long been used in literary criticism to reproach the recourse to contrived, artificial solutions, such as the sudden appearance of a god, to resolve the plot of a play. In a similar vein see Andrejevic, who points to a tendency to undervalue individual comprehension as opposed to “knowledge” produced from data mining (Andrejevic 2013).

References

  • ABI Research. 2011. ‘Smart City Technologies Will Grow Fivefold to Exceed $39 Billion in 2016’. July 6. https://www.abiresearch.com/press/smart-city-technologies-will-grow-fivefold-to-exce.

  • Ahonen, P., P. Alahuhta, B. Daskala, P. De Hert, R. Lindner, I. Maghiros, A. Moscibroda, W. Schreurs, and M. Verlinden. 2008. Safeguards in a World of Ambient Intelligence. Springer.

  • Alcañiz, M., and B. Rey. 2005. ‘New Technologies for Ambient Intelligence’. In Ambient Intelligence. IOS Press.

  • Amoore, L. 2006. ‘Biometric Borders: Governing Mobilities in the War on Terror’. Political Geography 25 (3): 336–51.

  • Andrejevic, Mark. 2013. Infoglut: How Too Much Information Is Changing the Way We Think and Know. New York: Routledge.

  • Angwin, J. 2014. ‘Has Privacy Become a Luxury Good?’ The New York Times, March 3. http://www.nytimes.com/2014/03/04/opinion/has-privacy-become-a-luxury-good.html.

  • Apple. 2015. ‘iPhone 6s – Technical Specifications’. Apple. https://www.apple.com/iphone-6s/specs/.

  • Ashton, K. 2009. ‘That “Internet of Things” Thing’. RFID Journal, July 22. http://www.rfidjournal.com/articles/view?4986.

  • Avoine, G. 2009. ‘Sécurité de la RFID: comprendre la technique sans être un technicien’. In La sécurité de l’individu numérisé – réflexions prospectives et internationales, 300. Paris: L’Harmattan.

  • Barocas, Solon, and Andrew D. Selbst. 2016. ‘Big Data’s Disparate Impact’. California Law Review 104. http://papers.ssrn.com/abstract=2477899.

  • Birnhack, M., and N. Ahituv. 2013. ‘Privacy Implications of Emerging and Future Technologies’. PRACTIS.

  • Calders, T., and I. Žliobaitė. 2013. ‘Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures’. In Discrimination and Privacy in the Information Society, edited by B. Custers, T. Calders, B. Schermer, and T. Zarsky, 43–57. Studies in Applied Philosophy, Epistemology and Rational Ethics 3. Springer Berlin Heidelberg.

  • Clifford, S., and Q. Hardy. 2013. ‘Attention, Shoppers: Store Is Tracking Your Cell’. The New York Times, July 14. http://www.nytimes.com/2013/07/15/business/attention-shopper-stores-are-tracking-your-cell.html.

  • Cohen, J. E. 2012. Configuring the Networked Self. New Haven: Yale University Press.

  • Custers, B. 2013. ‘Data Dilemmas in the Information Society: Introduction and Overview’. In Discrimination and Privacy in the Information Society, edited by B. Custers, T. Calders, B. Schermer, and T. Zarsky, 3–26. Studies in Applied Philosophy, Epistemology and Rational Ethics 3. Springer Berlin Heidelberg.

  • De Hert, P., and K. Christianen. 2013. ‘Report on the Application of the Principles of Convention 108 to the Collection and Processing of Biometric Data’. Council of Europe.

  • de Mul, J., and B. van den Berg. 2011. ‘Remote Control : Human Autonomy in the Age of Computer-Mediated Agency’. In Law, Human Agency, and Autonomic Computing: The Philosophy of Law Meets the Philosophy of Technology. Routledge.

  • Dikaiakos, M. D., D. Katsaros, P. Mehra, G. Pallis, and A. Vakali. 2009. ‘Cloud Computing: Distributed Internet Computing for IT and Scientific Research’. IEEE Internet Computing 13 (5): 10–13.

  • Ducatel, K., M. Bogdanowicz, F. Scapolo, J. Leitjen, and J-C. Burgelman. 2001. ‘That’s What Friends Are For. Ambient Intelligence (AmI) and the IS in 2010’. In Innovations for an E-Society. Challenges for Technology Assessment, 314. Teltow: Institut für Technikfolgenabschätzung und Systemanalyse and VDI/VDE-Technologiezentrum Informationstechnik.

  • Durante, M. 2010. ‘What Is the Model of Trust for Multi-Agent Systems? Whether or Not E-Trust Applies to Autonomous Agents’. Knowledge, Technology and Policy 23 (3–4): 347–66.

  • Durante, M. 2011. ‘Rethinking Human Identity in the Age of Autonomic Computing: The Philosophical Idea of the Trace’. In The Philosophy of Law Meets the Philosophy of Technology: Autonomic Computing and Transformations of Human Agency. Routledge.

  • European Commission. 2004. ‘Ambient Intelligence’. March 11. http://ec.europa.eu/information_society/tl/policy/ambienti/index_en.htm.

  • European Group on Ethics in Science and New Technologies to the European Commission. 2005. ‘Ethical Issues Relating to the Use of ICT Implants in the Human Body’. European Communities. http://www.gleube.eu/polemics-3/the-use-of-ict-implants-in-the-human-body-46.htm.

  • Fayyad, Usama, Gregory Piatetsky-Shapiro, and Padhraic Smyth. 1996. ‘From Data Mining to Knowledge Discovery in Databases’. AI Magazine 17: 37–54.

  • Foucault, M. 1980. Power/knowledge: Selected Interviews and Other Writings, 1972–1977. Edited by C. Gordon. New York: Pantheon Books.

  • Foucault, M. 1997. ‘The Subject and Power’. In Essential Works of Foucault: 1954–1984, edited by P. Rabinow. New York: The New Press.

  • Fusco, S. J., R. Abbas, K. Michael, and A. Aloudat. 2012. ‘Location-Based Social Networking and Its Impact on Trust in Relationships’. IEEE Technology and Society Magazine 31 (2): 1–10.

  • Future of Identity in the Information Society (FIDIS). 2007. ‘Emerging Technologies for AmI’. http://www.fidis.net/resources/fidis-deliverables/hightechid/d122-study-on-emerging-ami-technologies/doc/5/multiple/.

  • Gaggioli, A. 2005. ‘Optimal Experience in Ambient Intelligence’. In Ambient Intelligence. IOS Press.

  • Gasson, M. 2012. ‘Human ICT Implants: From Restorative Application to Human Enhancement’. In Human ICT Implants: Technical, Legal and Ethical Considerations, edited by M. Gasson, E. Kosta, and D. M. Bowman, 11–28. Information Technology and Law Series 23. T.M.C Asser Press.

  • Gasson, M., and K. Warwick. 2007. ‘Study on Emerging AmI Technologies’. Future of Identity in the Information Society (FIDIS).

  • Gasson, M., K. Warwick, Wim Schreurs, and Mireille Hildebrandt. 2005. ‘Report on Actual and Possible Profiling Techniques in the Field of Ambient Intelligence’. European Commission.

  • Greenfield, A. 2006. Everyware – The Dawning Age of Ubiquitous Computing. Berkeley: New Riders.

  • Gutwirth, S. 2001. Privacy and the Information Age. New York: Rowman & Littlefield Publishers, Inc.

  • Harwig, R. 2006. ‘Foreword’. In True Visions the Emergence of Ambient Intelligence. Berlin: Springer-Verlag.

  • Hildebrandt, M. 2008a. ‘A Vision of Ambient Law’. In Regulating Technologies, from Regulating Technologies, 175–91.

  • Hildebrandt, M. 2008b. ‘Profiling and the Rule of Law’. Identity in the Information Society 1 (1): 55–70.

  • Hildebrandt, M. 2011. ‘Autonomic and Autonomous “Thinking”: Preconditions for Criminal Accountability’. In Law, Human Agency and Autonomic Computing. Routledge.

  • Honneth, A. 1995. The Fragmented World of the Social Essays in Social and Political Philosophy. Edited by C. W. Wright. Albany: State University of New York Press.

  • Hoque, M.E., L-P Morency, and R. W. Picard. 2011. ‘Are You Friendly or Just Polite? – Analysis of Smiles in Spontaneous Face-to-Face Interactions’. In ACII’11 Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction. Springer-Verlag Berlin, Heidelberg.

  • Horn, P. 2001. ‘Autonomic Computing: IBM’s Perspective on the State of Information Technology’. www.research.ibm.com/autonomic/manifesto/.

  • IETF. 1998. ‘Internet Protocol, Version 6 (IPv6) Specification’. December. http://tools.ietf.org/html/rfc2460.

  • Issenberg, S. 2012. ‘The Definitive Story of How President Obama Mined Voter Data to Win A Second Term’. MIT Technology Review. December 19. http://www.technologyreview.com/featuredstory/509026/how-obamas-team-used-big-data-to-rally-voters/.

  • ISTAG. 2005. ‘Ambient Intelligence: From Vision to Reality’. In Ambient Intelligence. IOS Press.

  • Kerr, I. 2013. ‘Prediction, Pre-Emption, Presumption: The Path of Law after the Computational Turn’. In Privacy, Due Process and the Computational Turn : The Philosophy of Law Meets the Philosophy of Technology, 91–120.

  • Kleiner, A. 2005. ‘Game AI: The Possible Bridge between Ambient and Artificial Intelligence’. In Ambient Intelligence, 143–55. IOS Press.

  • Koskela, H. 2002. ‘“Cam Era” – the Contemporary Urban Panopticon.’ Surveillance & Society 1 (3): 292–313.

  • Lindwer, M., D. Marculescu, T. Basten, R. Zimmermann, R. Marculescu, S. Jung, and E. Cantatore. 2003. ‘Ambient Intelligence Visions and Achievements: Linking Abstract Ideas to Real-World Concepts’. In Design, Automation & Test in Europe Conference & Exhibition. Vol. 1. Los Alamitos, California: IEEE Computer Society.

  • McLuhan, M. 1965. Understanding Media: The Extensions of Man. New York: McGraw-Hill.

  • Michael, K. 2013. ‘Wearable Computers Challenge Human Rights’. Uberveillance, July. http://uberveillance.com/blog/2013/7/24/wearable-computers-challenge-human-rights.

  • Michael, M. G., and K. Michael. 2010. ‘Towards a State of Uberveillance’. IEEE Technology and Society Magazine 29 (2): 9–16.

  • National Science Foundation. 2014. ‘Critical Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA)’. National Science Foundation. http://www.nsf.gov/pubs/2014/nsf14543/nsf14543.htm.

  • Naughton, J. 2014. ‘Why Facebook and Google Are Buying into Drones’. The Guardian, April 20, sec. World news. http://www.theguardian.com/world/2014/apr/20/facebook-google-buying-into-drones-profit-motive.

  • Nicolelis, M. 2011. Beyond Boundaries: The Neuroscience of Connecting Brains with Machines – and How It Will Change Our Lives. 1st ed. New York, NY: Times Books.

  • Picard, R. W. 2010. ‘Emotion Research by the People, for the People’. Emotion Review 2 (3): 250–54.

  • Podesta, John, Penny Pritzker, Ernest J. Moniz, John Holdren, and Jeffrey Zients. 2014. ‘Big Data: Seizing Opportunities, Preserving Values’. Washington, D.C.: The White House. http://purl.fdlp.gov/GPO/gpo64868.

  • Poullet, Y. 2011. ‘Internet et sciences humaines ou « comment comprendre l’invisible ? »’.

  • Rader, M., A. Antener, R. Capurro, M. Nagenborg, L. Stengel, W. Oleksy, E. Just, et al. 2011. ‘ETICA Evaluation Report’. ETICA.

  • Ramos, C., J. C. Augusto, and D. Shapiro. 2008. ‘Ambient Intelligence—the Next Step for Artificial Intelligence’. IEEE Intelligent Systems Magazine, April.

  • Riva, G. 2005. ‘The Psychology of Ambient Intelligence: Activity, Situation and Presence’. In Ambient Intelligence. IOS Press.

  • Rouvroy, A., and Y. Poullet. 2009. ‘The Right to Informational Self-Determination and the Value of Self-Development: Reassessing the Importance of Privacy for Democracy’. In Reinventing Data Protection?, edited by S. Gutwirth, Y. Poullet, P. De Hert, C. Terwangne, and S. Nouwt, 45–76. Dordrecht: Springer Netherlands.

  • Sandel, M. J. 2007. The Case against Perfection: Ethics in the Age of Genetic Engineering. Cambridge, Mass.: Belknap Press of Harvard University Press.

  • Schütz, P., and M. Friedewald. 2011. ‘Technologies for Human Enhancement and Their Impact on Privacy’.

  • Sen, A. 2009. The Idea of Justice. Cambridge: Belknap Press of Harvard Univ. Press.

  • Surie, D. 2012. ‘Egocentric Interaction for Ambient Intelligence’. Dissertation, Umeå University.

  • Timmermans, J., V. Ikonen, B. C. Stahl, and E. Bozdag. 2010. ‘The Ethics of Cloud Computing: A Conceptual Review’, 614–20. IEEE Computer Society.

  • United Nations. 2015. ‘World Population Prospects: The 2015 Revision, Key Findings and Advance Tables’. United Nations. http://esa.un.org/unpd/wpp/Publications/.

  • United States National Research Council. 2001. Embedded, Everywhere a Research Agenda for Networked Systems of Embedded Computers. Washington, D.C.: National Academy Press.

  • van den Berg, B. 2010. The Situated Self. Nijmegen: Wolf Legal Publishers.

  • van Otterlo, M. 2013. ‘A Machine Learning View on Profiling’. In Privacy, Due Process and the Computational Turn : The Philosophy of Law Meets the Philosophy of Technology, 41–64.

  • Veikko, I., M. Kanerva, P. Kouri, B. Stahl, and K. Wakunuma. 2010. ‘Emerging Technologies Report’. European Commission.

  • Venier, S., and E. Mordini. 2011. ‘Second-Generation Biometrics’. Privacy and Emerging Fields of Science and Technology: Towards a Common Framework for Privacy and Ethical Assessment.

  • Weiser, M. 1991. ‘The Computer for the 21st Century’. Scientific American 265 (3).

  • Weiser, M., R. Gold, and J. S. Brown. 1999. ‘The Origins of Ubiquitous Computing Research at PARC in the Late 1980s’. IBM Systems Journal 38 (4): 693–96.

  • Zheng, Y., and B. C. Stahl. 2012. ‘Evaluating Emerging ICTs: A Critical Capability Approach of Technology’. In The Capability Approach, Technology and Design, edited by I. Oosterlaken and J. Van den Hoven, 57–76. Springer.

Legal Documents

  • European Union

  • Commission of the European Communities, ‘Commission Recommendation of 12.5.2009 on the Implementation of Privacy and Data Protection Principles in Applications Supported by Radio-Frequency Identification’ COM (2009) 3200 final.

  • Commission of the European Communities, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Internet of Things – An Action Plan for Europe’ COM (2009) 278 final.

  • Commission of the European Communities, ‘Communication from the Commission Europe 2020, A Strategy for Smart, Sustainable and Inclusive Growth’ COM (2010) 2020 final.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Costa, L. (2016). A World of Ambient Intelligence. In: Virtuality and Capabilities in a World of Ambient Intelligence. Law, Governance and Technology Series, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-319-39198-4_2

  • DOI: https://doi.org/10.1007/978-3-319-39198-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-39197-7

  • Online ISBN: 978-3-319-39198-4

  • eBook Packages: Law and Criminology (R0)
