Emotion-Oriented Systems and the Autonomy of Persons

Part of the book series: Cognitive Technologies ((COGTECH))

Abstract

Many people fear that emotion-oriented technologies (EOT) – capable of registering, modelling, influencing and responding to emotions – can easily affect their decisions and lives in ways that effectively undermine their autonomy. In this chapter, we explain why these worries are at least partly well-founded: EOT are particularly susceptible to abuses that undermine autonomy, and there are ways of respecting the autonomy of persons that EOT are unable to accomplish. We draw some general ethical conclusions concerning the design and further development of EOT, contrasting our approach with the “interactional design approach”. The latter is often thought to avoid infringements of user autonomy. We argue, however, that it unduly restricts possible uses of EOT that are unproblematic from the perspective of autonomy, while at the same time allowing for uses of EOT that tend to compromise the autonomy of persons.

Notes

  1.

    Cf. http://emotion-research.net/aboutHUMAINE

  2.

    This characterization is supposed to be compatible with a variety of views on autonomy. More elaborate characterizations and helpful introductions to the notion of autonomy can be found in Christman (1989, 2003), Dworkin (1988, Chap. 1), Friedman (2003, Chap. 1), Feinberg (1989) and Oshana (2006, Chap. 1).

  3.

    The capacity/condition distinction is discussed, e.g., in Christman (1989, p. 5), Feinberg (1989, pp. 28 ff.) and Oshana (2006, pp. 6–9).

  4.

    Cf. Christman (1991, pp. 16 ff.), Velleman (1989, part I) and Meyers (1989), who speaks of a competency for ‘self-discovery’.

  5.

    Joseph Raz (1997) claims, e.g., that we are active rather than passive if we are responsive to reasons, i.e. if we can rationally make sense of our beliefs and desires (feelings, etc.).

  6.

    Cf. Christman (1991) and Young (1980).

  7.

    Some philosophers have distinguished this mode of self-reflection from the other two modes, because at this point the notions of ‘identification’ and ‘authenticity’ come into play. In self-evaluation, a person is said to ‘identify’ with certain desires and to ‘make them her own’. The most prominent defender of this idea is Harry Frankfurt (1971). According to his ‘hierarchical model’, a person acts autonomously if she is moved to action by desires she identifies with (i.e., by desires she wants to be effective in action). It is important to note that in order to be autonomous, a person must not only satisfy this so-called ‘authenticity condition’ but also have the general capacities or competencies we describe. Cf. Christman (2003).

  8.

    Cf. Oshana (2006, p. 78); for more general discussions of the relationship between autonomy and rationality, see Christman (1991, pp. 13 ff.), Haworth (1986, Chap. 2) and Young (1980, pp. 567 ff.).

  9.

    Cf. Ekstrom (1993).

  10.

    We want to avoid an ‘external’ or ‘substantive’ account of rationality according to which a person must be able to understand the correct reasons and/or have true beliefs in order to count as autonomous (cf. Benson, 1987). For a discussion of ‘internal’ vs. ‘external’ rationality, see Christman (1991, pp. 13 ff.).

  11.

    Cf. Dworkin (1976, pp. 25 ff.), Dworkin (1988, p. 18) and Christman (1991, pp. 18 f.).

  12.

    See Dworkin (1976) and the critical discussion by Oshana (2006, Chap. 2).

  13.

    The ‘principle of respect for autonomy’ figures prominently in Beauchamp and Childress (1994); see also Childress (1990).

  14.

    DeCew (2006) provides a helpful overview of the concept of ‘privacy’; an elaborate discussion of the relationships between autonomy and privacy is to be found in Rössler (2001).

  15.

    One might argue that, on our account, the autonomy of Laura and her husband is also compromised, because John does not inform them about his feelings. But we can easily explain why our account of (respect for) autonomy does not yield this result: Most importantly, one reason why John keeps his distance might be precisely that he does not want to interact with Laura and her husband in ways that would compromise their autonomy. John’s autonomy is undermined because Paul compromises his ability to resolve the situation on his own, while Laura and her husband do not face a problem and thus do not have to resolve anything (yet).

  16.

    Cf., e.g., the work of Aaron Ben-Ze’ev, Ronald de Sousa, Sabine A. Döring, Peter Goldie, Patricia Greenspan, Bennett Helm, Martha Nussbaum, David Pugmire, Amelie Rorty, Robert Solomon, Holmer Steinfath, Michael Stocker, Christine Tappolet, Gabrielle Taylor, Bernard Williams and Richard Wollheim.

  17.

    This formulation might strike many as far too strong. But note that we always speak from the perspective of autonomy only.

  18.

    Cf. Darwall (2006).

  19.

    This condition is meant to exclude overly far-fetched worst-case scenarios.

  20.

    As should be clear, these problems relate to the two general duties we have set out in Sect. 3.

  21.

    Cf. Beauchamp and Childress (1994, Chap. 3).

  22.

    In passing, we want to mention an important qualification: Whether some action undermines a person’s procedural independence cannot always be answered without reference to the person’s actual capacities. For example, a father does not respect his child’s procedural independence if he repeatedly tells her that the best thing to do in life is to become a check-out girl. The child is not in a position to evaluate and critically assess these claims against the background of a stable self-conception, and thus her father interferes with her procedural independence. By contrast, a father does not disrespect autonomy if he tells his well-educated and self-reflective daughter that the best thing to do in life is to become a check-out girl. She will most probably laugh at him. This suggests a principle that could provocatively be labelled the ‘low autonomy, high respect’ principle: The less autonomous a person actually is, the more other persons should respect her autonomy – they are constantly in danger of undermining her procedural independence. This idea fits well with intuitions concerning, for example, the treatment of children. In discussing the ethicality of persuasive systems, one needs to make use of something like ‘normality conditions’ and assume that a person fulfils to some degree the conditions of self-reflection and rationality.

  23.

    For further discussions of persuasive systems from the perspective of autonomy, see Baumann and Döring (2006). More on persuasive systems and the role of emotions in section WP8; for the ethicality of persuasive systems see Guerini and Stock (2006); see also the discussion in Goldie and Döring (2005a) (CyberDoc).

  24.

    Problems of privacy are also discussed by, e.g., Höök and Laaksolahti (2008) and Reynolds and Picard (2004).

  25.

    See also the discussion of ‘Semi-Intelligent Information Filters’ (SIIF) in chapter “The Ethical Distinctiveness of Emotion-Oriented Technology” by Sabine Döring et al.

  26.

    This system has been developed by Picard et al., http://affect.media.mit.edu/projects.php?id=2145 (accessed January 28, 2008). The other two systems – ‘iNerve’ and ‘iPanic’ – are invented purely for purposes of discussion.

  27.

    Cf. Döring and Goldie (2005b).

  28.

    Höök and Laaksolahti (2008); the interactional approach has been formulated by Boehner et al. (2005).

  29.

    See Lindström et al. (2006) and Höök and Laaksolahti (2008).

  30.

    A different approach to addressing problems of privacy can be found in Picard (2004).

  31.

    An overview of the fields of application is given by Schröder et al. (2006).

  32.

    It is important to note that some might want to distinguish between emotions and mere affective states, the former being more complex intentional states. Although we welcome such attempts, this move is not helpful in discussions of EOT, because ‘emotion’ and ‘affect’ are generally used as umbrella terms in this connection.

  33.

    Such worries are also mentioned in Picard and Klein (2002).

  34.

    See also chapter “Principalism: A Method for the Ethics of Emotion-Oriented Machines” by Sheelagh McGuinness.

Author information

Correspondence to Holger Baumann.

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Baumann, H., Döring, S. (2011). Emotion-Oriented Systems and the Autonomy of Persons. In: Cowie, R., Pelachaud, C., Petta, P. (eds) Emotion-Oriented Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15184-2_40

  • DOI: https://doi.org/10.1007/978-3-642-15184-2_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15183-5

  • Online ISBN: 978-3-642-15184-2
