
Motives for Unconditional Cooperation


Part of the book series: Theory and Decision Library ((TDLA,volume 33))

Abstract

Having pointed out the need for moral motives in the conventionalist theory of norms, we should ask ourselves, “Which moral motives?” In this chapter and the next we will take a closer look at the reasons agents might have for complying with a norm in those situations where doing so is not straightforwardly rational. As we have seen, this may be the case when the game is the last of an iteration, when non-compliance can be hidden from the group, or when the game is an iterated n-person prisoners’ dilemma. From now on, I will restrict the discussion to those situations where individual rationality precludes the stability of successful cooperation. This means that attention will be focused primarily, but not exclusively, on the one-shot prisoners’ dilemma. In what follows I will refer to the reasons agents might have for compliance in situations where compliance is not straightforwardly rational as cooperative virtues. Whether or not these reasons really are virtuous is a question I will take up later in this work.



References

  1. Margolis (1981) calls it the set of group-preferences. It is the ordering of alternatives as seen from the perspective of the entire group. According to Margolis, this implies that altruism extends to the entire group. Rescher (1975), on the other hand, allows for the possibility that one can be altruistic toward some and nonaltruistic toward others.


  2. Both these assumptions are highly problematic from a theoretical point of view, especially the interpersonal comparison of utility. I am in no position to specify exactly how this can be brought into the concept of utility. However, there are some intuitive considerations which suggest that the assumption of interpersonal comparisons is not as awkward a notion as is sometimes maintained. First, the reasons for action other people have are not completely private phenomena. Usually we have a good idea of what moves a particular person, for example because we can imagine what we would experience in a similar situation. Secondly, because of this we can compare the interests and preferences of others with our own in terms of strength and importance. Admittedly, this does not amount to a justification of interpersonal comparisons of utility, but it does offer some intuitive background for both Rescher’s and Margolis’ assumptions.


  3. The distinction that I present below is similar to the distinction that Schmidtz (1995, 98-102) makes between concern and respect.


  4. Spinoza (1979, part III, prop. 27).


  5. Hume (1984, Book II, sect vi-xi).


  6. Williams (1981). My representation of Williams’ position does not do justice to him. It serves a heuristic purpose here, namely to bring out the character of a sacrificing altruist’s reasons for action.


  7. Frankfurt (1971). Another author who made a similar proposal is Dworkin (1981). An overview of the whole discussion can be found in Christman (1989).


  8. I abstract here from the problem that occurs when cooperation within the group is detrimental to the interests of third parties due to the nature of the collective good, or collective bad, that is produced. For example, cooperation between the members of a criminal organization will generate extra profits for all members of this organization that could not be achieved in the absence of such cooperation. The damage to society as a whole, because of this, is of course considerably higher.


  9. It could be argued that this points to a problem within the theory of rational choice: a theory which cannot prescribe the optimal strategy pair in this simple game is defective. There are several developments within the theory of rational choice which deal with this. One (dominant) strand of thought looks for a refined equilibrium selection. The requirement of stability is in fact a refinement of the class of Nash equilibria, albeit one that transcends the strict assumptions of standard game theory. Others take issue with the axioms of expected-utility theory. It would take me too far beyond the issues I am discussing here to go any deeper into this problem.


  10. On an alternative view, altruists are modeled as actors with a discount factor on their own benefits: altruists are actors who think their own interests are much less important than those of others. If that is an adequate model of the process of comparing interests, altruists could encounter prisoners’ dilemmas in mirror image! See Ullmann-Margalit (1977, 48) for a discussion of this possibility.
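The weighting model in this note can be made concrete. Below is a minimal sketch, not from the text, which assumes the standard one-shot prisoners' dilemma payoffs T=5, R=3, P=1, S=0 and a hypothetical utility function u = r·(own payoff) + (1−r)·(other's payoff); it checks for which weights r the row player still has a dominant strategy:

```python
# Standard PD material payoffs (row, column); the numbers T=5, R=3,
# P=1, S=0 are an illustrative assumption, not from the text.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(outcome, player, r):
    """Altruistic utility: weight r on one's own payoff, 1 - r on the other's."""
    own = PAYOFFS[outcome][player]
    other = PAYOFFS[outcome][1 - player]
    return r * own + (1 - r) * other

def dominant_strategy(r):
    """Row's dominant strategy under weight r, or None if there is none."""
    best = set()
    for col in ("C", "D"):
        u_c = utility(("C", col), 0, r)
        u_d = utility(("D", col), 0, r)
        best.add("C" if u_c > u_d else "D")
    return best.pop() if len(best) == 1 else None

# Egoists (r = 1) still defect; strong altruists cooperate.
print({r: dominant_strategy(r) for r in (1.0, 0.7, 0.5, 0.0)})
```

With these particular numbers defection stays dominant only for r above 5/6, and cooperation becomes dominant for r below 0.6; strong altruists thus dissolve the dilemma rather than solve it, which anticipates the point of note 20.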


  11. Apart from all this, there is something funny about sympathetic altruism. Suppose two sympathetic altruists interact and each knows the other to be thus disposed. Each desires the pleasure of the other, since her own pleasure depends on it. Why would they weigh the interests of the other only once? Row knows Column’s pleasure depends on hers, which in turn depends on Column’s pleasure, and so on.
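The regress described in this note need not explode. On one hypothetical model, not one the text commits to, each altruist's felt utility adds a fraction a < 1 of the other's felt utility to her own material payoff; iterating the mutual dependence then converges to the fixed point of u1 = m1 + a·u2 and u2 = m2 + a·u1:

```python
def mutual_sympathy(m1, m2, a, steps=200):
    """Iterate the mutual dependence u1 = m1 + a*u2, u2 = m2 + a*u1.

    m1, m2 are material payoffs; a < 1 is the sympathy weight.
    """
    u1, u2 = m1, m2
    for _ in range(steps):
        u1, u2 = m1 + a * u2, m2 + a * u1  # simultaneous update
    return u1, u2

def fixed_point(m1, m2, a):
    """Closed-form solution of the same pair of equations (requires a < 1)."""
    return (m1 + a * m2) / (1 - a * a), (m2 + a * m1) / (1 - a * a)

# With a = 0.5 and material payoffs (3, 3), each weighs the other's
# pleasure infinitely often, yet felt utility converges (here to 6).
print(mutual_sympathy(3, 3, 0.5))
```

Only at a = 1, full mutual sympathy, does the series diverge, which is one way of putting the regress the note finds funny.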


  12. It is important to realize what a strong and implausible — if not impossible — assumption this is. If r=1, it means people value their own interests as much as those of strangers. They are indifferent between equally large benefits of their own and of others. This is one of the objections Williams (1971) raises against utilitarianism when he argues that utilitarianism destroys the integrity of persons.


  13. This is what I alluded to when I said that sacrificing altruists may have some moral conviction which generates their second-order reason for altruism.


  14. Scheffler (1982) sees this as a possible way out of the objection Williams (1971) raises. A utilitarian could, and maybe even should, make it her ambition to view the world through a utilitarian point of view. Such a person would make the utilitarian project into her own. Then there cannot be a conflict between the demands of morality and considerations from the integrity of a person. Scheffler misses the point here: Williams’ point is not that people in fact have projects other than the utilitarian one, but rather that it is impossible for them to make such a project theirs.


  15. See also Lyons (1970) and Regan (1980).


  16. Edward McClennen pointed out to me, in a personal communication, that there are many instances in which there is a coincidence of (Pareto-)efficiency and the satisfaction of the preferences of others. However, in the prisoners’ dilemma there are no fewer than three Pareto-optimal outcomes, none of them the result of the dominant strategy pair. So if one wants to press this argument in the context of the prisoners’ dilemma, a distinction should at least be made between efficient outcomes, i.e., Pareto improvements relative to a certain baseline, and Pareto optimality proper, i.e., those outcomes for which it is true that nobody could obtain greater benefits without worsening the position of others. If this distinction is in place, the efficiency objection is more plausible, but only for a limited number of contexts, namely the standard prisoners’ dilemmas.
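The count of Pareto-optimal outcomes can be checked mechanically. A sketch, assuming the standard payoffs T=5, R=3, P=1, S=0 (the text fixes no numbers), that enumerates them for the one-shot game:

```python
# Assumed standard PD payoffs (row, column): T=5, R=3, P=1, S=0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_optimal(payoffs):
    """Outcomes that no other outcome weakly dominates."""
    def dominates(p, q):
        return (all(payoffs[p][i] >= payoffs[q][i] for i in (0, 1))
                and any(payoffs[p][i] > payoffs[q][i] for i in (0, 1)))
    return sorted(q for q in payoffs
                  if not any(dominates(p, q) for p in payoffs))

# Three Pareto-optimal outcomes, as the note says; the dominant-strategy
# outcome (D, D) is the only one that is *not* Pareto-optimal.
print(pareto_optimal(PAYOFFS))  # → [('C', 'C'), ('C', 'D'), ('D', 'C')]
```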


  17. This objection appeals to an intuitive criterion. The idea is that something is a virtue, in a specific context, if it is a praiseworthy quality from an impartial point of view. This echoes Hume’s definition of a virtue, which I will discuss in chapter four, as well as other definitions of the concept of virtue. As we shall see they lead to the same conclusion in the case of altruism in the context of collective goods problems.


  18. For a discussion, see Wilke (1983).


  19. It suggests but does not prove this conclusion. Ideally, other motivational factors which could influence the result (e.g., a possible positive correlation between risk-seeking and altruism) should be excluded from the experiment. De Vries did some additional tests to isolate those factors. I will not discuss them here.


  20. Apart from these considerations, it seems that sympathetic altruism, as opposed to sacrificing altruism, is not a cooperative virtue for an additional reason. The idea that compliance, in the context of collective goods problems, could be explained if agents were sympathetic altruists is awkward. For such agents do not overcome the suboptimality problems related to the consumption and production of public goods; they merely avoid them. They do not experience the problematic character of such situations. This restates the remarks I made at the end of section 3.1: to be cooperatively virtuous does not mean that you necessarily have preferences other than self-interested ones, as the idea of sympathetic altruism suggests. It means that you have a different response towards the actions that would follow from them. Sympathetic altruists do not perceive the dilemmatic character of collective goods problems. They do not respond differently to a prisoners’ dilemma; they just never get entangled in one.


  21. The first appearance of this distinction in Elster’s work, as far as I am aware, is Elster (1985). It returns, in different terms, in Elster (1989a).


  22. I could have used an alternative term, namely “deontological” motives. After all, I am talking of just one sort of process, namely the actions of the agent. However, the term “deontological” has come to be associated with strict ethics of duty, a connotation that is unwanted in a description of the cooperative virtues.


  23. Assuming what is not true, namely that this sonata sounds identical when I or the world’s greatest violinist play it.


  24. Elster (1983, sect II-9) has forcefully argued this point. Gauthier (1986, ch. 9) makes similar points.


  25. On an alternative reading this assumption expresses that my picking a flower is a sufficient condition for all others to pick flowers as well. This assumption is an operator swindle: (2a) assumes that existence implies universality. In formula, something like the following is assumed: ∃x: Px → ∀y, y ≠ x: Py, where P is a predicate that stands for picking flowers and x and y are individual variables.


  26. See for example Jacobs (1985).


  27. Den Hartogh (1985, ch. 17) offers a detailed and insightful discussion of this point. He argues that from the observation that the marginal utility of my contribution to a collective good can be infinitesimally small, it does not follow that a well-informed utilitarian will not contribute to the good, as Olson (1965, 64) has claimed. After all, this depends not only on the relative size of my contribution, but also on the total amount of the good produced. The size of my contribution is therefore not what is at issue in Olson’s argument.


  28. Den Hartogh goes on to show that Olson’s conclusion rests on two mistakes. First, the tracing problem: it is often impossible to identify the exact effects of my contribution on the collective good. But from this it does not follow either that the effect of my contribution is negligibly small or absent. Secondly, a sorites paradox is involved. It is true that my contribution alone will not create a healthy and sustainable environment. Nor will the second and the third contribution, etc. But since the contribution of a large number does create a sustainable environment, it does not follow that this observation justifies my not contributing.


  29. I agree with Den Hartogh on all points here. A society consisting of well-informed everyday Kantians could, in principle, produce sufficient amounts of collective goods. What I am maintaining is that everyday Kantians do not consider the expected actions of others and because of that may fail to achieve coordination and, hence, cooperation. In other words, there is nothing in the disposition of everyday Kantians that requires them to be well-informed.


  30. An excellent discussion of the problems connected to IRU is given by Lyons (1970) and is repeated, though with far more detail, in Regan (1980).


  31. This shows again, albeit from a totally different perspective, that identifying the “better for all” clause simply with optimality is fishy. In the standard prisoners’ dilemma there are three optimal outcomes, namely the symmetrical one of joint cooperation and the two asymmetrical outcomes. However, an everyday Kantian would not consider the two asymmetrical ones, because her principle tells her to restrict her choices to those combinations of strategies where all agents adopt her strategy choice. This means that the only optimal outcome everyday Kantians will consider is that of joint cooperation.
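The everyday Kantian's restriction can be sketched directly: keep only the symmetric strategy profiles and pick the one best for all. A minimal illustration, again under the assumed standard payoffs T=5, R=3, P=1, S=0:

```python
# Assumed standard PD payoffs (row, column): T=5, R=3, P=1, S=0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def kantian_choice(payoffs):
    """Everyday Kantian: consider only profiles where everyone adopts
    the same strategy, then pick the one best for all."""
    symmetric = [o for o in payoffs if o[0] == o[1]]  # (C,C) and (D,D)
    return max(symmetric, key=lambda o: payoffs[o][0])

# The two asymmetrical optimal outcomes never enter the comparison;
# joint cooperation is the only optimal outcome left.
print(kantian_choice(PAYOFFS))  # → ('C', 'C')
```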


  32. And similarly, it is the reason that IRU is not a plausible description of the cooperative virtues.


  33. This is not an unproblematic interpretation, though I find it the most plausible one. See also Paton’s introduction to Kant (1948).


  34. It is not really surprising that Elster believes his principle is a form of Kantianism. Many philosophers, including very famous ones, have interpreted Kant as asking what state of affairs would be the most desirable one. For example, John Stuart Mill argues that Kant’s theory relies on a covert appeal to consequentialist principles. In Mill’s view the categorical imperative demands that an action be morally prohibited if “the consequences of [its] universal adoption would be such as no one would choose to incur.” Mill (1979, 4). This interpretation is incorrect given the exact formulation of the categorical imperative, and implausible given the whole of Kant’s moral theory.


  35. The risk of this outcome is interpreted as the utility achieved when following the “security” strategy in relation to the possible gain following a cooperative strategy: U(d)/U(c)= 0.1.


  36. The tree depicted here should be “read” from left to right: first, it is A’s choice and after she has made her choice and B knows her choice, it is B’s turn. The outcomes at the end of the tree give first the payoff to A and then the payoff to B.
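Reading the tree from left to right corresponds to backward induction: B best-responds to each of A's observed moves, and A chooses in anticipation of those replies. A sketch under the assumed standard payoffs T=5, R=3, P=1, S=0 (the figure itself is not reproduced here):

```python
# Assumed standard PD payoffs (A's payoff first, then B's): T=5, R=3, P=1, S=0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def backward_induction(payoffs):
    """Solve the sequential game: A moves first, B observes and responds."""
    def b_reply(a_move):
        # B maximizes her own (second) payoff given A's observed move.
        return max(("C", "D"), key=lambda b: payoffs[(a_move, b)][1])
    # A maximizes her (first) payoff, anticipating B's best reply.
    a_move = max(("C", "D"), key=lambda a: payoffs[(a, b_reply(a))][0])
    return a_move, b_reply(a_move)

# Even with sequential, observed moves the outcome is joint defection.
print(backward_induction(PAYOFFS))  # → ('D', 'D')
```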


  37. See for a discussion on this problem Nell (1975, ch. 2) and O’Neill (1989, ch. 5).


  38. The concept of autonomous effects was first introduced by Kavka (1978). The term strategic intentions, denoting intentions which have autonomous effects, is from Robins (1997).


  39. For an overview of rational voting, see Brennan and Lomasky (1993).


  40. However, this need not be a conscious, rational reaction to the impossibility of realizing one’s outcome-oriented desires. Elster (1983) describes some mechanisms which are a-rational in nature.


  41. Liebrand et al. (1986).


  42. McClintock and Liebrand (1988). This effect is observed in numerous other experiments. See Dawes (1980) and Liebrand, Poppe, and Wilke (1989) for a representative sample of all this material.


  43. Liebrand, Poppe, and Wilke (1989).


  44. Pruitt and Kimmel (1977).


  45. Rapoport, Chamah, and Orwant (1965).


  46. Pruitt and Kimmel (1977, 383) refer to no fewer than eleven independent experiments.


  47. Dawes (1980).


  48. See Dawes (1980, 187).


Copyright information

© 2002 Springer Science+Business Media Dordrecht

About this chapter


Verbeek, B. (2002). Motives for Unconditional Cooperation. In: Instrumental Rationality and Moral Philosophy. Theory and Decision Library, vol 33. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-9982-5_3


  • DOI: https://doi.org/10.1007/978-94-015-9982-5_3

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-6026-6

  • Online ISBN: 978-94-015-9982-5
