According to “debunking arguments,” our moral beliefs are explained by evolutionary and cultural processes that do not track objective, mind-independent moral truth. Therefore (the debunkers say) we ought to be skeptics about moral realism. Huemer counters that “moral progress”—the cross-cultural convergence on liberalism—cannot be explained by debunking arguments. According to him, the best explanation for this phenomenon is that people have come to recognize the objective correctness of liberalism. Although Huemer may be the first philosopher to make this explicit empirical argument for moral realism, the idea that societies will eventually converge on the same moral beliefs is a notable theme in realist thinking. Antirealists, on the other hand, often point to seemingly intractable cross-cultural moral disagreement as evidence against realism (the “argument from disagreement”). This paper argues that the trend toward liberalism is susceptible to a debunking explanation, being driven by two related non-truth-tracking processes. First, large numbers of people gravitate to liberal values for reasons of self-interest. Second, as societies become more prosperous and advanced, they become more effective at suppressing violence, and they create conditions where people are more likely to empathize with others, which encourages liberalism. The latter process is not truth tracking (or so this paper argues) because empathy-based moral beliefs are themselves susceptible to an evolutionary debunking argument. Cross-cultural convergence on liberalism per se does not support either realism or antirealism.
Moral realists believe that there are facts in virtue of which (at least some of) our moral beliefs are objectively and non-relatively true or false (Tersman 2006). Realists typically also hold that moral knowledge is possible, and that some (or many) of our ethical views are in fact true.
Skeptics about moral realism often appeal to “debunking arguments” to undermine the justification of our moral beliefs. Debunking arguments come in several varieties (Sauer 2018, chapter 1), which roughly conform to the following schema:
Causal premise. S’s belief that p is explained by X.
Epistemic premise. X is [a non-truth-tracking] process.
Therefore, S’s belief that p is unjustified. (Kahane 2011, p. 106)
There is ongoing debate about exactly what epistemic principles underlie debunking arguments, and how such arguments should be formulated (e.g., Bogardus 2016; White 2010). Nichols (2014) favors “process” over “best-explanation” debunking arguments: S’s belief that p is unjustified when it is the product of an “epistemically defective” process. On Vavova’s (2018) account, beliefs are rendered unjustified if we discover that they were (decisively) shaped by an “irrelevant influence”: “An irrelevant influence for me with respect to my belief that p is one that (a) has influenced my belief that p and (b) does not bear on the truth of p” (p. 136).
According to some popular evolutionary debunking arguments (EDAs), our core moral beliefs reflect our basic evaluative tendencies, which in turn were implanted in us by natural selection for the purpose of increasing inclusive fitness (Street 2006; see also Joyce 2006, chapter 6). Natural selection favors evaluative tendencies that increase inclusive fitness, and cares nothing for whether they lead to judgments that align with objective moral truths (if such truths existed). This is the epistemically defective cause/process—or “irrelevant influence”—that undermines our belief in objective moral truth.
While EDAs have become popular in recent years, the argument from disagreement is, as Sauer (2018, p. 99) notes, “arguably the most common challenge to metaethical moral realism.” Cultures—and to some extent individuals within cultures—seem to disagree about fundamental moral principles. Antirealists often claim that we would not expect such disagreement if everyone had the potential ability to perceive objective moral truth (Mackie 1977, pp. 36–37). Realists often counter that moral disagreement is more superficial than it first appears, or that disagreement—at least about “normatively significant core issues” (Sauer 2018, p. 100)—does not (or would not) persist under ideal conditions. The mere fact that cultures disagree about morality does not mean that there can be no objective truth of the matter, since cultures disagree about many matters of objective fact (Railton 1986, p. 195).
The argument from disagreement is essentially empirical—in light of an observation (moral disagreement) we should reject realism. Huemer (2016) turns the argument from disagreement on its head, claiming that cultures are converging on certain moral beliefs, which supports realism.
Huemer (2016) advances an “empirical case” not just for moral realism, but for the claim that liberalism is the correct moral theory. Liberalism is a “broad ethical orientation [that] (1) recognizes the moral equality of persons, (2) promotes respect for the dignity of the individual, and (3) opposes gratuitous coercion and violence” (p. 1987). “[W]hile this broad orientation is mostly uncontroversial today,” he says, “human history has been dominated by highly illiberal views” (p. 1987).
He says that the standard debunking arguments, which purport to explain our moral beliefs by appealing to non-truth-tracking forces such as natural selection and culture, cannot explain the historical drift toward liberalism. Debunking arguments “lack credibility because they afford no explanation for the most important fact about the history of moral thought: the spread of liberalism across the world over the course of human history, especially recent history” (p. 2007). The trend occurred far too rapidly to be explained by biological evolution, and
[p]urely cultural accounts of the source of morals leave us at a loss to explain why the culture itself has moved in a given direction over time. At first glance, it may seem that many explanations are possible—for instance, perhaps changing technologies or changing forms of economic organization have somehow necessitated different values. But the list of potential explanations dwindles as we try to take into account the entire phenomenon: it is not just, for example, that slavery was abolished in the United States. It is that societies around the world have been liberalizing with respect to many different issues—slavery, war, torture, execution, democracy, women’s suffrage, segregation, and so on—and this has been going on for centuries. It is very difficult to come up with explanations for this broad phenomenon that don’t require us to posit large coincidences. (p. 2007)
Liberal realism, however, “can offer a plausible account of the data,” i.e., data “concern[ing] the development of moral values over the course of human history” (p. 1988)—what some philosophers describe as “moral progress.”
Huemer’s argument is an inference to the best explanation, which can be spelled out as follows:
(1) Across cultures, people have been converging on liberal practices and values on a wide range of disparate issues.
(2) Moral realism predicts convergence on practices and values.
(3) Moral antirealism predicts divergence in practices and values.
(4) There are two candidate hypotheses to explain the historical convergence on liberalism (1):
(4a) The debunking hypothesis: For each issue in question, people have become more liberal for “[p]urely cultural” reasons that have nothing to do with recognizing objective moral truth.
(4b) The moral realist hypothesis: Over time, people have increasingly adopted liberal practices and values because liberalism is true.
(5) In light of (2) and (3), convergence on liberal practices and values is best explained by moral realism (4b) rather than by a non-realist debunking explanation (4a).
(6) Therefore, liberal realism is true.
Huemer may be the first philosopher to explicitly make the inference from moral convergence to the truth of moral realism. However, many realists have said that realism predicts some degree of convergence, and attributed convergence (insofar as it has occurred) to our perception of objective truth. Parfit (2011, p. 538) writes: “[T]hough humanity’s earliest moral beliefs were in several ways distorted by evolutionary forces, those distortions are being overcome, so that true moral beliefs are becoming more and more widely held.” Brink (1989, pp. 208–209) says that, given the difficulty in acquiring moral knowledge and the fact that many moral disputes “depend on complex nonmoral issues,” realists should not necessarily expect perfect convergence. Nevertheless, there has been “significant convergence of moral belief [and] moral progress over time….[The] relevant changes in moral consciousness have all been changes in the same direction.” Smith (1994, pp. 188–189) says that “there has been considerable moral progress” and that “we should…be quite optimistic about the possibility of an agreement about what is right and wrong being reached under more idealized conditions” (conditions which, in his view, may or may not be established at some point). Singer (1981) famously argues that the perception of moral truth via reason is driving an expansion of our circle of moral concern.
This paper challenges step (3) of Huemer’s argument—the idea that moral antirealism predicts divergence in practices and values. As noted, this is a claim that antirealists themselves generally accept. The lack of an explanation for the convergence on liberal moral beliefs is a serious lacuna in standard debunking arguments. It will be argued here that there are non-truth-tracking processes that tend to push cultures toward liberalism. We do not need to posit “large coincidences” to explain why many cultures have drifted toward liberalism on a variety of issues.
Structure of the argument
The question is whether the following principle of inference is epistemically reliable: If cultures converge on a set of moral beliefs X, conclude that X is objectively, non-relatively true (i.e., true in realist terms). A debunking account of moral progress should show that this is not epistemically reliable, because the process that drives cross-cultural moral convergence does not track moral truth (conceived in realist terms).
There are various ways of understanding the notion of tracking the truth. For example, a debunker may claim that a targeted belief is insensitive, i.e., the believer would still have believed that p if p had been false (Nozick 1981). But there are serious concerns about this kind of counterfactual when the truth in question is said to be metaphysically necessary (Clarke-Doane 2015, pp. 87–92). Furthermore, sensitivity does not seem to be a requirement for justified belief or knowledge (Bogardus 2016; Vogel 1987; White 2010). Alternatively, a debunker can claim that it is not necessary to assume the truth of the targeted belief in order to explain why we hold it (see Harman 1977). In other words, an “evolutionary [or other naturalistic] explanation makes objective morality redundant” (Ruse and Wilson 1986, p. 187; cf. Joyce 2006, p. 220). This second approach is the one taken here.
The aim of this paper is to show that the phenomenon of cross-cultural moral convergence is susceptible to a naturalistic explanation. In regard to cross-cultural convergence on liberalism, antirealism and liberal realism are (largely) predictively equivalent (though they might make different predictions about how that outcome will come about). Whether the debunking or the realist account is more convincing will depend to some extent on one’s prior metaethical views. The mere fact of convergence per se does not support realism or antirealism. However, some reasons will be given for why the naturalistic, debunking story may give a more plausible account of the facts (Sects. 7.2–7.3).
In outline, the argument of this paper is as follows:
(1) Early human societies were structured like those of chimpanzees. Foraging bands were dominated by an alpha male who ruled tyrannically in pursuit of his own narrow self-interest. A couple hundred thousand years ago, hunter–gatherers overthrew their alphas and established strong egalitarian norms (at least among adult men) to prevent anyone from attaining a position of dominance. Although they were often violently aggressive toward outgroups, within societies they were quasi-socialist.
(2) With the advent of agriculture, hierarchies reemerged, and highly illiberal social systems were established. Hierarchical, militaristic, agricultural societies proliferated as a result of competition among groups. Egalitarian hunter–gatherers who came into contact with these societies were exterminated or absorbed.
(3) For reasons of self-interest, most people in hierarchical, oppressive societies do not like being abused for the benefit of those in power, or living with the constant threat of violence from within or outside their group. They may acquiesce to mistreatment or violence for a number of reasons, but when the opportunity arises to resist they often take it. For the same reason that hunter–gatherers overthrew their tyrannical alphas and established egalitarian and cooperative norms, people in modern hierarchical societies have, when possible, frequently pushed back against abuse and demanded more liberal treatment. In large hierarchical societies where rulers preside over armies and police forces it is more difficult for the masses to push back against their oppressors than it is for a couple dozen hunter–gatherers to overthrow their alpha. For this reason, the liberalizing process has been slow and tortuous in agricultural societies, but the trend is evident.
(4) As societies become more liberal—less violent within the group, more cooperative with other groups, and so on—this reinforces psychological dispositions, such as empathy and aversion to violence, that in turn drive further liberalism.
(5) The main drivers of the trend toward liberalism, namely, the pursuit of self-interest and empathic concern for others, do not track objective moral truth.
(6) The debunking argument outlined in (1)–(5) does not disprove realism, but it blocks a seemingly strong argument against antirealism.
The claim in (6) has already been addressed above. It is worth making some preliminary comments about (1)–(5).
(1) has been defended (with some variations) by a number of anthropologists (e.g., Cashdan 1980; Knauft 1991; Service 1975), and perhaps most persuasively by Boehm (1999, 2012). The evidence Boehm gives will be briefly reviewed below. (2) has been defended by anthropologists including Boehm, Diamond (1987), Turchin (2009, 2010), and Turchin and Gavrilets (2009). Of course, almost no anthropological theory about the distant past will be without controversy. Nevertheless, (1) and (2) are, if not definitively proved, highly plausible accounts of our history. If they are correct, then the trend toward liberalism has in part been a historical return to liberalism. (It is a return only in part because there are important differences between hunter–gatherer and modern liberalism, as shall be discussed.) It is reasonable to suppose that some of the same forces that led hunter–gatherers to converge on liberalism (described below) are responsible for the contemporary convergence on liberalism.
(4) is difficult to test directly, but it is based on two well-established principles of psychology. First, desensitization: repeated exposure to a stimulus tends to reduce its stimulatory effect, and, by the same token, lack of exposure to a stimulus increases its stimulatory effect (Mrug et al. 2016). Second, the presence of external threats promotes tribalism and increases hostility toward outgroups (Buchanan and Powell 2016, 2018). Together, these principles imply that as societies become less violent and face fewer external threats, people grow more sensitive to violence and less hostile toward outsiders.
According to (5), the interrelated processes that drive cross-cultural moral convergence—viz., the pursuit of narrow self-interest and empathizing—do not track moral truth. That is, we do not need to assume the existence of objective moral truth to give a complete explanation of moral beliefs that are generated by these processes. First, the pursuit of narrow self-interest. Moral beliefs are sometimes created merely to advance the self-interests of their creators. For example, throughout history powerful people have often promoted the idea that we have a moral obligation to respect social hierarchies and grant special privileges to ruling classes. Ambitious politicians spread the idea that certain classes of people ought to be despised in order to create conflicts that help them solidify their own power. People whose self-interest is served by opposite moral beliefs have sometimes successfully promoted those beliefs. When different people or groups compete to spread moral beliefs that serve their respective self-interests, which one prevails is determined by sociological facts (who has the most social/political clout). To explain the resulting token moral beliefs we do not need to appeal to moral truth. Second, empathizing. Empathy-based moral beliefs are vulnerable to a strong EDA (spelled out in more detail in Sect. 7.1). According to this EDA, the explanation for our empathic tendencies appeals only to natural selection, not to objective moral truth.
Naturalistic explanations for moral progress
An important element of (what is called) “moral progress” is the apparent trend toward more “inclusivist” morality (Buchanan and Powell 2016). Most traditional moralities are “exclusivist”—they ascribe much greater moral worth to in- than outgroup members. Over time, many societies have adopted moral outlooks that to some extent dissolve ingroup/outgroup distinctions and affirm the moral value of everyone.
Buchanan and Powell (2016) offer a “naturalistic theory” to explain the trend toward inclusivist morality. They suggest that “exclusivist moral psychology is…an ‘adaptively plastic’ trait” (p. 998). In the ancestral environment of the Pleistocene, outgroups could present either a threat or an opportunity. Outgroups could have posed threats “relate[d] to the transmission of infectious disease, competition over scarce resources, external physical dangers, or beliefs and practices that are dissonant with in-group values and thus imperil group cohesion” (p. 997). They also could have provided opportunities for trade or mate exchange. On Buchanan and Powell’s account, our moral psychology evolved to respond adaptively to signs of either threat or opportunity. If outgroups pose a threat (in terms of disease, competition, etc.) we develop an exclusivist morality. Otherwise we develop a more inclusivist morality.
According to Buchanan and Powell, in “circumstances that mimic the harsh conditions” (p. 997) that prevailed most of the time in the ancestral environment, people regard outgroups as dangerous threats and develop exclusivist moral outlooks. When societies become more prosperous, and conditions less harsh, this triggers our moral psychology to become more inclusivist.
Buchanan and Powell are surely right that inclusivist moralities—and liberalism in general—will not flourish among people who perceive each other as dangerous threats. But their account cannot explain all elements of the trend toward inclusivist morality (or liberalism in general). Consider, for example, the dramatic improvement in the treatment of women, which is an important aspect of moral progress. The reason for female oppression was never that men regarded women as a dangerous outgroup carrying disease or posing a violent threat. Rather, for much of history, men regarded women as people who had less power and whose preferences they did not need to take into consideration.
Moral progress is often driven by people forcefully asserting their own rights. Oppressed people do not always passively wait for the moral circle of their oppressors to expand. To gain recognition as moral equals they protest, elicit sympathy, acquire political influence, and engage in all manner of wheeling and dealing. Part of the explanation for the spread of liberalism is that victimized people have fought for better treatment. Blacks played a major role in the civil rights movement, women in women’s liberation, colonized people in overthrowing empires. Railton (1986, pp. 194–195) makes a similar point when he says that “in the long haul, barring certain exogenous effects, one could expect an uneven secular trend toward the inclusion of the interests of (or interests represented by) social groups that are capable of some degree of mobilization.” Commenting on Railton, Buchanan and Powell (2018, pp. 82–83) observe that societies often develop moral concern for groups that are not “capable of some [seriously threatening] degree of mobilization.” Both Railton and Buchanan and Powell may be right. Oppressed people can and often do employ various means of encouraging their oppressors to expand their moral circle, but this is not always necessary. Animals are incapable of protesting on their own behalf, but throughout the world people have become increasingly concerned with animal welfare (see Buchanan and Powell 2018, p. 83).
According to Hopster (2019), “[i]n a world with a global traffic of goods and information, in which societies depend on each other for resources and share mutual goals, it should not come as a surprise that they also come to adopt a roughly shared moral outlook.” As people participate in an increasingly global culture, “social conformity” leads everyone to adopt common values. A society’s values are influenced by its historical experience and its social and material conditions. Since many societies shared important experiences (e.g., world wars) and are converging on common ways of life, this also drives moral development in similar directions. In Hopster’s view, it is to some extent an accident that the global culture is (currently) relatively liberal. While there may be something to this theory, it will be argued here that convergence on liberalism is not an accident, and the impetus to move in this direction is coming from within every human society.
This paper argues that the trend toward liberalism can be explained by two related sociological phenomena that do not involve recognition of moral facts. First, basic liberal values—moral equality, respect for the dignity of individuals, opposition to “gratuitous coercion and violence”—have always been appealing to the majority of people for reasons of self-interest, at least within groups. Throughout history, going back to the days of nomadic foragers, when majorities within populations have gained political influence they have frequently established relatively liberal norms in order to protect themselves from exploitation by the strong or powerful. Second, as a byproduct of successfully implementing self-protective liberal policies, people have become conditioned to be less aggressive and more empathic, and this has led to an expansion of the moral circle and the strengthening of liberal moral intuitions.
The ancient origins of liberalism
Huemer (2016) claims that “human history has been dominated by highly illiberal views” (p. 1987), and “[t]he vast majority of governments in history have been dictatorial” (p. 1992). This may not be entirely correct.
There is reason to believe that until the beginning of sedentary living and agriculture, certain liberal moral norms were literally universal among humans, and political organization was egalitarian. If hunter–gatherer bands can be described as having a “government,” then the majority of governments for the vast majority of history have been quasi-socialist, not dictatorial. The main evidence for this comes from studies of hunter–gatherers in the twentieth century. All nomadic foraging societies that have ever been observed—from the Amazon to the Outback—actively prohibit any political hierarchy, at least among men. There is no recorded group of nomadic foragers that will tolerate an adult male issuing direct orders to another adult male (see the extensive documentation of this fact in Boehm 1999, 2012). All nomadic foragers have strict rules requiring that food—particularly meat from large game animals—be distributed among all members of the group.
This means, as one anthropologist put it, that “male status differentiation in human evolution is U-shaped” (Knauft 1991, p. 397). Chimpanzee (Pan troglodytes) society is strictly hierarchical, with a tyrannical alpha ruling because he (perhaps along with his coalition partner) can physically dominate every other individual (Goodall 1986). The common ancestor of chimps and humans is presumed to have had a similar social structure (Boehm 1999). At some point, however, our hunter–gatherer ancestors overthrew their alphas and instituted what Boehm (1999) calls a “reverse dominance hierarchy”: Men who would otherwise have been subordinate to an alpha banded together to prevent anyone from achieving a position of dominance. It was only in relatively recent history—when we moved away from nomadic hunting and gathering—that we reestablished traditional hierarchies, and adopted ideologies that affirmed the moral (and sometimes divine) legitimacy of rulers.
The upshot is that, until fairly recently, human societies had converged on what were essentially liberal moral norms—at least within groups—for reasons of self-interest among the majority. Certainly, there were important differences in the moral outlook of hunter–gatherers across the world. But some core moral norms were universal. Within groups, people were more or less recognized as moral equals, everyone was granted a fairly high degree of respect, and coercion was used mainly to protect people from being bullied, cheated, or pushed around by others. Women typically had less political influence than men, and their treatment varied among hunter–gatherer societies. But they were afforded many of the same protections as men. Women certainly were not subjugated to nearly the extent that they were in more recent history after the transition to agriculture (Boehm 1999; Hayden et al. 1986).
Hunter–gatherer liberalism typically did not extend beyond the ethnolinguistic group. Between-group relations were characterized by extreme illiberalism. Neighbors were often in a permanent state of war governed by few or no rules. Whole communities—particularly the men—were sometimes exterminated. In a survey of twentieth-century studies on hunter–gatherers, Keeley (1996, Table 6.2) found that the percentage of male deaths due to warfare ranged from 8.3% for the Gebusi (Papua New Guinea) to 59.0% for the Jivaro (South America). However, as shall be argued below, even violent hunter–gatherers regarded peace as a desirable condition for reasons of pure self-interest, namely, they did not like living under the threat of attack. They simply lacked mechanisms to establish peace. The trend toward less war was driven not by recognition of the objective, mind-independent truth that war is morally bad. Rather, it was driven by institutional and technological innovations that allowed people to escape something that, for reasons of self-interest, they never liked in the first place.
The origins of modern illiberal hierarchies
When hunter–gatherers became sedentary, hierarchies eventually reemerged. The exact reason why sedentarism—which is usually accompanied by food storage—can lead to a breakdown in egalitarianism is not entirely clear, but it is probably related to the resulting increase in population size (cf. Boehm 1999, pp. 143–144). When populations become much larger than a mobile foraging band, it is no longer possible for coalitions of the majority to surveil and collectively control everyone’s behavior, or for the group to make collective decisions. After reaching a certain size, societies become nonfunctional without hierarchical leadership. The larger the population, the more extensive the hierarchy must be. Turchin and Gavrilets (2009, Fig. 2) find that, among six historical empires, every order of magnitude increase in population size over time was accompanied by the addition of another level of hierarchy. They report: “It appears that an acephalous tribe is the largest social scale a human group can achieve without the benefit of centralized organization” (p. 172).
When agricultural societies first developed around 10,000 years ago, they quickly exterminated or absorbed all mobile foragers with whom they had contact. This happened virtually everywhere that agriculture arose. It was not because agriculturalists were necessarily better off than foragers in terms of health or wellbeing—indeed, the lot of the average farmer 10,000 years ago was probably worse in many ways. Diamond (1987) calls the transition to agriculture “the worst mistake in the history of the human race” partly for that reason. But agriculture can sustain a much larger population than foraging. Because of the surplus it produces, it can support classes of people not directly involved in food production—classes that can specialize in things like bureaucracy, tool/weapon making, and warfare. Mobile foraging bands—even coalitions of bands—are no match for armies fielded by agricultural populations (Diamond 1987; Turchin and Gavrilets 2009).
The development of large-scale hierarchical communities led to intense intergroup competition. Small agricultural societies can defeat mobile foragers. Bigger or more organized agricultural societies can defeat smaller or less well-organized ones. Turchin has developed mathematical models suggesting that, given some realistic assumptions, the trend toward larger societies—including eventually massive empires—was the inevitable outcome (Turchin 2009, 2010; Turchin and Gavrilets 2009). People were not necessarily choosing these increasingly illiberal forms of social organization because they preferred them, or believed them to be aligned with moral truth. It’s just that, wherever they developed, aggressively militaristic and hierarchical groups exterminated or absorbed their neighbors.
The trend toward liberalism
To say that there has been a trend toward “liberalism” can be misleading, since commitment to core liberal values takes different forms. It is important to be clear on what kind of liberalism we are talking about. The most basic distinction is between what we can call “within-” and “between-society” liberalism. Within-society liberalism applies the principles of liberalism—viz., moral equality, respect for the dignity of individuals, opposition to gratuitous coercion and violence—within a single society. Between-society liberalism applies those principles to outsiders. Within-society liberalism can be further divided into what we can call “narrow” and “broad” varieties. Narrow within-society liberalism applies to some restricted class(es) of people, generally those wielding political power—e.g., men or a dominant ethnic group. Broad within-society liberalism (theoretically) applies to everyone within a society. (Between-society liberalism can also be divided into narrow and broad varieties, but that distinction is not relevant for the present purposes.)
Each of these types of liberalism can vary in terms of its intensity. People can be deemed to have greater or lesser moral worth. They can be more or less respected and protected from gratuitous coercion and violence.
Nomadic foragers were characterized by intense narrow within-society liberalism (among men), more or less intense broad within-society liberalism (extending to women), and, among most groups, very low levels of between-society liberalism. In the modern era, we seem to have been converging on fairly intense broad within-society liberalism, and only moderately intense between-society liberalism. That is to say, across cultures people generally support—if not in practice then in principle—the idea that within their society liberal principles apply to everyone: all citizens are equal under the law, minorities and less powerful individuals should be protected, resources should be distributed to meet everyone’s basic needs, people should resolve disputes nonviolently, and so on. No society lives up to these ideals perfectly, but the gap between performance and ideals is closing steadily. This is why we can say that there has been convergence on intense broad within-society liberalism. We can only say there has been convergence on moderately intense between-society liberalism because it is not a mainstream idea in any society that outsiders should be treated as genuine equals. No mainstream politician says that their country’s resources should be distributed to the people of the world based on need, regardless of who or where they are. The US government spends less than 1% of the federal budget on foreign aid. A 2017 survey of likely voters in the US found that 57% think the government spends too much on foreign aid, 27% think it spends the right amount, and just 6% think it spends too little (Rasmussen Reports 2017). The US is hardly an exception. Sweden—famous for its generous social welfare programs that benefit citizens and residents—spends only a slightly higher percentage of its federal budget on foreign aid (Sida 2019).
It is not surprising that, throughout the history of civilization, people low on the hierarchy within groups—plebeians, peasants, and the like, who invariably comprise the majority of the population—have often supported more within-society liberal policies. Oppressed majorities have always wished to have their moral worth and dignity recognized, and to be free from excessive coercion by their rulers, each other, or anyone else. But it is also not surprising that progress toward liberalism has been slow. As the previous section argued, for much of history civilizations have been engaged in continual, vicious conflict with each other, with the strongest and most unified exterminating, brutally subjugating, or absorbing their neighbors. Only when local powers were evenly matched could temporary, peaceful equilibria sometimes be reached. It is easy to see that, under those conditions, it would have been difficult for people to launch revolutions against oppressive rulers—much more difficult than it was for coalitions of a dozen hunter–gatherers to keep down would-be alphas. Nevertheless, there has been a tendency for majorities, when they have gained political power, to move society in a liberal direction, at least when it comes to their own treatment.
Clearly, we do not need to appeal to the existence of objective moral facts to explain why there has been a fair amount of cross-cultural convergence on narrow within-society liberalism that is restricted to politically influential classes of people. The potentially difficult questions are (a) why has there been some cross-cultural convergence on broad within-society liberalism? and (b) why has there been an increase in between-society liberalism?
Broad within-society liberalism
There are at least three forces that push cultures in the direction of greater broad within-society liberalism. First, people tend to empathize with those with whom they have close contact. Our tendency to empathize means that moral norms prescribing better treatment of people within society can easily take root. Second, when a subgroup within a society (e.g., men) successfully pursues narrow within-society liberalism, this can create a revolutionary atmosphere that inspires other groups (e.g., women) to demand the same benefits for themselves, which can lead to increasingly broad within-society liberalism. Third, pacification is an almost inevitable concomitant of a society becoming more prosperous and advanced. Prosperous people have less reason to resort to criminality and violence. Consequently, they have less need to adopt an aggressive stance to protect themselves from attack. As a society advances, policing becomes more effective, which further reduces violence. As a result of living in a peaceful society, our impulses become more pacific, our sensitivity to violence increases, and we adopt increasingly extreme antiviolence moral norms. These three forces are considered here in turn.
Empathy

Humans clearly have a capacity to dehumanize perceived enemy outgroups. Under the right conditions, empathy toward an outgroup can effectively be set to zero. Many hunter–gatherers across the world had a practice of butchering and eating members of enemy tribes (Keeley 1996, pp. 103–106). Ancient civilizations in the Americas, Europe, and Asia perpetrated killings—often in brutal ways—of defeated enemies on as large a scale as their technology allowed. Examples of people committing atrocities against outgroup members in recent history are too well known to require discussion. These observations may inspire a very cynical view of human nature. We might assume that treating members of an outgroup as worthy of moral concern requires us to overcome our natural impulses.
However, humans have a natural inclination—shared with our primate relatives (de Waal 1996)—to empathize with each other. Like other primates, we tend to empathize most strongly with close ingroup members, particularly our family and friends with whom we have established special bonds. But most people (besides psychopaths) respond empathically to cues of suffering and distress in others, at least to some extent, unless this reaction is suppressed by feelings of intense fear or hatred (cf. Buchanan and Powell 2016). Even under conditions of war people can fail to dehumanize their enemies, and enter into cooperative, friendly relationships. During World War I soldiers on the front lines developed a culture of “live and let live” in which they purposefully avoided killing each other. During the famous Christmas truce of 1914, large numbers of British, French, and German soldiers left the trenches to fraternize and sing carols with each other in no man’s land. (The live-and-let-live culture dissolved in December 1915 when the Germans introduced gas attacks.) The point is that our tendency to regard outgroup members as inhuman monsters worthy of death is hardly automatic or inevitable. Yes, under certain conditions we are capable of turning empathy off and engaging in extreme brutality. But this is only a possible, not the default, mode of human interaction, even vis-à-vis outgroup members. We do not have records of the inner psychological conflicts of warring hunter–gatherers, but there is some evidence that they were capable of empathizing with their enemies. In rare cases they may even have let adult male captives live, although “the ethnographic accounts indicate that such acts of mercy were at least unusual, if not exceptional” (Keeley 1996, p. 213, n. 8). Our tendency to empathize does not invariably lead to broad within-society liberalism. But it does make us psychologically prepared to move in that direction.
Nichols (2004) provides evidence that “affect-backed norms”—norms that “prohibit an action that is emotionally upsetting” (p. 128)—tend to be taken especially seriously and are more likely to be passed on from generation to generation than affectively neutral norms. He finds, for example, that disgust-backed etiquette rules from the Middle Ages—i.e., rules prohibiting actions that people find disgusting—are much more likely to have survived to the present day than non-disgust-backed etiquette rules (Nichols 2002). Since harm to other people can trigger an emotionally upsetting empathic response, harm-prohibiting norms tend to be particularly enduring. People take harm-prohibiting norms very seriously, judging them to be moral as opposed to conventional rules (Turiel et al. 1987).
According to Nichols’s (2004) theory, the fact that an action is emotionally upsetting (e.g., spitting while eating in public, killing people) does not automatically compel us to prohibit it. Rather, once norms are established, those that do prohibit emotionally upsetting actions (e.g., don’t spit while eating in public, don’t kill people) have a special advantage in cultural transmission. Following this logic, we would not expect empathy (such as it is) toward members of oppressed classes of people within a society to automatically lead those in power to adopt norms to protect them from abuse. But feelings of empathy toward a class of people can potentially lead those in power to adopt norms that take their interests into account. Once broad within-society liberal norms gain a foothold, they may be especially resilient—likely to be passed on and to inspire strong commitment.
Consider the status of women. Collectively, men can dominate women by physical strength. Since the beginning of civilization, men have used their strength to arrange society according to their own preferences, often with little regard for the preferences of women. But most men have also cared about women—at least those with whom they had personal relationships. So it is not surprising that they would, in some places, heed women’s demands for more favorable treatment, even equality.
The practice of slavery can be undermined by our tendency to form bonds and empathize with people. In the United States, slaves were often abused, and some Whites seem to have genuinely viewed Africans as a dangerous enemy population. But many Whites who had contact with slaves, including slaveholders, formed bonds with them. White and Black children played together. Some slaves were closely integrated in White households. As W. E. B. Du Bois (1903/1986, p. 382) said, there could be “something of kindliness, fidelity, and happiness” in the relationship between slaves and slaveholders. White slaveholders such as Thomas Jefferson and George Washington even helped lay the groundwork for emancipation. (Washington supported the abolition of slavery in principle, and his will stipulated that his slaves should be freed upon the death of his wife. Jefferson spearheaded some antislavery legislation and clearly supported eventual emancipation.) Many White non-slaveholders saw slavery as nothing but abuse of a completely harmless people. Harriet Beecher Stowe’s Uncle Tom’s Cabin—the worldwide best-selling book of the nineteenth century after the Bible—is believed to have played an important role in ending slavery and triggering the US Civil War. The book emphasized the abuse of slaves while portraying them as childishly helpless and nonthreatening. In the preface, Stowe explained her intention to “awaken sympathy and feeling for the African race” (Stowe 1852, p. vi), and the sympathy she awoke did indeed lead many people to reject slavery. Buchanan and Powell (2016, p. 1002) note that a particularly effective tactic of abolitionists was to distribute drawings of the terrible conditions of slaves being transported on the Middle Passage.
The line of argument so far has assumed that we can debunk a moral belief by showing that it is caused by empathy. The moral realist might object. The realist could argue that our tendency to empathize is one of the ways that we detect moral truth. That is, the realist could argue that empathy is a truth-tracking emotion—in fact, Marshall (2018) makes exactly this argument. When we empathize with slaves, this is a way of perceiving the wrongness of slavery. But empathy-based moral beliefs are clearly susceptible to an evolutionary debunking argument. There is a large literature on the evolutionary purpose of empathy. The general conclusion of evolutionary biologists is that, as Sapolsky (2017) puts it, empathy “is a state on a continuum with what occurs in a baby or in another species” (pp. 652–653). “Lots of animals display building blocks of empathic states” (p. 655) for clear evolutionary reasons. Each species is endowed with the building blocks of empathy that conferred a fitness advantage in its ancestral environment. De Waal expounds:
Evolutionary theory postulates that altruistic behavior evolved for the return-benefits it bears the performer….Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another’s pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds….The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory. (de Waal 2008, p. 279)
If evolutionary biologists are right that the “dynamics” of our empathic responses are explained by kin selection and reciprocal altruism theory, the onus is on the realist to show why empathy is nevertheless a moral-truth-tracking emotion.
The contagiousness of revolutions
When one group within a society successfully fights for better treatment, this can inspire others to follow suit. The moral realist might argue that this is because when group A wins better treatment for itself, this can help group B recognize the objective truth that it is entitled to the same benefit. However, the debunker can argue that the pursuit of narrow self-interest is a better explanation of cases like this. If group A fights for its own rights, and B follows, why, according to the moral realist, did A fail to recognize that the same rights should have been extended to B in the first place? It seems that A was just concerned with its own interests. If A was motivated by self-interest, why suppose that B was motivated by recognition of moral truth?
Take the American Revolutionary War, which was a fight for narrow within-society liberalism, i.e., liberalism for the white population. The colonists sought to expel the British, who had treated them illiberally—who had exploited and disrespected them. They rallied around slogans expressing the value of freedom and their right to be treated as equals vis-à-vis other nations. The famous phrase “all men are created equal” appeared in the Declaration of Independence and was included with variations in some state constitutions as well. Although slaves had been left out of the revolution, the colonists’ success at securing their own rights inspired many slaves to demand the same rights for themselves. At a public reading of the Massachusetts Constitution in 1780, the slave Bett (later known as Elizabeth Freeman) heard the assertion that “All men are born free and equal.” She sued for her freedom in a Massachusetts court, claiming that slavery violated the state constitution, and won. (Presumably she won because there was already a strong abolitionist movement in Massachusetts that made the court receptive to her arguments.) In other states, the struggle for freedom did not, of course, succeed so quickly. But the American Revolution helped trigger fiercer resistance to slavery.
The French Revolution was fought under the slogan “liberty, equality, fraternity.” But when the revolutionary men took power, they decided that women were not to be treated as political or social equals, or to have the same liberties as men. Naturally, many women objected. The “Declaration of the Rights of Man and of the Citizen of 1789” was followed 2 years later by Olympe de Gouges’ “Declaration of the Rights of Woman and the Female Citizen.” A number of more or less militant feminist organizations were established, most notably the Society of Revolutionary Republican Women. Feminists were unsuccessful in their struggle for female equality, and in 1793 the National Convention banned women’s clubs and organizations altogether. Gouges was executed during the Reign of Terror. But the seeds for women’s equality had been planted. If men could demand equality then so could women, although it was many years before women gained the political clout to be successful.
Pacification

Huemer (2016) notes that there has been a significant, worldwide decline in violence over time. Although “many factors…may have contributed to this decline,” he says, “one is of particular interest here: there has been a dramatic shift in human values over history” (p. 1988). In 1300 CE, the murder rate in Europe was 35 per 100,000 people per year. Today it is 3 per 100,000. “Again, many factors may have contributed to the decline—among them changing attitudes toward murder. Men of the past perceived many more things as reasons for killing” (p. 1990).[1] Huemer notes that, in 1804, former US Treasury Secretary Alexander Hamilton was killed in a duel with Vice President Aaron Burr, fought over some insulting remarks made about Burr by Hamilton. “Such behavior on the part of respected men would be unthinkable today” (p. 1990).
An alternative explanation of the trend toward lower murder rates is that, although people have always strongly opposed murder, we have just become better at preventing it. It is true that our attitudes toward killing in some specific circumstances have changed—dueling, for example, has become counter-normative. But this may be because law enforcement has become more effective and it is no longer necessary (in most parts of the world) for people to settle grievances on their own. There is reason to think that no one ever enjoyed living in a society where killing was common, or where anyone could be pressured into dueling. The trend toward less killing and less dueling was a matter of people becoming better at protecting their own interests, not recognizing objective moral truth.
Evidence suggests that murder rates fell precipitously as soon as people made the transition from hunting and gathering to living in primitive states. For example, an analysis of pre-Columbian Native American skeletons suggests that city dwellers were much less likely to have died violently than hunter–gatherers. Only 2.7% of Incan, Aztec, and Mayan skeletons showed signs of violent trauma compared with 13.4% of hunter–gatherer skeletons (Pinker 2011, p. 51). We do not know the circumstances of these violent deaths, but it seems reasonable to surmise that the states had lower rates of what we would call “murder.” Why would murder be less common in states than in hunter–gatherer societies? Is it because people who make the transition from hunting and gathering to agriculture are suddenly more likely to discover the objective moral truth that murder is wrong?—or that it is wrong to kill people under a wider range of conditions? This seems unlikely. A better explanation is probably that states, which were controlled by powerful central governments, were simply better at keeping the peace.
Over time, governments have become increasingly better at preventing crime. This has sometimes resulted in murder rates falling dramatically without there being any meaningful change in values. Since the 1990s, the murder rate in the US has decreased by about 50%. No one would claim that Americans have adopted stronger antimurder values today than they had 25 years ago. The decline in murder is due primarily to better policing and social controls (cf. Pinker 2011, chapter 3). (To be clear, Huemer would not deny this. The point is that it is not necessary to invoke a change in values to explain even an enormous reduction in murder rates.)
When governments are not effective at keeping the peace and protecting people’s rights, so-called “cultures of honor” are liable to develop. People take the law into their own hands, not because they necessarily want to do so, but because they have to if they are to survive and flourish. Nisbett and Cohen (1996) argue that, when law enforcement is lax, herdsmen are particularly likely to develop cultures of honor because cattle can easily be appropriated. A culture of honor developed among the herders who settled in the American South—elements of that culture have persisted to the present day. Cultures of honor lie on a continuum, and as law enforcement becomes increasingly effective, most people readily cede responsibility for defending themselves to the government.
We can conclude that people do not and never did enjoy living in violent societies where things like murder and dueling are common. We do not need to appeal to the recognition of objective, mind-independent moral truth to explain why people support social changes that reduce violence.
When a type of stimulus is rarely encountered we tend to become more sensitive to it (see Mrug et al. 2016). As society becomes less violent, it is inevitable that our sensitivity will increase and we will respond to violence with more and more abhorrence. In Europe in the Middle Ages people would attend public executions where criminals were tortured to death—drawn and quartered or burned at the stake, sometimes for minor crimes. In later years the methods of execution became less gruesome, and the practice of torture was completely rejected. Instead of being publicly burned, criminals were publicly hanged. As people became more sensitive to violence they objected to public hangings, and hangings began to be conducted behind closed doors. Eventually the death penalty was completely abolished in many countries.
Between-society liberalism

As discussed, hunter–gatherers are generally low on between-society liberalism. This is reflected in their intense intergroup warfare. Ethnographic evidence suggests that warfare occurs at least once every two years in 65–70% of hunter–gatherer groups. Ninety percent “engage in war at least once a generation, and virtually all the rest report a cultural memory of war in the past” (Pinker 2011, p. 52).
Huemer (2016, p. 2000) argues that the “most simple and natural” explanation for why “human beings become increasingly reluctant to go to war” is “[b]ecause war is horrible”—i.e., it is an objective moral truth that war is horrible. An alternative explanation, however, is that war is horrible in the sense of being very unpleasant. Although it is possible to find prominent thinkers of the past celebrating war and martial values (Huemer provides quotes by Nietzsche, Henry Adams, and Emile Zola), most people who experience war do not like it. People go to war for a variety of reasons, and not only from necessity. But war itself is, in general, highly unpleasant. People do not like living in societies where they are constantly under threat of attack.
Even though people may dislike war, it can be difficult to establish peace even if peace is desired by all parties to a conflict. Hunter–gatherers can agree not to attack each other, but what will stop one side from reneging? If each side knows that the other is liable to attack in spite of a truce, they will be tempted to respond preemptively—thus they fall into the “Hobbesian trap.” Therefore, high levels of warfare may not tell us much about the actual preferences of the people involved. Yanomamö society, for example, is notoriously violent—around 42% of male deaths are due to warfare (Keeley 1996, Table 6.2). Do the Yanomamö live this way because they fail to recognize that engaging in constant, brutal violence is objectively morally wrong? It seems that their beliefs about moral truth have little to do with it. Rather, they simply fell deep into the Hobbesian trap. A Yanomamö warrior explained to an anthropologist: “We are tired of fighting. We don’t want to kill anymore. But the others are treacherous and cannot be trusted” (Wilson 1978/2004, pp. 119–120; see Pinker 2011, p. 46).
When warring groups are brought under the control of a central authority, it becomes possible for them to lay down their arms without fear of being wiped out by their enemies. Given the assurance that a central authority will punish aggressors, people are no longer tempted to launch preemptive attacks. As Pinker notes:
The various “paxes” that one reads about in history books—the Pax Romana, Islamica, Mongolica, Hispanica, Ottomana, Sinica, Britannica, Australiana (in New Guinea), Canadiana (in the Pacific Northwest), and Praetoriana (in South Africa)—refer to the reduction in raiding, feuding, and warfare in the territories brought under the control of an effective government. (Pinker 2011, p. 55)
In regard to Pax Australiana, an Auyana man of New Guinea reported: “life was better since the government had come” because “a man could now eat without looking over his shoulder and could leave his house in the morning to urinate without fear of being shot” (Thayer 2004, p. 140; see Pinker 2011, p. 55). When large states are established, fighting among small groups is forcibly suppressed to everyone’s benefit. Chiefs, kings, and emperors have every reason to maintain order—to prevent villages or gangs from raiding or exterminating their neighbors. The vast majority of people welcome such law enforcement.
In recent history, multiple forces have helped reduce warfare between states. States that are connected by trading relationships have less to gain—and more to lose—by fighting each other (Pinker 2011, p. 165). International organizations, particularly the United Nations, have taken on the role of a world government, suppressing belligerent or so-called “rogue” states. Perhaps most important, though, was the development of nuclear weapons. Large nation states armed with (or capable of developing) nuclear weapons cannot seriously fight each other without risking their mutual destruction. Serious, direct military confrontations between major powers ended abruptly with the development of nuclear weapons.
The abandonment of warfare in many parts of the world has, predictably, been accompanied by a rejection of martial values—values that no longer serve any purpose. In order to explain the adoption of more liberal values with respect to war, we do not need to appeal to people’s recognition of the moral truth that war is horrible. We need only assume that, in Mackie’s words, the values that prevail in different societies “reflect ways of life” (Mackie 1977, p. 37).
Conclusion

This paper has provided a naturalistic explanation for cross-cultural convergence on liberalism. The alternative explanation is the liberal realist one, which says that people converged on liberalism because they recognized its objective correctness (Huemer 2016). If both realism and antirealism predict convergence then the fact of convergence per se does not support either metaethical view. However, if we determine that the naturalistic account is superior, it would suggest that our liberal beliefs were produced by non-truth-tracking processes and are therefore not likely to correspond to objective truth.
[1] The word “murder” by definition refers to killing that is counter-normative and is viewed negatively by the moral community in question. Presumably Huemer is referring to the various types of extrajudicial killing, about which people may have a range of positive or negative attitudes.
Boehm, C. (1999). Hierarchy in the forest: The evolution of egalitarian behavior. Cambridge, MA: Harvard University Press.
Boehm, C. (2012). Moral origins: The evolution of virtue, altruism, and shame. New York: Basic Books.
Bogardus, T. (2016). Only all naturalists should worry about only one evolutionary debunking argument. Ethics, 126, 636–661.
Brink, D. O. (1989). Moral realism and the foundation of ethics. Cambridge, UK: Cambridge University Press.
Buchanan, A., & Powell, R. (2016). Toward a naturalistic theory of moral progress. Ethics, 126, 983–1014.
Buchanan, A., & Powell, R. (2018). The evolution of moral progress: A biocultural theory. New York: Oxford University Press.
Cashdan, E. A. (1980). Egalitarianism among hunters and gatherers. American Anthropologist, 82, 116–120.
Clarke-Doane, J. (2015). Justification and explanation in mathematics and morality. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 10, pp. 80–103). Oxford: Oxford University Press.
de Waal, F. B. M. (1996). Good natured: The origins of right and wrong in humans and other animals. Cambridge, MA: Harvard University Press.
de Waal, F. B. M. (2008). Putting the altruism back into altruism: The evolution of empathy. Annual Review of Psychology, 59, 279–300.
Diamond, J. (1987). The worst mistake in the history of the human race. Discover, 8(5), 64–66.
Du Bois, W. E. B. (1986). The souls of black folk. In N. Huggins (Ed.), Writings (pp. 357–547). New York: Library of America.
Goodall, J. (1986). The chimpanzees of Gombe: Patterns of behavior. Cambridge, MA: Harvard University Press.
Harman, G. (1977). The nature of morality: An introduction to ethics. New York: Oxford University Press.
Hayden, B., Deal, M., Cannon, A., & Casey, J. (1986). Ecological determinants of women’s status among hunter/gatherers. Human Evolution, 1, 449–473.
Hopster, J. (2019). Explaining historical moral convergence: The empirical case against realist intuitionism. Philosophical Studies. https://doi.org/10.1007/s11098-019-01251-x.
Huemer, M. (2016). A liberal realist answer to debunking skeptics: The empirical case for realism. Philosophical Studies, 173, 1983–2010.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press.
Kahane, G. (2011). Evolutionary debunking arguments. Noûs, 45, 103–125.
Keeley, L. H. (1996). War before civilization: The myth of the peaceful savage. New York: Oxford University Press.
Knauft, B. M. (1991). Violence and sociality in human evolution. Current Anthropology, 32, 391–409.
Mackie, J. L. (1977). Ethics: Inventing right and wrong. New York: Penguin Books.
Marshall, C. (2018). Compassionate moral realism. Oxford: Oxford University Press.
Mrug, S., Madan, A., & Windle, M. (2016). Emotional desensitization to violence contributes to adolescents’ violent behavior. Journal of Abnormal Child Psychology, 44, 75–86.
Nichols, S. (2002). On the genealogy of norms: A case for the role of emotion in cultural evolution. Philosophy of Science, 69, 234–255.
Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment. Oxford: Oxford University Press.
Nichols, S. (2014). Process debunking and ethics. Ethics, 124, 727–749.
Nisbett, R. E., & Cohen, D. (1996). Culture of honor: The psychology of violence in the South. Boulder, CO: Westview Press.
Nozick, R. (1981). Philosophical explanations. Cambridge, MA: Harvard University Press.
Parfit, D. (2011). On what matters (Vol. 2). Oxford: Oxford University Press.
Pinker, S. (2011). The better angels of our nature: Why violence has declined. New York: Viking.
Railton, P. (1986). Moral realism. The Philosophical Review, 95, 163–207.
Rasmussen Reports. (2017). Most see U.S. foreign aid as a bad deal for America. Retrieved July 14, 2019 from http://www.rasmussenreports.com/public_content/politics/general_politics/march_2017/most_see_u_s_foreign_aid_as_a_bad_deal_for_america.
Ruse, M., & Wilson, E. O. (1986). Moral philosophy as applied science. Philosophy, 61, 173–192.
Sapolsky, R. M. (2017). Behave: The biology of humans at our best and worst. New York: Penguin Press.
Sauer, H. (2018). Debunking arguments in ethics. Cambridge, UK: Cambridge University Press.
Service, E. R. (1975). Origins of the state and civilization: The process of cultural evolution. New York: Norton.
Sida. (2019). Development cooperation budget. Retrieved July 14, 2019 from https://www.sida.se/English/About-us/Budget/.
Singer, P. (1981). The expanding circle: Ethics and sociobiology. New York: Farrar, Straus and Giroux.
Smith, M. (1994). The moral problem. Oxford: Blackwell.
Stowe, H. B. (1852). Uncle Tom’s cabin; or, life among the lowly (Vol. 1). Boston: John P. Jewett.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127, 109–166.
Tersman, F. (2006). Moral disagreement. Cambridge, UK: Cambridge University Press.
Thayer, B. A. (2004). Darwin and international relations: On the evolutionary origins of war and ethnic conflict. Lexington, KY: University Press of Kentucky.
Turchin, P. (2009). A theory for formation of large empires. Journal of Global History, 4, 191–217.
Turchin, P. (2010). Warfare and the evolution of social complexity: A multilevel-selection approach. Structure and Dynamics, 4, 1–37.
Turchin, P., & Gavrilets, S. (2009). Evolution of complex hierarchical societies. Social Evolution & History, 8, 167–198.
Turiel, E., Killen, M., & Helwig, C. C. (1987). Morality: Its structure, functions, and vagaries. In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 155–244). Chicago: University of Chicago Press.
Vavova, K. (2018). Irrelevant influences. Philosophy and Phenomenological Research, 96, 134–152.
Vogel, J. (1987). Tracking, closure, and inductive knowledge. In S. Luper-Foy (Ed.), The possibility of knowledge: Nozick and his critics (pp. 197–215). Totowa, New Jersey: Rowman & Littlefield.
White, R. (2010). You just believe that because... Philosophical Perspectives, 24, 573–615.
Wilson, E. O. (2004). On human nature: With a new preface. Cambridge, MA: Harvard University Press.
I am especially grateful to Guy Kahane and Andreas Mogensen for extensive feedback on multiple drafts of this paper. I received very helpful comments from Maximilian Kiener, Neven Sesardić, audiences at the University of Oxford, and two anonymous reviewers.
Cofnas, N. A debunking explanation for moral progress. Philos Stud 177, 3171–3191 (2020). https://doi.org/10.1007/s11098-019-01365-2
Keywords: Evolutionary debunking arguments · Moral realism · Argument from disagreement · Moral progress