Your morals are bad and you should feel bad (not really): Jonathan Haidt’s The Righteous Mind.
To study a fuzzy, unwieldy topic like moral reasoning you have to start with a definition. How do morals differ from suggestions, standards, conventions, guidelines, rules, and laws? One common intuition is that morals are unique because they are all about reducing harm. American children, for example, make meaningful distinctions between rules that reduce harm (e.g., “don’t kill”) and rules that are simply social conventions (e.g., acceptable clothing).
But harm reduction is both a red herring and a wild goose chase, because drawing that distinction between harm-reducing rules and social conventions is unique to American children. American culture is individualistic, and the American sense of “self” is clearly bounded and delineated. In contrast, many other cultures are sociocentric, where the “self” is something fuzzier, more ambiguous, and more collective—they see themselves not as isolated individuals but as part of a larger cultural superorganism. And when you ask kids in sociocentric cultures, they don’t distinguish between harm-reducing rules and “arbitrary” social conventions: to them, everything is moral.
In fact, Americans are so wedded to the belief that “morals = harm reduction” that they can be morally dumbfounded when something violates a taboo but causes no harm. Most Americans agree that it’s wrong to eat the family dog after it’s run over, but can’t muster an explanation for why, and reading transcripts of people trying to do so is The Office-level embarrassment porn. Clearly, there is some deep, lizard-brain “this is wrong” flare being fired off, but when the base assumption that immoral acts must cause harm isn’t met, they’re left unable to explain why the act is wrong.
*Americans in general are psychologically weird. Most psychology studies test people from Western, educated, industrialized, rich, and democratic (WEIRD) societies, which raises questions about whether their findings are broadly generalizable.
• • •
Haidt uses moral dumbfounding and cultural differences to isolate two important aspects of moral reasoning:
1. Emotions precede rationalizations. Moral decisions are made rapidly, subconsciously, and via emotional processing. Only after that “quick and dirty” analysis does conscious reasoning come online to generate explanations—even if it feels like it, we don’t actually weigh cold, hard facts to reach a conclusion. Instead, the tail wags the dog; the decision comes first and the rationalization second (this is probably not unique to morality: in Thinking, Fast and Slow, Kahneman argues that most decisions are made that way). Moral dumbfounding happens when people are unable to generate a plausible rationalization for that immediate emotional response.
2. Morals are group-centered, not self-centered. Humans evolved not only as individuals, but in groups, tribes, and societies, and we must take a group-level perspective on moral reasoning.
For an example, consider the purpose of religion. Religious communes are about 6x more likely to survive 20 years than secular communes (40% vs. 6%). One factor in that survival rate is that the more sacrifices a commune requires from its members (e.g., abstaining from food or drink), the longer it tends to last. The specific rules and sacrifices don’t matter, as long as there are rules. Thus, the purpose of religion—and perhaps morals in general—may be to “manufacture” binding ties to your neighbors, who you would otherwise have no reason to care about or feel invested in (unlike, say, your family). Even when the rules seem arbitrary, religion and morality can generate “groupishness,” acting as a subconscious reminder that “we’re on the same team and play by the same rules and make the same sacrifices.” People in sociocentric cultures more easily see, or more easily accept, that even rules which don’t seem harm-reducing to individuals may still affect the group.
Putting these ideas together, we get Haidt’s definition of morality: “moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible” (emphasis mine).
• • •
Having defined what constitutes a “moral”, Haidt turns to the six universal “moral foundations” that underlie moral judgments and moral rules:
- Care/Harm: A desire to reduce harm or to provide comfort, which may have evolved from the need to care for offspring that can’t fend for themselves (and, just maybe, is what makes baby animals so cute).
- Fairness/Cheating: A desire for fair outcomes (on a personal level), which may have evolved from the implicit requirements of the social contract and reciprocal altruism.
- Loyalty/Betrayal: Similar to fairness, but at the level of families, groups, or tribes. May have evolved from the need to form group coalitions.
- Authority/Subversion: Obedience to authority, which may have evolved from primal social hierarchies (e.g., the “alpha male”), which only work if members don’t subvert them. Cool side note: this moral foundation may be the reason some languages ended up with formal and informal verb conjugations.
- Sanctity/Degradation: A desire for cleanliness (particularly of food and water). This foundation probably began as a response to food contamination but has been “recruited” for anything that triggers disgust, and it likely underlies the mistreatment of racial, gender, national, religious, or political “outgroup” members.
- Liberty/Oppression: A desire for liberty, broadly construed. Primarily useful for distinguishing liberals from conservatives.
By Haidt’s theory, any moral rule or judgment “taps” one or more of these six foundations. Different moralities exist because individuals emphasize different foundations: your neighbor may care about “loyalty” violations but be relatively indifferent to obedience to authority, whereas your sister might emphasize fairness but not liberty. Political differences, too, may fall out of how the moral foundations are weighted: Haidt’s research suggests that liberals weight the care and fairness foundations most heavily, with the others a distant second, while conservatives have a flatter moral profile in which all six foundations are weighted about equally. Their divergent political beliefs follow from those differently valued foundations.*
*One issue with this simple distinction is that applications of the “universal” foundations are fungible. Haidt notes, for example, that both supporters and opponents of affirmative action can justify their positions by appealing to the fairness foundation: the pro side emphasizes equality of outcome while the con side emphasizes equality of opportunity. That makes sense, but the problem is that there’s no a priori reason to predict that outcome: shouldn’t divergent beliefs and behaviors require different foundations? Surely we want a theory that predicts those divergences, rather than one that explains them away after the fact.
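The weighting idea can be caricatured as a toy model: treat a person’s morality as a vector of weights over the six foundations and a transgression as a vector of foundation “activations,” with the strength of the “this is wrong” flare given by their overlap. To be clear, everything below—the profiles, the numbers, the scenarios—is my own illustrative invention, not Haidt’s data or method:

```python
# Toy sketch of "moral judgments tap weighted foundations."
# All weights and activations are made up for illustration.

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

# Hypothetical profiles: liberals weight care/fairness heavily,
# conservatives weight all six foundations roughly equally.
profiles = {
    "liberal": {"care": 0.9, "fairness": 0.8, "loyalty": 0.2,
                "authority": 0.1, "sanctity": 0.1, "liberty": 0.4},
    "conservative": {f: 0.5 for f in FOUNDATIONS},
}

def moral_response(profile, violation):
    """Strength of the emotional alarm: the overlap between a person's
    foundation weights and a violation's foundation activations."""
    return sum(profile[f] * violation.get(f, 0.0) for f in FOUNDATIONS)

# A taboo-but-harmless act (eating the family dog) taps sanctity, not care,
# so it registers much more strongly on the flatter conservative profile.
eating_dog = {"sanctity": 1.0}

for name, profile in profiles.items():
    print(name, round(moral_response(profile, eating_dog), 2))
```

The affirmative-action problem shows up immediately in a model like this: both sides’ judgments “tap fairness,” so the same activation vector has to produce opposite verdicts, which the model can’t do without extra machinery.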
• • •
One of Haidt’s primary points is that the experience of morality is universal, even if the beliefs themselves aren’t. In other words, the “other side” isn’t amoral, but just operating according to a different morality (a very hands-across-america, “let’s come together” sort of argument). He seems to want this idea of universality to be revelatory, but it strikes me as rather intuitive. Aren’t political differences obnoxious and intractable precisely because we know our opponents are invoking a deeply held but repellent-to-us morality (demented and sad, but a morality)? Moral debates are always going to have more juice than deciding where to get brunch because in the former case, both sides actually care.
But on a more concrete level, the cognitive experience of morality is also universal. Moral transgressions set off an automatic (and likely subconscious) emotional alarm, and it’s that “alarm” that’s universal. From the perspective of what’s happening in the brain, the specific content of the rules triggering that response is, essentially, window dressing, much as we speak different languages atop the same underlying neural architecture. In the brain (but not in the world), it doesn’t matter whether the rule is “don’t kill,” “don’t eat duck meat,” or “don’t touch yourself at night”: they all generate the same emotional response.
That perspective begins to clear up why we can’t apply “logic” to morals or debate morality with the intent of changing minds. By the time we come up with the “reasons” for a moral rule to exist, that visceral response has already happened and it’s too late.
We are left with the million-dollar question: how does a “rule” get linked to that emotional response and transmute into a moral? And once it has, can it be undone?
Therein lies my main criticism: Haidt rides his Skycycle up to the edge of Snake River Canyon but doesn’t make the jump. He defines morals, gives us a purpose for them (cooperation), and outlines how moral reasoning operates, but never actually delivers on the book’s subtitle: why good people are divided by politics and religion.
Saying that we differ because we have different moral foundations doesn’t actually answer the question, it just pushes it a layer deeper. Why do I have different foundations than my neighbor? Why do I think X is immoral but they don’t? When and how do “rules” become morals? How are morals transmitted? If morality is like language, in that the morals themselves aren’t hardwired but the mental machinery to learn them is, could there be a critical period for learning morals, just like there is for language? How flexible and contextual are my moral beliefs?
Admittedly, these are somewhat unfair questions, because moral psychology is in its infancy. Like emotion, morals were seen for decades as fuzzy, unquantifiable, and unstudyable. The descriptive work Haidt does is fine as a first step, but it’s not revelatory, it’s not explanatory, and it leaves the most compelling questions about the nature of morality just as open as they were before.