righteous and smug

[Image: cover of The Righteous Mind]

What is the nature of morality? How do we develop a personal moral code, and how does that affect decisions we make?

Studying a fuzzy topic like moral reasoning requires a definition: how do morals differ from conventions, laws, and rules? A common intuition is that morals are unique because they are based on reducing harm. American children, for example, distinguish between rules that reduce harm (e.g., “don’t kill”) and rules that are just social conventions (e.g., acceptable clothing).

Though appealing, harm reduction is a red herring. Drawing a distinction between harm-reducing rules and social conventions is unique to American children. American culture is individualistic*, with a sense of “self” that’s clearly bounded and delineated. Many other cultures, in contrast, are sociocentric: the “self” is something fuzzier, more ambiguous, more collective. And when you ask kids in sociocentric cultures, they don’t distinguish between harm-reducing rules and “arbitrary” social conventions: to them, everything is moral.

In fact, Americans are so invested in “morals = harm reduction” that they can be morally dumbfounded when something violates a taboo but causes no harm. Most Americans think it’s wrong to eat the family dog after it dies, but can’t explain why once it’s pointed out that no one is harmed. The act triggers some kind of deep lizard-brain “this is wrong” reaction, but when the base assumption that immoral acts cause harm isn’t met, they can’t come up with an alternative justification. In contrast, people in sociocentric cultures are not morally dumbfounded by taboo violations that cause no harm.

*Americans in general are psychologically eccentric. The majority of psychology studies test Western, educated people from industrialized, rich, and democratic countries (so-called WEIRDs), raising questions about whether findings from those studies are broadly generalizable.

•     •     •

For Haidt, these cultural differences and the phenomenon of moral dumbfounding highlight two critical aspects of moral reasoning:

1. Emotions precede rationalizations. Moral decisions are made rapidly, subconsciously, and via emotional processing. Even if it feels like it, we don’t actually weigh cold, hard facts to reach a conclusion. Instead, the tail wags the dog: the decision comes first and the rationalization second (this is probably not unique to morality: in Thinking, Fast and Slow, Kahneman argues that most decisions are made that way). Moral dumbfounding happens when people can’t generate a plausible rationalization for that immediate emotional response.

2. Morals are group-centered, not self-centered. Humans evolved not only as individuals but in groups, tribes, and societies, and moral reasoning may require group-level, rather than individual-level, thinking.

Consider religion. Religious communes are about six times more likely than secular communes to survive 20 years (40% vs. 6%). One factor in that survival rate is that the more sacrifices a commune requires from its members (e.g., abstaining from certain food or drink), the longer it tends to last. The specific rules and sacrifices don’t matter, as long as there are rules. Thus, the purpose of religion (and perhaps of morals in general) may be to “manufacture” binding ties to your neighbors, whom you would otherwise have no reason to care about or feel invested in (unlike, say, your family). Even when the rules seem arbitrary, religion and morality can generate “groupishness,” acting as a subconscious reminder that “we’re on the same team and play by the same rules and make the same sacrifices.” People in sociocentric cultures more easily see, or more easily accept, that rules which don’t reduce harm to any individual may still serve the group.

Haidt’s definition of morality puts these ideas together: “moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible” (emphasis mine).

Having defined what constitutes a “moral,” Haidt next describes six universal “moral foundations” that underlie moral judgments and moral rules:

  • Care/Harm: A desire to reduce harm or to provide comfort, which may have evolved from the need to care for offspring that can’t fend for themselves.
  • Fairness/Cheating: A desire for fair outcomes at the personal level, which may have evolved from the implicit requirements of the social contract and reciprocal altruism.
  • Loyalty/Betrayal: Similar to fairness, but at the level of families, groups, or tribes; may have evolved from the need to form group coalitions.
  • Authority/Subversion: Obedience to authority, which may have evolved from primate social hierarchies (e.g., the “alpha male”), which only work if members don’t subvert them. Cool side note: this moral foundation may be why some languages ended up with formal and informal verb conjugations.
  • Sanctity/Degradation: A desire for cleanliness, particularly of food and water. This probably began as avoidance of contaminated food but has been “recruited” for anything that triggers disgust, and it probably underlies the mistreatment of racial, gender, national, religious, or political “outgroup” members.
  • Liberty/Oppression: A desire for liberty, broadly construed; primarily useful for distinguishing liberals from conservatives.

By Haidt’s theory, any moral rule or judgment “taps” one or more of these six foundations, and different moral codes exist because individuals emphasize the foundations differently: your neighbor may care about “loyalty” violations but be relatively indifferent to obedience to authority, whereas your sister might emphasize fairness but not liberty. Political differences, too, may fall out of how the foundations are weighted: Haidt’s research suggests that liberals weight the care and fairness foundations most heavily, with the others a distant second, while conservatives have a flatter moral profile in which each of the six foundations is weighted about equally. Their divergent political beliefs follow from those differently valued foundations.*
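To make the weighting idea concrete, here’s a toy sketch in Python. To be clear: this is my own illustration, not a model from the book, and every number in it is invented. It treats a person’s moral profile as a weight vector over the six foundations and a transgression as the degree to which it “taps” each foundation; the felt severity of a violation is just the weighted sum.

    # Toy model of moral-foundation weighting (illustrative only).
    # All profile weights and "tap" strengths below are invented numbers.

    FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

    # Hypothetical profiles: a "liberal" weights care/fairness heavily;
    # a "conservative" weights all six roughly equally (a flatter profile).
    liberal = {"care": 0.9, "fairness": 0.8, "loyalty": 0.2,
               "authority": 0.1, "sanctity": 0.1, "liberty": 0.5}
    conservative = {"care": 0.5, "fairness": 0.5, "loyalty": 0.5,
                    "authority": 0.5, "sanctity": 0.5, "liberty": 0.5}

    def severity(profile, taps):
        """How 'wrong' a transgression feels: weighted sum over foundations."""
        return sum(profile[f] * taps.get(f, 0.0) for f in FOUNDATIONS)

    # A harmless taboo violation that mostly taps sanctity
    # (e.g., eating the family dog after it dies).
    taboo = {"sanctity": 1.0}

    print(severity(liberal, taboo))       # 0.1 -> barely registers
    print(severity(conservative, taboo))  # 0.5 -> feels clearly wrong

On this toy model, the same sanctity-tapping act barely registers for the “liberal” profile but feels substantially wrong to the “conservative” one. That’s the shape of the claim: same foundations, different weights, different judgments.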

If that’s the case, a political opponent isn’t “amoral,” but operating according to a different moral calculus. Haidt seems to think this is revelatory, but is it really? Aren’t political and religious differences obnoxious and intractable precisely because we know our opponents are invoking a deeply held but repellent-to-us morality? We may call them “amoral,” but I’d be surprised if most people meant it literally.

*I think there’s a major issue with this: namely, the behavioral consequences of the moral foundations are fungible. Haidt claims that both supporters and opponents of affirmative action may appeal to the fairness foundation, with the pro side emphasizing equality of outcome and the con side equality of opportunity. The explanation is fine as far as it goes, but there’s no a priori reason to predict that outcome. Shouldn’t divergent beliefs and behaviors depend on different moral foundations? What predictive value does the theory have if appeals to the same moral foundation can produce opposing behavioral outcomes?

•     •     •

Haidt’s major point is simply that the experience of morality is universal, even if the beliefs themselves differ. When a moral transgression sets off some automatic emotional “alarm,” that response is the same for all of us. From the perspective of what’s happening in the brain, the specific content of the rules that trigger the response is irrelevant, much as we speak different languages while sharing the same underlying grammar and neural architecture. A noun is a noun in Spanish or English, and a moral rule is a moral rule, whether it’s “don’t kill” or “don’t touch yourself at night.” We can’t apply “logic” to morality or debate it: by the time we’re coming up with reasons for a moral rule, that visceral automatic response has already happened.


But this leaves us with a million-dollar question: how does a “rule” get linked to that emotional response, or a moral foundation, and transmute into a moral? And, as a side question: can that be undone?

[Image: Evel Knievel’s Snake River Canyon jump]

This is Haidt’s failing: he rides the Skycycle up to the edge of Snake River Canyon but doesn’t make the jump. He defines morals, gives them a purpose, and describes how moral reasoning operates, but he doesn’t actually deliver on the book’s subtitle: why good people are divided by politics and religion. Saying that we differ because we rely on different moral foundations is not actually an answer; it simply pushes the question a layer deeper: why do I have different moral foundations than my neighbor? Why do I think X is immoral when they don’t? When and how do “rules” become morals? How are morals transmitted? If morality is like language, in that the morals themselves aren’t hardwired but the mental machinery to learn them is, could there be a critical period for learning morals, just as there is for language? How flexible and contextual are my moral beliefs?

To be fair, moral psychology is a young science, so perhaps these are questions that simply haven’t been answered yet. But ultimately none of this is explanatory: even if we have a sort of taxonomy of morality, it doesn’t explain much of anything about how that taxonomy evolves.
