Self-awareness and free will are illusory—or at least we’re less free and less aware than we think, according to Incognito.
Freud argued that a child’s feeling of self-omnipotence, of total self-awareness, was just one of the many youthful follies we must grow out of. But the belief in our ability to access, assess, and control what’s going on in our brain isn’t unique to children. Self-awareness is a seductive illusion: some early psychologists trained themselves to introspect, believing that by attending closely to their thoughts and perceptions, they could unlock how the mind works. Introspection was mothballed as a scientific approach—see here for more—but the underlying idea persists: it really seems like we’re in control of our own brain, thoughts, and actions.
Eagleman punctures that illusion through two organizing metaphors for consciousness and free will. First, he says, consciousness is like watching the nightly news: a summary of the major goings-on in the depths of our cortices, a bird’s-eye view of some fraction of brain activity. Second, free will is like the rider on the elephant of behavior: we can steer it, but can’t totally control it. Consider the following points:
Lies perceptions tell us. Most people have seen optical illusions, which seem rare and notable, but illusions aren’t limited to vision or a scientist’s sleight of hand. Take time, for example. It seems straightforward, but our perception of time is not absolute. If a person repeatedly presses a button that turns on a light after a short delay, and the delay is then removed, they can be fooled into thinking the light came on before they pressed the button. Time seems to slow down when we’re falling or sick or threatened, and speeds up as we age. There’s even evidence that our conscious experience of vision lags about a tenth of a second behind reality, suggesting that we’re never really seeing things in real time, as they happen (see the bizarre and confusing flash-lag effect).
Why does it matter if perception isn’t totally accurate? There’s an intuitive folk-psychological belief that the goal of perception is to passively replicate the outside world, as though vision were akin to a high-speed, high-resolution camera. To understand perceptual errors like optical illusions, though, it’s crucial to understand that perception is not passive—we do not “record” the world, interrogate it for meaning, then act. Rather, perception and action are inextricably bound; perception is what makes actions (like navigating and interacting with the outside world) possible. Perception is part of action, not a precursor to it. This concept is fundamental to ecological psychology and embodied cognition.
For an example, think about an outfielder catching a fly ball. Intuitively, we might believe—and some cognitive psychologists might agree—that tracking a batted ball is a complex and incredibly rapid ballet of perceiving objects, estimating velocities, and performing rapid calculations of trajectories, flight paths, routes to run, and split-second coordination of body and limbs. But it turns out no prediction is necessary: outfielders can catch a fly ball by following a path that turns the curved flight of the ball into a perceptual straight line. No millisecond physics required: this path puts the fielder on course to intercept the ball, allowing them to catch it. It’s also, as it turns out, how some birds of prey hunt.
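The heuristic above can be sketched numerically. Below is a toy simulation of my own (invented numbers, not from the book), for the simplest case of a ball hit straight at the fielder: if the fielder closes the remaining gap at a constant rate, meeting the ball exactly where and when it lands, the tangent of the ball’s elevation angle rises at a perfectly constant rate. The heuristic works this logic in reverse: run so that the optical rise stays linear, and you arrive at the catch point.

```python
# Toy check of the fly-ball heuristic (my own illustration; numbers
# are invented). Ball hit straight at the fielder: closing the gap at
# a constant rate, so fielder and ball meet at landing time, makes
# tan(elevation angle) rise linearly -- zero "optical acceleration".

g = 9.8           # gravity (m/s^2)
vy = 19.6         # ball's initial vertical speed (m/s)
T = 2 * vy / g    # flight time: ball lands at t = 4.0 s
D0 = 30.0         # initial fielder-to-ball horizontal gap (m)

dt = 0.01
tans = []
for i in range(1, int(T / dt)):          # interior of the flight
    t = i * dt
    height = vy * t - 0.5 * g * t * t    # ball's height at time t
    gap = D0 * (1 - t / T)               # gap closed at a constant rate
    tans.append(height / gap)            # tan(elevation) the fielder sees

diffs = [b - a for a, b in zip(tans, tans[1:])]
spread = max(diffs) - min(diffs)         # deviation from perfect linearity
print(spread < 1e-6)                     # True: the optical rise is linear
print(round(diffs[0] / dt, 4))           # slope = g*T/(2*D0) = 0.6533
```

The point is that a single perceptual quantity, held to a simple pattern, substitutes for the whole trajectory computation.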
Perception is an integral part of how we interact with the world, rather than a passive process of “representing” the world. Seen from that vantage point, it is clear why we cannot trust our perceptions (or, more precisely, our interrogations of our perceptions) to be veridical, or to represent the actual processes by which the brain guides behavior.
Our implicit egotism. Humans are so routinely and fundamentally self-absorbed that they don’t even have to think about being narcissistic; it just happens. Subconscious egotism affects even major life choices: spouses, for example, are more likely to share a first initial than would be expected by chance—something LBJ had sussed out decades ago. We like people more when we’re told they share our birthday. And it gets more ludicrous still: people born on the 2nd day of the month are statistically overrepresented in towns like Two Rivers, Twin Lakes, and the like, and people named Dennis are overrepresented among dentists (people named Proctor are overrepresented among…). We seem to like, endorse, and seek out what’s familiar to us, usually without ever realizing we’re doing it.
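As an aside on what “statistically overrepresented” means here: the standard check is a one-sided binomial test against the population base rate. A minimal sketch with invented numbers (not the actual study data):

```python
def binom_sf(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), k >= 1, via an iterative
    pmf recurrence (avoids enormous factorials)."""
    pmf = (1 - p) ** n                        # P(X = 0)
    cdf = pmf
    for i in range(1, k):
        pmf *= (n - i + 1) / i * p / (1 - p)  # P(X = i) from P(X = i-1)
        cdf += pmf
    return 1 - cdf                            # 1 - P(X <= k-1)

# Hypothetical numbers, NOT the study's real data:
n_dentists = 10_000    # dentists sampled
base_rate = 0.004      # share of people named Dennis in the population
observed = 60          # Dennises found among the dentists

expected = n_dentists * base_rate             # expected by chance
p_value = binom_sf(n_dentists, observed, base_rate)
print(round(expected))                        # 40 expected by chance
print(p_value < 0.01)                         # True: unlikely to be luck
```

With these made-up numbers, 60 observed Dennises against 40 expected yields a tail probability well under 1%—the kind of result that licenses the word “overrepresented.”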
• • •
To sum up: perceptions are misleading, even major decisions can be affected by subconscious trifles, and we’re less consciously “in control” of our actions than we think. There’s a scene in The Silence of the Lambs where Clarice, belittled and insulted by Lecter, asks him if he’s “strong enough to point that high-powered perception” at himself—implying that he won’t like what he sees. But maybe Lecter refuses because he’s clever enough to realize that he simply can’t analyze himself.
The (non)existence of free will is a millennia-old debate and one of the fundamental questions of philosophy. It may not even be answerable in any meaningful way, and so Eagleman cleverly sidesteps it: whether or not free will exists, our conscious perception of having total free will is an illusion.
He’s almost certainly right: empirical evidence for the tenuous nature of free will is actually decades old. Back in the early 1980s, Benjamin Libet asked people to press a button and use a nearby clock to note when they first consciously intended to press it. Not surprisingly, people reported their intent to act about 200 milliseconds before they actually pressed the button. More shocking, though, is that a consistent, identifiable pattern of electrical activity in the brain preceded the button press by about 500 milliseconds. In other words, the “decision” to press the button was put in motion by the brain roughly 300 milliseconds before people were consciously aware of having made it.
• • •
If decisions are made before we’re aware of our intention to make them, that does not paint a promising picture of our ability to know and control our behavior. That’s where Eagleman is headed with all this: reckoning with the fallibility of free will has deep consequences. If free will and self-awareness are tenuous, how can we justify structuring our behavior, social interactions, and social institutions—e.g., the criminal justice system—as though they are not?
The American criminal justice system relies on the assignment of blame. People can be found not guilty or receive lesser sentences if “we” as a society believe they are not at fault or in control of their actions, whether due to mental or physical illness or some external mitigating circumstance. And yet the assignment of blame assumes that we are all free to make choices. Can “blameworthiness” be reconciled with the realization that we are merely riders on the elephant, our experience of free will and control at least partly an illusion?
Meanwhile, brain-based evidence is being used to exonerate defendants or mitigate punishments, sometimes with unexpected consequences. For example, “experts” can run a brain scan on a defendant, find some abnormality, diagnose them with Condition X, and argue that due to said brain abnormality, they are not responsible for their actions. Hence, they can’t be blamed. At first glance, this seems reasonable, and it sometimes can be—brain tumors can cause almost any kind of behavior. But there’s flawed reasoning at the very core of the idea: if we assume that the brain underlies behavior, then there will always be a brain difference between someone who commits a crime and someone who doesn’t. It may be only a single neuron, but it has to be there, or the behaviors would not have differed. Follow this logic to its conclusion: with suitably precise methods for mapping the brain, every defendant could be exculpated by brain evidence. If so—and this is the crux of Eagleman’s argument—then brain-based exculpation hinges not on whether someone was “in control” of their actions, but merely on whether experts, at this exact moment in time, can identify or classify some blip on an MRI that (partially) explains those actions.
In other words, any brain-based distinctions we’re drawing right now—however well-intentioned and scientifically valid—are essentially arbitrary. Years from now, they’ll look as backwards as Melvin Belli’s “psychomotor epilepsy” defense of Jack Ruby or the Twinkie defense looks to modern eyes, even as more precise measurements provide new avenues for exculpation. Ultimately, all this points to a conclusion: brain science—and in particular, the knowledge that free will is not absolute—cannot coexist with a justice system based on blameworthiness, and that incompatibility will likely only grow.
• • •
Up to this point, Eagleman has presented a compelling argument, and he should have just stopped there. As a brain scientist, he predictably runs off the rails when describing his solution: an “evidence-based” justice system in which the goal is neither punishment nor rehabilitation, but the reduction of future crimes. For example, he says, actuarial-style tables are good predictors of whether people who commit certain crimes will recidivate, so they could be used for sentencing. Trading on the modern craze for neuroplasticity, he describes cognitive training methods that could be used to improve impulse control (thereby hypothetically reducing criminal behavior, and sounding like something out of a dystopian novel). In short, sentencing would be “individualized” and based on empirical evidence, with the goal of reducing crime by whatever means, be that prison, training, rehabilitation, or otherwise.
I agree that a justice system based on vengeance, punishment, and archaic, arbitrary, criminalizing morality is dehumanizing and morally backwards; the US justice system is so deeply and fundamentally flawed and destructive that it could hardly be made worse. But what he’s suggesting is also badly flawed, and it’s important to be clear about that, because the argument has a veneer of calm, collected rationality that can be quite seductive.
Most importantly, it’s only concerned with sentencing. “Evidence-based” sentences could reduce the number of people jailed for drug crimes, and thereby reduce some of the long-term life consequences of being convicted of a crime. But that won’t undo racism embedded in arrest rates, convictions, or jury composition. And in fact, initial forays into evidence-based sentencing indicate that rather than being an “unbiased” policy, it further entrenches existing discriminatory practices, making it not so much a solution as a fresh coat of paint on a bad system.
Less importantly, it’s logically two-faced. Eagleman points out flaws in modern brain-based evidence used for exoneration, but then argues that we should rely on other “modern scientific evidence” to improve sentencing. How are we meant to adjudicate which kinds of evidence are good and which are bad?
Two-thirds of this book is very good, providing a hugely important framework for challenging folk intuitions about how the brain, free will, and self-awareness work, and about the consequences of those illusions. Don’t let the last third ruin it.