Self-awareness and free will are illusory—or at least, according to Incognito, we’re less free and less aware than we think. A century ago, Freud argued that children’s feeling of self-omnipotence was a youthful folly to be grown out of. But the belief in our total ability to access, assess, and control what our brain is doing isn’t unique to children. So seductive is self-awareness that some early psychologists trained themselves to introspect, in the belief that by paying close attention to their own thoughts and perceptions, they could unlock how the mind works. And although introspection is mostly mothballed as a scientific approach, the underlying idea lives on: it really seems like we’re in control of our own brains, thoughts, and actions.
Eagleman aims to puncture that illusion through two organizing metaphors for consciousness and free will. First, he says, consciousness is like watching the nightly news: a summary of the major goings-on in the depths of our cortices, a sort of bird’s-eye view of brain activity. Second, free will is like the rider on the elephant of behavior: we can steer it, but can’t totally control it. Consider the following points:
1. Lies perceptions tell us. Probably most people have seen a visual illusion of some sort. Illusions often seem rare and notable, but they’re more than the exception—and they aren’t limited to vision or a scientist’s sleight of hand. Time, for example, seems straightforward, but our perception of it is not absolute. If a person repeatedly presses a button that turns on a light after a short delay, and that delay is then removed, they can be fooled into thinking the light came on before they pressed the button. Time seems to slow down when we’re falling or sick or threatened, and to speed up as we age. There’s even evidence that our conscious experience of vision lags about a tenth of a second behind reality, suggesting that we’re never really seeing things in real time, as they happen (see the bizarre and confusing flash-lag effect).
An intuitive folk-psychology belief is that the goal of perception is to passively replicate the outside world, like a high-speed, high-resolution camera. But to understand why our perceptions are error-prone, it’s critical to understand that perception is not a passive sequence in which we “perceive” something, interrogate it for meaning, and then act. Perception is an integrated part of our navigation of and interaction with the outside world—it’s part of action, not a precursor to it (a concept fundamental to ecological psychology and embodied cognition).
For an example, think about how an outfielder catches a fly ball. We might believe that tracking a batted ball requires rapid, complex calculations: identifying the ball, estimating its velocity, acceleration, and trajectory millisecond-by-millisecond, and then calculating the landing spot and the best way there. But no prediction is necessary: outfielders instead follow the path that turns the curved flight of the ball into a (perceptual) straight line, an approach that puts them on an “intersection path” with the ball while requiring few real-time computations or predictions. It’s more like an action plan: follow this path until you meet the ball (which is also how some birds of prey hunt). When perception is treated as something that actively guides how we interact with the world rather than something that merely “represents” the world, illusions lose some of their luster.
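The “straight-line” strategy can be sketched with a little projectile math. This is a toy, drag-free sketch of my own, not anything from the book, and the launch numbers (`vx`, `vy`, `start_x`) are arbitrary illustrations. The point it demonstrates: a fielder who runs so that the tangent of the ball’s elevation angle grows at a constant rate arrives exactly where the ball comes down, with no trajectory prediction anywhere in the rule.

```python
G = 9.81  # gravity in m/s^2; air drag is ignored in this toy model

def ball(t, vx, vy):
    """Projectile position (horizontal x, height y) at time t."""
    return vx * t, vy * t - 0.5 * G * t * t

def fielder_x(t, start_x, vx, vy):
    """Fielder position if they run so that tan(elevation angle to the
    ball) grows at a constant rate k. Choosing k = vy / start_x makes
    the path begin at start_x; the ball then appears to rise along a
    straight optical line for the whole flight."""
    k = vy / start_x
    return vx * t + (vy - 0.5 * G * t) / k

vx, vy, start_x = 18.0, 22.0, 55.0  # illustrative launch and start values
t_land = 2 * vy / G                 # time when the ball returns to ground
landing_spot = vx * t_land

# The constant-rate path ends exactly at the landing spot at t_land.
print(round(fielder_x(t_land, start_x, vx, vy), 2) == round(landing_spot, 2))

# And the optical angle really is linear: tan(alpha) / t stays constant.
for t in (1.0, 2.0, 3.0):
    bx, by = ball(t, vx, vy)
    print(round(by / (fielder_x(t, start_x, vx, vy) - bx) / t, 3))
```

The rule only needs a quantity the fielder can perceive directly (whether the ball’s apparent rise is speeding up or slowing down), which is why it reads as an action plan rather than a computation of the landing spot.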
2. Our implicit egotism. Humans are so routinely and fundamentally self-absorbed that they don’t even have to think about being narcissistic: it just happens. Subconscious egotism affects even major life choices: spouses, for example, are more likely to share a first initial than would be expected by chance (a fact LBJ was onto decades ago). We like people more when we’re told they share our birthday. And the truly ludicrous: people born on the 2nd day of the month are statistically overrepresented in towns like Two Rivers, Twin Lakes, and the like, and people named Dennis are overrepresented among dentists (people named Proctor are overrepresented among…). We seem to like, endorse, and seek out what’s familiar to us, usually without ever realizing we’re doing it.
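For the first-initial finding, the “expected by chance” baseline is simple arithmetic: the probability that two independently drawn initials match is the sum of the squared initial frequencies. A minimal sketch, using made-up numbers (the toy frequencies below are illustrative, not real name data):

```python
def chance_of_match(freqs):
    """Probability that two people drawn independently from this initial
    distribution share a first initial: the sum of squared frequencies."""
    return sum(p * p for p in freqs.values())

# A toy three-letter world, purely for illustration.
toy = {"A": 0.5, "B": 0.3, "C": 0.2}
print(round(chance_of_match(toy), 2))  # 0.25 + 0.09 + 0.04 -> 0.38

# If all 26 initials were equally common, the baseline would be 1/26.
uniform = {chr(c): 1 / 26 for c in range(ord("A"), ord("Z") + 1)}
print(round(chance_of_match(uniform), 4))
```

Skew in real name frequencies pushes the true baseline above 1/26, which is why studies of implicit egotism compare observed couples against this kind of chance rate rather than against zero.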
In short, perceptions are misleading, even major decisions can be affected by subconscious trifles, and we’re less consciously “in control” of our actions than we think. There’s a scene in The Silence of the Lambs where Clarice is belittled and insulted by Lecter, and asks him if he’s “strong enough to point that high-powered perception” at himself—implying that he won’t like what he sees. But maybe Lecter’s just smart enough to know that, really, he can’t analyze himself.
The (non)existence of free will is a millennia-old debate and one of the fundamental questions of philosophy. It may not even be answerable in any meaningful way, so I appreciate that Eagleman does an end-run around it: whether it exists or not, our conscious perception of free will is illusory. That’s not a novel view for brain scientists, a pessimistic lot when it comes to the idea of human decision making and self-control—but I wonder where the general public falls on it.
Empirical evidence for the uncertain nature of free will is decades old. In experiments reported in the early 1980s, Benjamin Libet asked people to press a button, and to use a nearby clock to note when they first consciously intended to press it. Not surprisingly, people reported their intent to act about 200 milliseconds (⅕ of a second) before they actually pressed the button. More shocking, though, is that an identifiable pattern of electrical activity in the brain (the “readiness potential”) preceded the button press by about 500 milliseconds. In other words, the “decision” to press the button was put in motion by the brain well before people were consciously aware of having made it. Again, we come back to Eagleman’s metaphor of consciousness as the nightly news: the bird’s-eye view of brain happenings.* If decisions are made before we’re aware of our intention, it does not paint a promising picture of our ability to know and control our behavior.
*One might question why consciousness and self-awareness exist at all, if the brain seems to run on autopilot most of the time. Eagleman’s answer is that it allows something like a “manual override” on automatic behaviors.
• • •
Here’s where Eagleman is headed with all this: if free will and self-awareness are not absolute—in fact are tenuous—then how can we reconcile modeling our behavior, social interactions, and social institutions (like the criminal justice system) on the belief that they are?
Decisions of guilt in the American criminal justice system rest on an assignment of blame. People might be found not guilty or receive lesser sentences if “we” collectively believe they are not at fault or not in control of their actions, whether due to mental illness, physical illness (the driver who crashes during a seizure), or mitigating circumstances. But here the rubber meets the road: any conception of “blame” for some criminal behaviors rests on the belief in our own free will to act and make choices, which he’s spent 150 pages or so convincing us is anything but clear-cut. Can “blameworthiness” ever be reconciled with the realization that we are always riders on the elephant, not just in special circumstances?
Brain-based evidence is parading into courtrooms, sometimes with unexpected consequences. Usually, this means that an “expert” uses a brain scan to identify an abnormality in a defendant, diagnose them with Condition X, and argue that they are not responsible for their actions. The problem here is that if we take a non-Cartesian-dualism view that the brain underlies behavior, then there will always be a brain difference between someone who commits a crime and someone who does not. It may be a single neuron, but it has to be there, or the behavior would not have been different. As brain scans become more precise and we continue to clarify the link between specific brain features and specific behaviors, we can easily imagine a future where, applying modern standards, every criminal could be exculpated by brain-based evidence. And if that’s the case—and this is the crux of Eagleman’s argument—then current knowledge and experts tell us only whether we, at this exact moment in time, have a name or classification for whatever brain blip shows up on an accused’s MRI. That information doesn’t tell us whether someone was “in control” of their actions—it only tells us that we’re currently able to identify a brain anomaly explaining those actions.
In other words, any brain-based distinctions we’re drawing right now—even if they are well intentioned and backed by up-to-date scientific evidence—are arbitrary and capricious. Years from now, they’ll look as backwards as Melvin Belli’s “psychomotor epilepsy” defense of Jack Ruby or the Twinkie defense looks to modern eyes. Brain science and “blameworthiness” will continue to diverge, becoming more and more incompatible as time marches on.
Eagleman’s solution is an “evidence-based” justice system in which the goal is neither punishment nor rehabilitation but the reduction of future crimes. For example, he says, actuarial-style tables are good predictors of whether people who commit certain crimes will recidivate, so they could be used for sentencing. Trading on the modern craze for neuroplasticity, he talks about cognitive training methods that could be used to improve impulse control (thereby hypothetically reducing criminal behavior, and sounding like something out of a dystopian novel). In short, sentencing would be totally “individualized” and based on empirical evidence, all to achieve the goal of crime reduction by whatever means, be that prison, training, rehabilitation, or other.
• • •
On the one hand, I agree that a justice system based on vengeance, punishment, and archaic, arbitrary, criminalizing morality is dehumanizing and morally backwards; moreover, the US justice system is so deeply and fundamentally flawed that it could hardly be made worse. On the other hand, while I recognize that this is not my area of expertise, Eagleman’s proposal raises two questions for me.
1) Doesn’t it only change sentencing? “Evidence-based” sentencing could conceivably reduce the number of people jailed for drug crimes and/or caught by mandatory minimums, or potentially reduce some of the long-term life consequences of a conviction. But even if fewer people end up in jail, that won’t undo embedded racism in arrest rates, convictions, or jury composition; evidence-based sentencing is like giving antibiotics to a patient who comes in with both cancer and tuberculosis: the tuberculosis might clear up, but we haven’t done anything about the cancer. On top of that, initial US forays into evidence-based sentencing suggest that, rather than being an “unbiased” policy, it further entrenches existing discriminatory practices, making it not a solution at all.
2) It’s logically two-faced. Eagleman begins by pointing out flaws in the modern brain evidence often used for the exculpation or exoneration of defendants. But he then argues that we should rely on other “modern scientific evidence” to improve sentencing and outcomes. I suppose the assumption is that brain-scan evidence is predictably unreliable, whereas other evidence might turn out to be wrong only in ways we can’t yet foresee; either way, I’m not clear on what separates “good” evidence from “bad.”
Eagleman never calls it a cure-all, but I’m not sure it cures much of anything. I agree that assigning “blame” as a foundational plank of the justice system is almost certainly incompatible with our knowledge of brain and behavior—I’m just not sure we collectively know what to do with that information yet.