Technology might be an ultimately uncontrollable manifestation of ego and hubris, if Westworld and Jurassic Park taught me anything (what was Michael Crichton’s problem with theme parks, anyway?). Or, robots might take our jobs, thereby freeing us all to live utopian lives of total Maslowian self-actualization in a future of “fully automated luxury communism.” On the third hand, Americans spend about as much time on housework now as they did a century ago, despite advancements like ultra-suck vacuums, veg-o-matics, and frost-free refrigerators. So maybe we aren’t headed in either direction.
The Glass Cage wavers somewhere between an anti-automation polemic and a critique not of automation itself, but of our unquestioning acceptance of it. Why do we see automation as inevitable and as progress, and why do we treat its side effects as minor speed bumps and annoyances rather than fundamental flaws in the whole undertaking? Nicholas Carr roots around in automation’s dark underbelly, touching on economic issues (does automation eliminate jobs, or replace them with new ones?) and practical ones (how do we automate morality? Was it right for the robot in Robot & Frank to abet the aging cat burglar’s robbery attempt, just so he’d stick to his diet?).
But those concerns are secondary to his chief complaint: that automation has deleterious cognitive consequences—placing humanity in that eponymous mental glass cage, making us dumber because the world no longer challenges us. I’ll summarize his argument, then explain why a bad understanding of how the brain works underlies his misplaced concern.
• • •
As autopilot handles more and more of commercial flights, pilots have less and less to do. Consequently, they become “deskilled” at manual flight: studies in flight simulators show that a pilot’s manual-flight performance in emergencies is directly correlated with how recently they have flown a plane by hand. That’s not surprising, since virtually all skills atrophy with disuse. But it’s emblematic of two things that make automation a challenge: a) changes in skills and abilities extend beyond what gets automated, and b) we’re not good at predicting those consequences.
Most automation plans assume that automating some part of a task does not affect performance on other parts of the process. That belief is called the substitution myth, because it never works out that way. For example, the switch from paper to digital medical records sounds about as simple as possible, but it turns out that doctors reading handwritten charts use subconscious heuristics and cues that are lost on a computer screen (e.g., handwriting subtly tells the reader which specialists a patient has seen, and how many). Copying and pasting from existing charts is so easy that charts are soon filled with regurgitated, deindividuated boilerplate. That’s just one example, but we keep finding that there’s no way to “substitute” an automated task for a manual one with no downstream consequences: automation inevitably and unpredictably affects how we perform other parts of a task.
There are several explanations for why this happens. One is based on the Yerkes-Dodson law, which holds that people perform best when they are moderately engaged in a task of medium difficulty—not so easy they get bored and tune out, and not so hard they get stressed or panicked. Contrast that with a typical argument for automation (e.g., autopilot), which holds that automation should reduce “mental clutter,” allow people to focus more intently on what’s still done manually, and thereby perform better. That assumption sounds great in principle—after all, how could someone get worse when the job gets easier?—but in practice, reducing manual tasks to the point of triviality can lead to boredom and attention lapses rather than hyper-focus.
Automation can also influence our understanding of tasks. Computer-assistance programs for mammogram reading improve detection of simple cases but reduce detection of complex ones. Automatic traction control in modern cars gives a driver no incentive to know how to control a car without it. Helping novices while hurting experts turns out to be a common consequence of user-support programs. In fact, they may make it difficult for people to develop expertise in the first place, because the computer assistance gets in the way of forming a conceptual understanding of the task—people end up knowing what buttons to press at certain prompts, but not why they’re pressing them.
In short, when stuff gets automated, Carr says, we lose knowledge—such as how to fly a plane or read a mammogram—and we change how we interact with the world, for example by having less conceptual understanding of problems and tasks.
It’s possible to read these critiques and think that Carr’s finding problems not with automation itself but with badly done automation. For example, why not use different fonts or text colors on digitized medical records? He even talks about how “adaptive automation” (also called neuroergonomics) might reduce some unexpected consequences of automation by building systems that intentionally fail on occasion, with the aim of ensuring that the user stays engaged and attentive. To the extent that I agree with Carr, it’s not that “automation is bad”; it’s that it’s worth pondering why, several centuries into the “automation era,” we’re still not good at predicting the downstream consequences of even something as simple as paper-to-computer records.
But none of that makes automation a “glass cage.” Carr’s problem—and I think this is a common “folk psychology” intuition—is that he imagines the brain as something like a static knowledge warehouse; something where the “goal” is to learn as many new things as possible and forget as few things as possible. Knowledge is growth and forgetting is failure; thus “losing” knowledge—like how to fly a plane—means a step backwards for humankind.
But that view is wrong. Here’s why: the brain is an adaptive organ that interacts with and responds to demands from the outside world. It’s responsive and malleable. We constantly learn new things to meet current needs and—just as importantly—“unlearn” and forget old things that are now unnecessary or unfruitful (like, say, that throwing a tantrum is an effective way to get what you want). We don’t accrue knowledge for its own sake; from the brain’s “perspective” knowledge is useful only if it helps us interact with the world. Put another way, the proper metaphor for memory is not an ever-expanding information landfill, but a manicured tree where old and unneeded knowledge and skills are pruned away. More is not better, useful is better.
That distinction is subtle but important: Carr sees “loss” of knowledge as a cost of automation, but if we recognize that the brain is dynamic and adaptive, it’s easy to see that forgetting how to fly a plane manually is neither a cost, nor unique to automation. Rather, the “loss” of knowledge is a direct consequence of adaptability. Being able to adapt to new situations demands the flexibility to “unlearn” or disregard old things; new information must be able to displace old information. We don’t want to remember everything, because the longer it’s been since we’ve used a skill, the less likely it becomes that we’ll need it in the future. Forgetting needn’t be a “failure” of memory at all, but a strategy for managing information overload: one reason aging leads to a cognitive slowdown is that older adults have more accumulated knowledge to sift through to find an answer than young adults do.
So while Carr’s right that we lose skills when tasks get automated, he misses that this is a product of learning, not a failure of memory. The relevant question here is not “what do we know how to do?” but “what can we learn how to do?” There’s no reason to think automation affects our ability to learn. Deskilled pilots are a problem, but only because manual flight is still needed sometimes; in contrast, no one’s worried that we’ve forgotten how to hunt dodos, because it’s no longer useful and we could relearn it if we needed to. The world changes and the brain adapts. It always has—Carr’s argument could have been made 2 million years ago: “sure, we’re making superior Acheulean bifaces now, but we don’t know how to make Oldowan choppers anymore!” He’s recognized that the world shapes how the mind works, but he’s so temporally provincial about it that he doesn’t realize it’s been happening for millions of years.
But when you think that “more is better” like Carr, you end up concerned that automation really is a glass cage, that Idiocracy is our future, that we’ll end up quasi-lobotomized simpletons if the world doesn’t “challenge” us. But think of the brain as adaptive and you realize how it’s always changing and we’re always learning; you recognize that while smartphones make us think differently than our parents, that doesn’t make us dumber. Yes, automation changes the brain, but everything changes the brain; that’s the whole point. It’s a good thing. If we didn’t forget how to fly manually when planes went to autopilot, we’d be worse off—it would mean we’ve stopped learning.
• • •
This would be a better book if Carr didn’t work so hard to fortify himself in the trivially true but practically meaningless middle ground. I’d rather read a guns-blazing polemic that’s off-target (e.g., Amusing Ourselves to Death, which tills similar ground) than precision-calibrated, inoffensive centrism like “we can grant power to our tools that might not be in our best interest” (a sentence that would earn multiple weasel-word tags on Wikipedia). There’s actually little to disagree with, because he’s unwilling to say more than “automation has consequences, some are unexpected and some are bad.” Which, of course, is true…but only trivially.