Automation is not a Glass Cage

Technology might be an ultimately uncontrollable manifestation of man’s ego and hubris, if Westworld, Jurassic Park, and Michael Crichton’s weird vendetta against theme parks taught me anything. Alternatively, robots may do all of our jobs, freeing us to live utopian lives of total Maslowian self-actualization in a future of fully automated luxury communism. On the third hand, Americans spend about as much time on housework now as they did a century ago, despite advancements in vacuum sucking and vegetable slicing technology.

Nicholas Carr’s The Glass Cage is a spiritual descendant of classic anti-automation polemics. Why, Carr asks, do we treat automation as inevitable and as progress? Why do we treat its side effects as annoyances rather than fundamental flaws? Why do we so rarely consider the economic (does automation eliminate jobs, or replace them with new ones?) and even practical (how can we automate morality?) consequences of automation?

But these concerns are secondary to Carr’s primary focus: automation has deleterious cognitive consequences, dumbing us down by making the world less challenging, placing all humanity in that eponymous mental glass cage. I’ll summarize his argument, then explain why it’s based on a bad understanding of how the brain works.

•     •     •

As autopilot handles more and more of each commercial flight, pilots have less and less to do. Consequently, they become “deskilled” at manual flight: studies in flight simulators show that a pilot’s manual-flight performance in emergencies is directly correlated with how recently they last flew a plane by hand. That’s not surprising, since virtually all skills atrophy with disuse. But it’s emblematic of two things that make automation a challenge: a) changes in skills and abilities extend beyond what gets automated, and b) we’re not good at predicting those consequences.

Most automation plans assume that automating some part of a task does not affect performance on other parts of the process, a belief called the substitution myth. For example, the switch from paper to digital medical records sounds about as simple as possible, but it turns out that doctors reading handwritten charts use subconscious heuristics and cues that are lost on a computer screen (e.g., handwriting subtly informs the reader which or how many specialists a patient has seen). Copying and pasting from existing charts is so easy that charts are soon filled with deindividuated, overused phrases. Automation inevitably and unpredictably affects how we perform other parts of a task.

There are several explanations for why this happens. One is based on the Yerkes-Dodson law, which holds that people perform best when they are moderately engaged in a task of medium difficulty—not so easy that they get bored and tune out, and not so hard that they get stressed or panicked. Contrast that with a common argument holding that automation should reduce “mental clutter,” allowing people to focus more intently on what’s still done manually, and thus perform better. It sounds straightforward (after all, how does someone get worse when the job gets easier?), but in practice, reducing manual tasks to the point of triviality leads to boredom and attention lapses rather than hyper-focus.
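
As a toy illustration (mine, not Carr’s; the law describes the inverted-U shape, not any exact formula), here’s a sketch of performance peaking at moderate engagement, with made-up parameters:

```python
import math

# Toy sketch of the Yerkes-Dodson inverted-U: performance peaks at moderate
# engagement and falls off when a task is trivially easy (boredom) or
# overwhelming (panic). The Gaussian shape and the parameters are
# illustrative assumptions; the law only describes the inverted-U relationship.
def performance(engagement: float, optimum: float = 0.5, width: float = 0.2) -> float:
    return math.exp(-((engagement - optimum) ** 2) / (2 * width ** 2))

for level, label in [(0.1, "bored"), (0.5, "engaged"), (0.9, "panicked")]:
    print(f"{label:>8}: engagement={level:.1f} -> performance={performance(level):.2f}")
```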

Automation can also influence our understanding of tasks. Computer-assistance programs for mammogram reading improve detection of simple cases but reduce detection of complex ones. Automatic traction control in modern cars gives a driver no incentive to learn how to control a car without it. That assistance helps novices but hurts experts turns out to be a common consequence of user-support programs. In fact, they may make it difficult for people to develop expertise in the first place, because computer assistance makes it hard to build a conceptual understanding of the task—people end up knowing which buttons to press at certain prompts, but not why they’re pressing them.

In short, when stuff gets automated, Carr says, we lose knowledge—such as how to fly a plane or read a mammogram—and we change how we interact with the world, losing conceptual understanding of what we’re doing.

the literal glass cage

It’s possible to read these critiques and think that Carr’s not finding problems with automation itself but with badly done automation. For example, why not use different fonts or text colors on digitized medical records? He even talks about how “adaptive automation” (an idea from the field of neuroergonomics) might reduce some unexpected consequences of automation by building systems that intentionally fail on occasion, with the aim of ensuring that the user stays engaged and attentive. To the extent that I agree with Carr, it’s not that “automation is bad”; it’s that it’s worth pondering why, several centuries into the “automation era,” we’re still not good at predicting the downstream consequences of even something as simple as moving records from paper to computer.
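
To make the intentional-failure idea concrete, here’s a toy sketch (my own illustration, not anything from the book or a real system): an automation loop that occasionally hands control back to the operator so their attention and skills don’t atrophy. The handoff probability is an arbitrary assumption:

```python
import random

# Toy sketch of "adaptive automation": the system deliberately hands control
# back to the human at random intervals so they stay engaged. The handoff
# probability and the task are illustrative assumptions, not a real autopilot.
HANDOFF_PROBABILITY = 0.1  # chance per step that the automation steps aside

def run_step(step: int) -> str:
    if random.random() < HANDOFF_PROBABILITY:
        return f"step {step}: MANUAL (operator flies; skills get exercised)"
    return f"step {step}: auto   (system flies; operator monitors)"

random.seed(42)  # reproducible demo
for step in range(10):
    print(run_step(step))
```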

But none of that makes automation a “glass cage.” Carr’s problem—and I think this is a common “folk psychology” intuition—is that he imagines the brain as something like a static knowledge warehouse, one where the “goal” is to learn as many new things as possible and forget as few things as possible. Knowledge is growth and forgetting is failure; thus “losing” knowledge—like how to fly a plane—means a step backwards for humankind.

But that view is wrong. Here’s why: the brain is an adaptive organ that interacts with and responds to demands from the outside world. It’s responsive and malleable. We constantly learn new things to meet current needs and—just as importantly—“unlearn” and forget old things that are now unnecessary or unfruitful (like, say, that throwing a tantrum is an effective way to get what you want). We don’t accrue knowledge for its own sake; from the brain’s “perspective,” knowledge is useful only if it helps us interact with the world. Put another way, the proper metaphor for memory is not an ever-expanding information landfill but a manicured tree, where old and unneeded knowledge and skills are pruned away. More is not better; useful is better.

That distinction is subtle but important: Carr sees “loss” of knowledge as a cost of automation, but if we recognize that the brain is dynamic and adaptive, forgetting how to fly a plane manually is neither a cost nor unique to automation. Rather, the “loss” of knowledge is a direct consequence of adaptability. Being able to adapt to new situations demands the flexibility to “unlearn” or disregard old things; new information must be able to displace old information. We don’t want to remember everything, because the longer it’s been since we’ve used a skill, the less likely it is that we’ll need it in the future. Forgetting needn’t be a “failure” of memory at all, but a strategy for managing information overload—one reason aging leads to a cognitive slowdown is that older adults have more accumulated knowledge to sift through to find an answer than young adults do.
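
One way to see that sifting argument (again, a toy model of mine, not Carr’s): if retrieval means scanning a store of knowledge, the average cost of a lookup grows with the size of the store, which is exactly why pruning unused entries is a feature rather than a bug. The store sizes below are invented:

```python
# Toy model: memory as a list we scan linearly. The more we hoard, the more
# items we sift through on average; pruning stale entries keeps lookups cheap.
# The numbers are illustrative assumptions, not cognitive measurements.

def average_search_cost(n_items: int) -> float:
    # A linear scan finds a uniformly random target after (n + 1) / 2
    # comparisons on average.
    return (n_items + 1) / 2

young_store = 10_000   # hypothetical size of accumulated knowledge
old_store = 100_000

print(f"young: ~{average_search_cost(young_store):,.0f} comparisons per lookup")
print(f"old:   ~{average_search_cost(old_store):,.0f} comparisons per lookup")
```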

So while Carr’s right that we lose skills when tasks get automated, he misses that this loss is a product of learning, not a failure of memory. The relevant question is not “what do we know how to do?” but “what can we learn how to do?” There’s no reason to think automation affects our ability to learn. Deskilled pilots are a problem, but only because manual flight is still needed sometimes; in contrast, no one’s worried that we’ve forgotten how to hunt the dodo, because it’s no longer useful and we could relearn it if we needed to. The world changes and the brain adapts. It always has—Carr’s argument could have been made 2 million years ago: “sure, we’re making superior Acheulean bifaces now, but we don’t know how to make Oldowan choppers anymore!” He’s recognized that the world shapes how the mind works, but he’s so temporally provincial about it that he doesn’t realize it’s been happening for millions of years.

But when you think, like Carr, that “more is better,” you end up concerned that automation really is a glass cage, that Idiocracy is our future, that we’ll end up quasi-lobotomized simpletons if the world doesn’t “challenge” us. But think of the brain as adaptive and you realize that it’s always changing and we’re always learning; you recognize that while smartphones make us think differently than our parents did, that doesn’t make us dumber. Yes, automation changes the brain, but everything changes the brain; that’s the whole point. It’s a good thing. If we hadn’t forgotten how to fly manually when planes went to autopilot, we’d be worse off—it would mean we’d stopped learning.

•     •     •

This would be a better book if Carr didn’t work so hard to fortify himself in a trivially true but practically meaningless middle ground. I’d rather read a guns-blazing polemic that’s off-target (e.g., Amusing Ourselves to Death, which hoes similar ground) than precision-calibrated, inoffensive centrism like “we can grant power to our tools that might not be in our best interest” (a sentence that would earn multiple “weasel words” tags on Wikipedia). There’s actually little to disagree with, because he’s unwilling to say more than “automation has consequences; some are unexpected and some are bad.” Which, of course, is true…but only trivially.
