Why We Should Think About the Threat of Artificial Intelligence

If the New York Times's latest article is to be believed, artificial intelligence is moving so fast it sometimes seems almost “magical.” Self-driving cars have arrived; Siri can listen to your voice and find the nearest movie theatre; and I.B.M. just set the “Jeopardy”-conquering Watson to work on medicine, initially training medical students, perhaps eventually helping in diagnosis. Scarcely a month goes by without the announcement of a new A.I. product or technique. Yet some of the enthusiasm may be premature: as I’ve noted previously, we still haven’t produced machines with common sense, vision, natural language processing, or the ability to create other machines. Our efforts at directly simulating human brains remain primitive.

Still, at some level, the only real difference between enthusiasts and skeptics is a time frame. The futurist and inventor Ray Kurzweil thinks true, human-level A.I. will be here in less than two decades. My estimate is at least double that, especially given how little progress has been made in computing common sense; the challenges in building A.I., especially at the software level, are much harder than Kurzweil lets on.

But a century from now, nobody will much care about how long it took, only what happened next. It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine. There might be a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.

For some people, that future is a wonderful thing. Kurzweil has written about a rapturous singularity in which we merge with machines and upload our souls for immortality; Peter Diamandis has argued that advances in A.I. will be one key to ushering in a new era of “abundance,” with enough food, water, and consumer gadgets for all. Skeptics like Erik Brynjolfsson and I have worried about the consequences of A.I. and robotics for employment. But even if you put aside worries about what super-advanced A.I. might do to the labor market, there’s another concern: that powerful A.I. might threaten us more directly, by battling us for resources.

Most people see that sort of fear as silly science-fiction drivel—the stuff of “The Terminator” and “The Matrix.” To the extent that we plan for our medium-term future, we worry about asteroids, the decline of fossil fuels, and global warming, not robots. But a dark new book by James Barrat, “Our Final Invention: Artificial Intelligence and the End of the Human Era,” lays out a strong case for why we should be at least a little worried.

Barrat’s core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro’s words, “if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,” in order to obtain more resources for whatever goals it might have. A purely rational artificial intelligence, Barrat writes, might expand “its idea of self-preservation … to include proactive attacks on future threats,” including, presumably, people who might be loath to surrender their resources to the machine. Barrat worries that “without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,” even, perhaps, commandeering all the world’s energy in order to maximize whatever calculation it happened to be interested in.

Of course, one could try to ban super-intelligent computers altogether. But “the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling,” Vernor Vinge, the mathematician and science-fiction author, wrote, “that passing laws, or having customs, that forbid such things merely assures that someone else will.”

If machines will eventually overtake us, as virtually everyone in the A.I. field believes, the real question is about values: how we instill them in machines, and how we then negotiate with those machines if and when their values differ greatly from our own. As the Oxford philosopher Nick Bostrom argued:

We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve. But it is no less possible—and probably technically easier—to build a superintelligence that places final value on nothing but calculating the decimals of pi.

The British cyberneticist Kevin Warwick once asked, “How can you reason, how can you bargain, how can you understand how that machine is thinking when it’s thinking in dimensions you can’t conceive of?”

If there is a hole in Barrat’s dark argument, it is in his glib presumption that if a robot is smart enough to play chess, it might also “want to build a spaceship”—and that tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system. For now, most of the machines that are good enough to play chess, like I.B.M.’s Deep Blue, haven’t shown the slightest interest in acquiring resources.

But before we get complacent and decide there is nothing to worry about after all, it is important to realize that the goals of machines could change as they get smarter. Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called “technological singularity” or “intelligence explosion,” the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

One of the most pointed quotes in Barrat’s book belongs to the legendary serial A.I. entrepreneur Danny Hillis, who likens the upcoming shift to one of the greatest transitions in the history of biological evolution: “We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoeba and we can’t figure out what the hell this thing is that we’re creating.”

Already, advances in A.I. have created risks that we never dreamt of. With the advent of the Internet age and its Big Data explosion, “large amounts of data is being collected about us and then being fed to algorithms to make predictions,” Vaibhav Garg, a computer-risk specialist at Drexel University, told me. “We do not have the ability to know when the data is being collected, ensure that the data collected is correct, update the information, or provide the necessary context.” Few people would have dreamt of this risk even twenty years ago. What risks lie ahead? Nobody really knows, but Barrat is right to ask.
