When Artificial Intelligence Gets Too Clever by Half

Posted on: May 26, 2017

PICTURE A CREW of engineers building a dam. There’s an anthill in the way, but the engineers don’t care or even notice; they flood the area anyway, and too bad for the ants.

Now replace the ants with humans, happily going about their own business, and the engineers with a race of superintelligent computers that happen to have other priorities. Just as we now have power to dictate the fate of less intelligent beings, so might such computers someday exert life-and-death power over us.

That’s the analogy the superstar physicist Stephen Hawking used in 2015 to describe the mounting perils he sees in the current explosion of artificial intelligence. And lately the alarms have been sounding louder than ever. Allan Dafoe of Yale and Stuart Russell of Berkeley wrote an essay in MIT Technology Review titled “Yes, We Are Worried About the Existential Risk of Artificial Intelligence.” The computing giants Bill Gates and Elon Musk have issued similar warnings online.

Should we be worried?

Perhaps the most influential case that we should be was made by the Oxford philosopher Nick Bostrom, whose 2014 book, “Superintelligence: Paths, Dangers, Strategies,” was a New York Times best seller. The book catapulted the term “superintelligence” into popular consciousness and bestowed authority on an idea many had viewed as science fiction.

Bostrom defined superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” with the hypothetical power to vastly outmaneuver us, just like Hawking’s engineers.

And it could have very good reasons for doing so. In the title of his eighth chapter, Bostrom asks, “Is the default outcome doom?” and suggests that the unnerving answer might be “yes.” He points to a number of goals that superintelligent machines might adopt, including resource acquisition, self-preservation, and cognitive improvement, with potentially disastrous consequences for us and the planet.

Bostrom illustrates his point with a colorful thought experiment. Suppose we develop an AI tasked with building as many paper clips as possible. This “paper clip maximizer” might simply convert everything, humanity included, into paper clips. Ousting humans would also facilitate self-preservation, eliminating our unfortunate knack for switching off machines. There’s also the possibility of an “intelligence explosion,” in which even a modestly capable general AI might undergo a rapid period of self-improvement in order to better achieve its goals, swiftly bypassing humanity in the process.