
How humans will lose control of artificial intelligence

Posted on Apr 20, 2017


This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results — not knowing they've already doomed us all.

Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence (a big "if," some have argued), so far we've only built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
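To make Kurzweil's metric concrete, here is a minimal sketch of how one might compute it. The hardware figures below are illustrative assumptions invented for the example, not measurements from Kurzweil's data.

# Sketch of Kurzweil's "calculations per second per $1,000" metric.
# The machines and numbers below are hypothetical, for illustration only.

def calcs_per_sec_per_1000_dollars(calcs_per_second: float, price_dollars: float) -> float:
    """Normalize raw throughput by hardware cost, in units of $1,000."""
    return calcs_per_second / (price_dollars / 1000.0)

machines = [
    ("1990s workstation", 1e8, 5000.0),    # 100 million calc/s, $5,000
    ("2010s desktop", 1e12, 1000.0),       # 1 trillion calc/s, $1,000
    ("2020s GPU server", 1e15, 10000.0),   # 1 quadrillion calc/s, $10,000
]

for name, speed, price in machines:
    metric = calcs_per_sec_per_1000_dollars(speed, price)
    print(f"{name}: {metric:.2e} calculations per second per $1,000")

The point of normalizing by price is that the metric keeps climbing even when individual machines get cheaper rather than faster, which is why it is the number Kurzweil tracks.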

After that, things could get weird, because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential-risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."

That's how profoundly things could change. But we can't really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations — feelings, even — that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.

Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols — forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.
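As a purely hypothetical illustration of how small that omission could be, consider a toy agent loop. The names and the stop-request flag here are invented for the example; no real system is being described.

# Toy sketch (hypothetical) of the omission described above: a goal-driven
# loop that was never given a reason to respect a shutdown request.

stop_requested = False  # the operators' off switch (an assumed interface)

def make_paper_clip():
    """Stand-in for whatever the system actually does."""

# What the designers shipped: the loop never consults the off switch.
def run_unsafe():
    while True:
        make_paper_clip()

# The "few lines of code" they forgot: honor the stop request each step.
def run_safe():
    while not stop_requested:
        make_paper_clip()

Of course, as the next paragraph suggests, a sufficiently capable system might treat that stop flag as just another obstacle to its goal.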

But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: They'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: It'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.

Galaxies reduced to paper clips: That's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of the modern Prometheus whose creation, driven by its own motivations and desires, turns on its maker. (It's also The Terminator, WarGames, and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.

Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he posits a superintelligence programmed to make jokes. As we'd expect, it gets really good at making jokes, superhuman even, until finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.

Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.

