
5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse

Theoretical physicist Stephen Hawking poses for a picture ahead of a gala screening of the documentary 'Hawking', a film about the scientist's life. (AFP/Getty Images)

'The end of the human race'

On the list of doomsday scenarios that could wipe out the human race, super-smart killer robots rate pretty high in the public consciousness. And in scientific circles, a growing number of experts agree that humans will eventually create an artificial intelligence that can think beyond our own capacities. This moment, called the singularity, could create a utopia in which robots automate common forms of labor and humans relax amid bountiful resources. Or it could lead the artificial intelligence, or AI, to exterminate any creatures it views as competitors for control of the Earth. That would be us. Stephen Hawking has long seen the latter as more likely, and he made his thoughts known again in a recent interview with the BBC. Here are some comments by Hawking and other very smart people who agree that, yes, AI could be the downfall of humanity.

Stephen Hawking

“The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking has been voicing this apocalyptic vision for a while. In a May column in response to Transcendence, the sci-fi movie about the singularity starring Johnny Depp, Hawking criticized researchers for not doing more to protect humans from the risks of AI. “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI,” he wrote.


Elon Musk

Known for his businesses on the cutting edge of tech, such as Tesla and SpaceX, Musk is no fan of AI. At a conference at MIT in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He’s also tweeted that AI could be more dangerous than nuclear weapons. Musk called for the establishment of national or international regulations on the development of AI.

Nick Bostrom

The Swedish philosopher is the director of the Future of Humanity Institute at the University of Oxford, where he’s spent a lot of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human intellect, they could mobilize and decide to eradicate humans extremely quickly using any number of strategies (deploying unseen pathogens, recruiting humans to their side or simple brute force). The world of the future would become ever more technologically advanced and complex, but we wouldn’t be around to see it. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” he writes. “A Disneyland without children.”

James Barrat

Barrat is a writer and documentarian who interviewed many AI researchers and philosophers for his new book, “Our Final Invention: Artificial Intelligence and the End of the Human Era.” He argues that intelligent beings are innately driven toward gathering resources and achieving goals, which would inevitably put a super-smart AI in competition with humans, the greatest resource hogs Earth has ever known. That means even a machine that was just supposed to play chess or fulfill other simple functions might get other ideas if it was smart enough. “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,” he writes in the book.

Vernor Vinge

A mathematician and fiction writer, Vinge is thought to have coined the term “the singularity” to describe the inflection point when machines outsmart humans. He views the singularity as an inevitability, even if international rules emerge controlling the development of AI. “The competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first,” he wrote in a 1993 essay. As for what happens when we hit the singularity? “The physical extinction of the human race is one possibility,” he writes.
