Stephen Hawking’s Final Say on AI

Stephen Hawking’s posthumous book, published eight months after his death in March 2018, warns that humanity faces an existential threat from the development of “a super-intelligent AI”.

Hawking was just 21 and a Ph.D. student in Cambridge, England, when he was diagnosed with a very rare, slow-progressing form of amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease. Given only a short time to live by his doctors, he defied the odds and lived for more than five further decades, producing groundbreaking research in theoretical physics and cosmology. Coincidentally, Hawking was born exactly 300 years after Galileo’s death (January 8, 1642) and died on Einstein’s birthday, March 14.

Like Einstein, Hawking achieved scientific celebrity and worldwide stardom during his lifetime – something that happens to few scientists. His opinions were quoted widely, and his wary views on AI are similar to those of tech superstar Elon Musk. In his final book, ‘Brief Answers to the Big Questions’, Hawking offers his answers to 10 fundamental questions he was asked constantly throughout his life. They range from whether he thinks there is a god (no), to whether aliens exist (yes), to whether time travel is possible (maybe).

Responding to a question about artificial intelligence, Hawking warns, “a super-intelligent AI will be extremely good at accomplishing goals and if those goals aren’t aligned with ours we’re in trouble.” He goes on to say, “you’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

This position was not a new one; Hawking often spoke about his misgivings about artificial intelligence. In 2014, he warned that a super-intelligent AI could end humanity if deployed carelessly, and in 2015 he called for more research on the societal impacts of AI. In 2017, he said AI could be the “worst event in the history of our civilization” unless society finds a way to control its development.

While artificial intelligence is likely to change our lives drastically in the future, we believe that humanity is a long way from a general artificial intelligence that would have ‘self-sustaining long-term goals and intent’. We should be reasonable: neither alarmed nor complacent.

It is reasonable to be mindful of the inherent unpredictability of creativity: both the creativity of humans in designing AI and the creativity of some future AI. The former means that we cannot predict how quickly (super-)human-level AI will be developed. The latter means that we cannot predict what interests a given (super-)human-level AI would pursue, any more than we can predict what interests a given human will pursue. However, we can know the methods, the cognitive mechanisms, by which humans and human-like (i.e., anthronoetic) AI will pursue their interests. And these methods, we argue, will ultimately constrain future AI, and humans, to converge on moral solutions.