Hawking recently spoke at a technology conference in Portugal, where he told the audience that maintaining control of artificial intelligence is absolutely paramount to keeping the human race alive.
He has also spoken out about the dangers of AI, and the need to evaluate it seriously, on previous occasions.
Highly intelligent people have harbored deep misgivings about technology since before "technology" was even a word.
Socrates himself spoke out against the invention of writing by recounting what the god-king of Egypt, Ammon, supposedly said to Theuth (Thoth), the Egyptian inventor of letters:
One's first impulse, upon reading such dolorous prophecy, is to laugh...
But Socrates had a point:
The invention of writing allowed philosophers and accountants alike to create lasting tallies and recordings of their works, whether that meant a detailed list of crop yields along the Nile River or philosophic arguments we still discuss today.
Socrates may have missed the improvements that writing would create, but he wasn't wrong about it transforming civilization.
The invention of books aided the dissemination of knowledge by making it much easier to carry information in a single tome as opposed to a large number of scrolls.
The printing press, of course, revolutionized education and brought books to the masses (eventually) in a way even the most far-reaching visionaries of late antiquity could scarcely have imagined.
And these transformations continue: there have already been studies on how the internet's ever-present fountain of knowledge is changing how we remember things.
Hawking's fear that AI could easily turn against those who create it is not unfounded.
Many people hold a view of artificial intelligence, and of the infallibility of computers, that is blatantly at odds with the reality of these systems.
Modern medicine has gotten pretty good at fixing physical problems within the body, but mental health treatments are much more difficult, and we've been trying to fix people's mental health problems for thousands of years.
Until the 20th century, our "best" treatments involved,
Now, imagine trying to talk an AI "down off the ledge" when it's feeling suicidal and happens to have nuclear launch codes in its back pocket.
This is not a new topic for Hawking; he has been discussing the concept for several years (as seen in the interview with John Oliver above, though they touch on multiple topics).
I don't expect true AI to happen within my own lifetime, but I think Hawking is right to warn against potential risks.
This is one idea we ignore at our own peril.