by Joel Hruska
November 8, 2017

from ExtremeTech Website

Stephen Hawking has spoken in the past about the need to be extremely careful when developing or experimenting with artificial intelligence (AI), and the famed physicist recently delivered some of his starkest language yet.

 

Speaking at a technology conference in Portugal, he told the audience that maintaining control of artificial intelligence is paramount to keeping the human race alive.

"Computers can, in theory, emulate human intelligence, and exceed it," Hawking said.

 

"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it."

 

"AI could be the worst event in the history of our civilization," Hawking continued.

 

"It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."

 

Elon Musk has also spoken out about the dangers of AI and the need to evaluate it very seriously

Highly intelligent people have had deep misgivings about technology since before "technology" was even a word.

 

Socrates himself spoke out against the invention of writing, recounting what the god-king of Egypt, Ammon, supposedly said to Theuth (Thoth), the Egyptian inventor of letters:

[T]his discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.

 

The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth.

 

They will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

One's first impulse, upon reading such dolorous prophecy, is to laugh...

 

But Socrates had a point:

Writing was fundamental to our advancement as a species, but it also reshaped culture.

The invention of writing allowed philosophers and accountants alike to create lasting tallies and records of their work, whether that meant a detailed list of crop yields along the Nile River or philosophic arguments we still discuss today.

 

Socrates may have missed the improvements that writing would create, but he wasn't wrong about it transforming civilization.

 

The invention of the bound book aided the dissemination of knowledge by making it much easier to carry information in a single tome rather than in a large number of scrolls.

 

The printing press, of course, revolutionized education and brought books to the masses (eventually) in a way even the most far-reaching visionaries of late antiquity could scarcely have imagined.

 

And these transformations continue: there have already been studies on how the internet's ever-present fountain of knowledge is changing how we remember things.

 

Hawking's fear that AI could easily turn against those who create it is not unfounded.

 

Many people hold a view of artificial intelligence, and of the infallibility of computers, that is blatantly at odds with how these systems actually behave in the real world.

 

Modern medicine has gotten pretty good at fixing physical problems within the body, but mental health is much harder to treat - and we've been trying to fix people's mental health problems for thousands of years.

 

Until well into the 20th century, our "best" treatments included:

  • amateur brain surgery

  • horrifying insulin comas

  • alternating forced baths in freezing and scalding water

Now, imagine trying to talk an AI "down off the ledge" when it's feeling suicidal and happens to have nuclear launch codes in its back pocket.

[Embedded video: Stephen Hawking's interview with John Oliver]

That said, Hawking remains an optimist about the technology's potential - he's simply also very concerned about what could happen if we underestimate its capacity for harm.

 

It's not a new topic; he's been discussing the concept for several years (as seen in the interview with John Oliver above, though they touch on multiple topics).

 

I don't expect true AI to happen within my own lifetime, but I think Hawking is right to warn against potential risks.

 

This is one idea we ignore at our own peril.