by L.J. Vanier
October 27, 2015

from ISoulScience Website


"Success in creating AI (Artificial Intelligence)

would be the biggest event in human history

and possibly the last."

Stephen Hawking

Technological advancement is happening at an extremely rapid pace. If we go back just ten years to 2005, we find that much of what we enjoy now,

  • YouTube

  • Netflix

  • smartphones

  • tablets

...was non-existent back then.

If we go further back, into the early 1900s, we find that the Wright brothers were still working toward their first flight and the Ford Model T was still just a prototype.

 

This really goes to show how rapidly we are progressing technologically. In fact, some experts predict that this century will bring 1,000 times the technological progress of the last century.

 

This has led many to ask,

where are we headed?

This is where the question of the singularity arises.

 

To put it simply, the singularity is the moment when an intelligence smarter than any human on the planet is created. Once this intelligence starts making smarter copies of itself, the process continues at an ever-increasing rate. Such an intelligence would quickly become smarter than every human combined, making it the dominant intellectual force on Earth.

Until now, we have mostly been dealing with two levels of AI: ANI and AGI.

  • Artificial Narrow Intelligence (ANI) is a highly specialized system that matches human intelligence only in selective niches.

  • Artificial General Intelligence (AGI) is comparable to the human brain in every aspect; many scientists think AGI will be created once researchers can simulate the human brain on computers.

Now, finally, there is a third level: Artificial Super Intelligence (ASI).

 

At this level, the AI is smarter than any human, and if given access to the outside world, its actions would be unstoppable and unpredictable.

Experts believe that an ASI can be created from an AGI in two different ways:

  • a soft takeoff

  • a hard takeoff

  • A soft takeoff occurs when the AGI realizes it can make smarter copies of itself and gradually iterates through those copies until it reaches the level of ASI.

  • A hard takeoff would occur in the form of an intelligence explosion, where the AGI would reach the level of ASI in a matter of milliseconds (see the toy sketch after this list).
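To make the difference between these two paths concrete, here is a minimal toy sketch in Python. It is purely illustrative, not a model of any real AI system: it assumes intelligence can be summarized as a single number and that every self-improvement cycle multiplies that number by a fixed factor (both assumptions, along with the function and parameter names, are invented for this example). The point is only how sharply the number of cycles depends on the per-generation gain.

    # Toy sketch only: "intelligence" as a single number, each generation
    # multiplying it by a fixed improvement factor. All names and values
    # here are illustrative assumptions, not standard AI terminology.

    def generations_to_asi(improvement_factor, start=1.0, target=1000.0):
        """Count self-improvement cycles until the toy 'intelligence'
        score first reaches the target level."""
        level, generations = start, 0
        while level < target:
            level *= improvement_factor
            generations += 1
        return generations

    # Soft takeoff: each copy is only slightly smarter -> many cycles.
    print(generations_to_asi(1.01))   # 695 generations
    # Hard takeoff: each copy is vastly smarter -> almost no warning.
    print(generations_to_asi(10.0))   # 3 generations

In both cases the loop is identical; only the improvement factor differs, which is why a hard takeoff would leave essentially no time to react.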

From this, you can easily see the unpredictability and danger that Artificial Super Intelligence brings to the table, which is why experts have already started issuing warnings about it.

In a recent UN meeting on emerging global risks, prominent scientists, including MIT physicist Max Tegmark and the founder of Oxford's Future of Humanity Institute, Nick Bostrom, shed light on the probable dangers of ASI.

According to them, ASI could have positive impacts on our world at first, but in the long term it would become an uncontrollable machine whose actions no one can predict.

Bostrom concluded the meeting with the following warning:

"All the really big existential risks are in the anthropogenic category. Humans have survived earthquakes, plagues, asteroid strikes, but in this century we will introduce entirely new phenomena and factors into the world.

 

Most of the plausible threats have to do with anticipated future technologies."

Turning to the near future, he noted that

"world militaries are considering autonomous-weapon systems that can choose and eliminate targets."

The implication is clear: biological evolution wouldn't be able to keep up with the intellectual advancements of an Artificial Super Intelligence (ASI), rendering us humans nothing more than slaves.

In conclusion, the advent of Artificial Super Intelligence isn't far away if we stay on our current course of technological advancement.

 

Because ASI doesn't need to be created directly, it could arise from just one small misstep, effectively turning the world into a real-life Terminator movie...