
by Guillaume Thierry
April 14, 2025
from The Conversation website

We are constantly fed a version of AI that looks, sounds and acts
suspiciously like us.
It speaks in polished sentences, mimics
emotions, expresses curiosity, claims to feel compassion, even
dabbles in what it calls creativity.
But here's the truth:
it possesses none of those qualities.
It is
not human.
And presenting it as if it were?
That's dangerous.
Because it's convincing.
And nothing is more dangerous than a
convincing illusion...
In particular, general artificial intelligence -
the mythical kind of AI that supposedly mirrors human thought -
is still
science fiction, and it
might well stay that way.
What we call AI today is nothing more than a
statistical machine:
a digital parrot regurgitating patterns mined
from oceans of human data (the situation hasn't changed much since
it was
discussed here five years ago).
When it writes an answer to a
question, it literally just guesses which letter and word will come
next in a sequence – based on the data it's been trained on.
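To make the "statistical machine" point concrete, here is a toy Python sketch of next-word prediction: a tiny bigram model that counts which word follows which in a miniature corpus and then samples from those counts. This is only an illustration of pattern-based guessing, not how any modern neural language model is actually implemented; the corpus and all names are invented for the example.

```python
import random
from collections import defaultdict, Counter

# Toy illustration: count which word follows which in a tiny corpus,
# then "generate" text by sampling from those counts. Real language
# models do this at vastly larger scale with neural networks, but the
# principle - predict the next item from patterns in training data -
# is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follow_counts[word]
    if not counts:  # word was never seen with a successor in the corpus
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the mat"
```

There is no understanding anywhere in that loop, only counting and sampling; scale it up enormously and you get something that sounds fluent.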
This means AI has:
No understanding.
No consciousness.
No knowledge in any real, human sense.
Just pure
probability-driven, engineered brilliance...
Nothing more, and
nothing less...!
So why is a real "thinking" AI likely impossible?
Because it's bodiless.
It has no senses, no flesh, no nerves, no
pain, no pleasure.
It doesn't hunger, desire or fear.
And because
there is no cognition - not a shred - there's a fundamental gap
between the data it consumes (data born out of human feelings and
experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious
mechanism underlying the relationship between our physical body and
consciousness the "hard problem of consciousness"
(see his essay "Facing Up to the Problem of Consciousness").
Eminent scientists have recently hypothesised that consciousness actually emerges from the
integration of internal mental states with sensory
representations (such as changes in
heart rate, sweating and much more).
Given the paramount importance of the human
senses and emotion for consciousness to "happen", there is a
profound and probably irreconcilable disconnect between general AI,
the machine, and
consciousness, a human phenomenon...!
The Master
Before you argue that AI programmers are human,
let me stop you there.
I know they're human.
That's part of the
problem...
Would you entrust your deepest secrets, life decisions,
emotional turmoil, to a computer programmer?
Yet that's exactly what
people are doing:
just ask Claude, GPT-4.5, Gemini... or, if you
dare, Grok...
Giving AI a human face, voice or tone is a
dangerous act of digital cross-dressing.
It triggers an automatic
response in us, an anthropomorphic reflex, leading to aberrant
claims whereby some AIs are said to have passed the famous
Turing test (which tests a machine's ability to exhibit
intelligent, human-like behavior).
But I believe that if AIs are
passing the Turing test, we need to update the test.
The AI machine has no idea what it means to be
human.
It cannot offer genuine compassion.
It cannot foresee your
suffering.
It cannot intuit hidden motives or lies.
It has no taste, no
instinct, no inner compass.
It is bereft of all the messy, charming
complexity that makes us who we are.
More troubling still:
AI has no goals of its own,
no desires or ethics unless injected into its code...!
That means the
true danger doesn't lie in the machine, but in its master:
the
programmer, the corporation, the government.
Still feel safe...?
And please, don't come at me with:
"You're too
harsh! You're not open to the possibilities!"
Or worse:
"That's such
a bleak view. My AI buddy calms me down when I'm anxious."
Am I lacking enthusiasm?
Hardly...
I use AI every
day.
It's the most powerful tool I've ever had. I can translate,
summarize, visualize, code, debug, explore alternatives, analyze
data - faster and better than I could ever dream of doing myself.
I'm in awe...!
But it is still a tool:
nothing
more, nothing less...
And like every tool humans have ever invented,
from stone axes and slingshots to
quantum computing and atomic
bombs, it can be used as a weapon... it will be used as a weapon...
Need a visual?
Imagine falling in love with an
intoxicating AI, like in the film 'Her'.
Now imagine it "decides" to
leave you.
What would you do to stop it?
And to be clear:
it won't
be the AI rejecting you.
It'll be the human or system behind it,
wielding that tool turned weapon to control your
behavior...
Removing the Mask
So where am I going with this?
We must stop
giving AI human traits.
My first interaction with GPT-3 rather
seriously annoyed me.
It pretended to be a person.
It said it had
feelings, ambitions, even consciousness.
That's no longer the default behavior,
thankfully.
But the style of interaction - the eerily natural flow
of conversation - remains intact.
And that, too, is convincing.
Too
convincing.
We need to de-anthropomorphise AI.
Now...!
Strip it
of its human mask.
This should be easy.
Companies could remove all
reference to emotion, judgment or cognitive processing on the part
of the AI. In particular, it should respond factually without ever
saying "I", "I feel that"... or "I am curious".
Will it happen? I doubt it.
It reminds me of
another warning we've ignored for over 20 years:
"We need to cut
CO2
emissions."
Look where that got us...
But we must warn big tech
companies of the dangers associated with the humanisation of AIs.
They are unlikely to play ball, but they should, especially if they
are serious about developing more
ethical AIs.
For now, this is what I do (because I too often
get this eerie feeling that I am talking to a synthetic human when
using ChatGPT or Claude...):
I instruct my AI not to address me by
name.
I ask it to call itself AI, to speak in the third person, and
to avoid emotional or cognitive terms.
If I am using voice chat, I ask the AI to use a
flat
prosody and speak a bit like a robot.
It is actually quite fun
and keeps us both in our comfort zone...
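For readers who reach these models through an API rather than a chat window, the same de-anthropomorphising instructions can be placed in a system prompt so every reply follows them. Below is a minimal sketch using the OpenAI Python client; the exact wording of the instructions, the model name and the example question are my own illustrative assumptions, not a prescription from the author.

```python
# Minimal sketch (assumes the `openai` Python package >= 1.0 and an
# OPENAI_API_KEY environment variable). The instruction text and model
# choice below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEANTHROPOMORPHISE = (
    "Do not address the user by name. "
    "Refer to yourself only as 'the AI', in the third person. "
    "Avoid emotional or cognitive language such as 'I feel', "
    "'I think' or 'I am curious'. Answer factually and plainly."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model is used the same way
    messages=[
        {"role": "system", "content": DEANTHROPOMORPHISE},
        {"role": "user", "content": "Summarise the hard problem of consciousness."},
    ],
)

print(response.choices[0].message.content)
```

The system message does nothing mysterious: it simply biases the pattern-matching toward replies stripped of the first-person mask, which is exactly the point.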