by Evan Gorelick

July 21, 2025

A keyboard with a red space bar bearing one of Isaac Asimov's Three Laws of Robotics ("2. A Robot Must Obey Orders"). Credit: Balazs Gardi for The New York Times

Artificial intelligence isn't just for drafting essays and searching the web. It's also a weapon. And on the Internet, both the good guys and the bad guys are already using it.

Offense: Bots and algorithms perpetrate much of the world's cybercrime. Con artists use them to generate deepfakes and phishing scams. Want malware to steal someone's data? A chatbot can write the code. Bots also cook up disinformation.

As Israel and Iran fired missiles at each other last month, they also flooded the Internet with A.I.-powered propaganda.

Defense: Cybersecurity companies use A.I. to intercept malicious traffic and patch software vulnerabilities. Last week, Google announced that one of its bots had found a flaw that cybercriminals hoped to exploit in code used by billions of computers - likely the first time A.I. has managed such a feat.

Cybersecurity used to be slow and laborious. Human hackers would concoct new attacks, and then security companies would tweak their defenses to parry them. But now, that cat-and-mouse game moves at the speed of A.I.

And the stakes couldn't be higher: Cybercrime is expected to cost the world more than $23 trillion per year by 2027, according to data from the F.B.I. and the International Monetary Fund. That's more than the annual economic output of China.


Today, I explain what the arrival of A.I. hacking means for the Internet - and the billions who use it every day.

The Siege


The newest cybercriminals are robots. They write with flawless grammar and code like veteran programmers. They solve problems in seconds that have vexed people for years.

Malicious emails used to be riddled with typos and errors, so spam filters could spot and snag them. That strategy doesn't work anymore. With generative A.I., anyone can craft bespoke, grammatical scams. Since ChatGPT launched in November 2022, phishing attacks have increased more than fortyfold. Deepfakes, which mimic photos, videos and audio of real people, have surged more than twentyfold...

Because commercial chatbots have guardrails to prevent misuse, unscrupulous developers built spinoffs for cybercrime. But even the mainstream models - ChatGPT, Claude, Gemini - are easy to outsmart, said Dennis Xu, a cybersecurity analyst at Gartner, a research and business advisory firm. "If a hacker can't get a chatbot to answer their malicious questions, then they're not a very good hacker," he told me.

Google, which makes Gemini, said criminals (often from Iran, China, Russia and North Korea) used its chatbot to scope out victims, create malware and execute attacks. OpenAI, which makes ChatGPT, said criminals used its chatbot to generate fake personas, spread propaganda and write scams. "If you look at the full life cycle of a hack, 90 percent is done with A.I. now," said Shane Sims, a cybersecurity consultant.

Here's something odd: Attacks aren't necessarily getting smarter... Sandra Joyce, who leads the Google Threat Intelligence Group, told me she hadn't seen any "game-changing incident where A.I. did something humans couldn't do."

But cybercrime is a numbers game, and A.I. makes scaling easy. Strike enough times, and some hits are bound to land...

Ameca, a humanoid robot that uses ChatGPT, created for realistic interactions. Credit: Loren Elliott for The New York Times

The Fortress


What makes A.I. good on offense - finding patterns in heaps of data - also makes it good on defense. Walk into any big cybersecurity conference, and virtually every vendor is pitching a new A.I. product. Algorithms analyze millions of network events per second; they catch bogus users and security breaches that would take people weeks to spot.


Because A.I. is so quick on offense, a mere human can't play good defense anymore. "They're going to be outnumbered 1,000 to 1," said Ami Luttwak, co-founder of the cybersecurity company Wiz.

Algorithms have been around for decades, but humans still manually check compliance, search for vulnerabilities and patch code. Now, cyber firms are automating all of it. That's what Google said its bot had done.

Others are on the way. Microsoft said that its Security Copilot bot made engineers 30 percent faster and considerably more accurate.

There's a risk, though: A.I. still makes mistakes, and when it has more power, the errors can be much bigger. A well-meaning bot may try to block traffic from a specific threat and instead block an entire country...