The AI Brief — AI news for developers
Supercharged scams

Tools & Models · Posted 3 weeks ago

Originally published by MIT Technology Review AI

When ChatGPT was released to the public in late 2022, it opened people’s eyes to how easily generative AI could churn out vast amounts of human-seeming text from simple prompts. This quickly caught the attention of criminals, who soon began using large language models to produce malicious emails: both untargeted spam and more sophisticated, targeted attacks designed to steal money and sensitive information.

Since then, cybercriminals have adopted AI tools to supercharge their operations. They’ve used the technology to do everything from composing phishing emails and creating hyperrealistic, convincing deepfake clips to tweaking malicious software (commonly known as malware) so it is harder to detect. They can also use AI to automate the search for vulnerabilities in networks and computer systems, quickly generate ransom notes, and analyze vast swathes of stolen data to pinpoint what’s most valuable.

AI’s impact on hacking itself is not so clear-cut. But we do know that AI is lowering the barriers for would-be attackers, providing them with an ever-evolving arsenal of new capabilities and making it faster, cheaper, and easier than ever for them to try to infiltrate their targets. For example, Interpol has warned that scam centers across Southeast Asia are embracing inexpensive AI tools to target greater numbers of potential victims and to swiftly switch to new locations. Similarly, the United Arab Emirates recently claimed to have foiled a series of shadowy AI-backed attacks on its vital sectors. And because these spammy, scattergun attacks can be pumped out at colossal scale, they don’t need to be very sophisticated to have the desired effect; they just need to be lucky enough to land on a machine that happens to be undefended, or in the inbox of an unsuspecting victim at the right time.

Many organizations are already struggling to cope with the sheer volume of cyberattacks targeting them. The problem is likely to get significantly worse as more criminals try their luck, and as the capabilities of publicly available generative AI systems continue to improve. Earlier this month, the AI company Anthropic claimed that Mythos, a model it has developed and is now testing, found thousands of critical vulnerabilities, including some in every major operating system and web browser. Anthropic says all of them have been patched, but it is delaying the model’s release because of these new capabilities. In the meantime, it has set up a consortium of tech companies, called Project Glasswing, that it says will try to put those capabilities to work for defensive purposes.

Right now, cybersecurity researchers are optimistic that sloppier attacks can be thwarted through basic defenses, highlighting just how important it is to keep on top of software updates and stick to network security protocols. How well positioned we’ll be to ward off more sophisticated attacks in the future is much less clear.

The good news is that AI is also being used for defense. Each day, Microsoft — just one of the many businesses keeping tabs on such threats — processes more than 100 trillion signals flagged by its AI systems as potentially malicious or suspicious. The company says that between April 2024 and April 2025 it blocked $4 billion worth of scams and fraudulent transactions, many of which may have been aided by AI-generated content. The same technology that makes such attacks possible may also be our best bet for staying safe in the years to come.


Read on MIT Technology Review AI
Heat: 35

Based on social velocity, sharing rate, and discussion volume across communities.

Impact: 31

Estimated significance to the industry, potential for disruption, and technical novelty.

Why This Matters

This development reflects ongoing shifts in the AI ecosystem. Stay informed about how these changes might affect your technology stack, competitive landscape, and strategic planning.

Automated Summarization

This content was automatically aggregated and summarized from MIT Technology Review AI. Some detail and nuance from the original may be lost.

Discussion

Start the conversation.

Related Stories

MixAtlas: Uncertainty-aware Data Mixture Optimization for Multimodal LLM Midtraining

This paper was accepted at the Workshop on Navigating and Addressing Data Problems for Foundation Models (NADPFM) at ICLR 2026. Principled domain rewe…

3544
AI and the Future of Cybersecurity: Why Openness Matters

Read the full story to learn more.

3531
Training and Finetuning Multimodal Embedding & Reranker Models with Sentence Transformers

Read the full story to learn more.

3531
The AI Brief — AI news for developers
About · Methodology · Sources · API · Terms · Privacy

© 2026 The AI Brief. All rights reserved.