Google has pulled off something big. Its AI agent Big Sleep spotted and blocked a cyber exploit before it could hit, a first for artificial intelligence in threat prevention. CEO Sundar Pichai broke the news on Tuesday, posting on X, “New from our security teams: Our AI agent Big Sleep helped us detect and foil an imminent exploit. We believe this is a first for an AI agent - definitely not the last - giving cybersecurity defenders new tools to stop threats before they’re widespread.”
How Big Sleep caught the threat
Big Sleep is no ordinary tool. Developed by Google DeepMind and Project Zero, it hunts for hidden security flaws in software. It reported its first real-world vulnerability in November 2024 and has turned up several more since.
This time, Big Sleep uncovered CVE-2025-6965, a serious flaw in SQLite, a database engine used in countless applications worldwide. According to Google, the vulnerability was “only known to threat actors and was at risk of being exploited.” The AI did not just find the flaw; it helped Google shut the door before the bug could be used.
A spokesperson told Recorded Future News the threat intelligence team had picked up clues but could not pin down the exact problem at first. They said, “The limited indicators were passed along to other Google team members at the zero day initiative who leveraged Big Sleep to isolate the vulnerability the adversary was preparing to exploit in their operations.”
The company has not said who the hackers were or exactly what signs they spotted. But the fact remains: Big Sleep stopped an exploit before it was ever launched.
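Google has not spelled out which SQLite releases carry the patch. For readers who consume SQLite indirectly, for instance through Python's built-in sqlite3 module, the short sketch below shows one way to compare the bundled library against a fix version; the 3.50.2 threshold is an assumption drawn from public advisories at the time of writing, not a figure from Google's statement.

    import sqlite3

    # Assumed first SQLite release containing the CVE-2025-6965 fix
    # (based on public advisories, not confirmed in the article above).
    FIXED_VERSION = (3, 50, 2)

    # Version tuple of the SQLite library bundled with this Python build.
    current = sqlite3.sqlite_version_info

    if current < FIXED_VERSION:
        print(f"SQLite {sqlite3.sqlite_version} predates the assumed fixed release; consider updating.")
    else:
        print(f"SQLite {sqlite3.sqlite_version} is at or beyond the assumed fixed release.")

Applications that embed their own copy of SQLite would need to check that copy's version rather than the one shipped with Python.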
Why this matters for cybersecurity
Google calls it a turning point. For years, defenders have patched holes after breaches. Now, AI like Big Sleep may flip the script, catching flaws before criminals can use them.
In a blog post, Google said that since its launch, Big Sleep has “exceeded” expectations, spotting multiple real-world bugs. It is now helping to secure both Google’s own ecosystem and open-source projects.
Google said, “We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.” The company believes these tools are a “game changer” because they “can free up security teams to focus on high-complexity threats, dramatically scaling their impact and reach.”
What else Google is building
Big Sleep is not the only AI project in Google’s lab. The tech giant has other systems in the works to help defenders get ahead.
One is Timesketch, an open-source digital forensics platform that Google is upgrading with agentic capabilities powered by Sec-Gemini. Another is Fast and Accurate Contextual Anomaly Detection, or FACADE, which has been spotting insider threats inside Google since 2018.
Together, these tools point to a future where AI watches for problems around the clock while human experts handle the complex work that machines cannot.
Bigger picture in AI security
Google is not alone in this race. Tech companies and government bodies worldwide are building AI to secure critical code. The US Defence Department’s research agency, DARPA, is set to announce the winners of a competition to build systems that automatically find and fix flaws in vital digital infrastructure.
Meanwhile, Google says it designed Big Sleep and its other agents to protect privacy and run transparently. A white paper explains how the company tries to stop AI from taking unintended actions.
Cyberattacks are growing sharper every year. Big Sleep’s breakthrough hints at a new playbook: smarter AI on the front line, stopping threats before they grow teeth.