
The AI Attack Machine: Microsoft and Google Reveal How Hackers Weaponized Your Favorite Tools

PRISM Bureau  |  March 9, 2026  |  11 min read

Microsoft's threat intelligence team confirms AI is now embedded in every single stage of modern cyberattacks. Google tracked 90 zero-day exploits in 2025 - a 15% jump from the year before. These two reports, dropping within days of each other, tell the same story: the arms race just went automated.

Server infrastructure and network cables in a dark data center. The infrastructure of modern conflict runs through data centers like this one. Photo: Unsplash

There's a particular irony baked into 2026's cybersecurity landscape. The same generative AI tools companies spent billions deploying to boost productivity - ChatGPT clones, AI coding assistants, automated scripting platforms - have been quietly picked up by the exact threat actors those companies are trying to defend against. Microsoft published a comprehensive threat intelligence report this week confirming what many researchers had suspected but struggled to document with hard evidence: artificial intelligence is now standard kit for sophisticated threat actors across every phase of an attack.

The timing is not coincidental. Google's Threat Intelligence Group released a separate report almost simultaneously, tallying 90 zero-day vulnerabilities actively exploited throughout 2025. That number represents a 15% increase over 2024's 78 zero-days, and for the first time in the history of the report, commercial spyware vendors - not nation-states - were the largest category of zero-day users. Add in Microsoft's finding that AI is accelerating every step from reconnaissance through post-compromise cleanup, and the picture becomes uncomfortably clear: the barriers to executing a sophisticated cyberattack are collapsing faster than defenders can rebuild them.

90 - zero-days exploited in 2025 (Google GTIG)
+15% - year-over-year increase from 2024
47 - zero-days targeting end-user platforms
43 - zero-days targeting enterprise products

What Microsoft Found: AI Runs the Whole Playbook Now

Microsoft's threat intelligence report doesn't bury the lead. The opening framing is blunt: threat actors are using generative AI as a force multiplier that "reduces technical friction and accelerates execution." And the usage isn't confined to a single phase. Microsoft observed AI in reconnaissance, phishing, malware development, infrastructure setup, and post-compromise data processing - present throughout the attack lifecycle.

This is the critical shift. Previous generations of cyberattack automation were largely tool-specific - automated scanners, pre-built exploit frameworks, commodity malware. What's different now is that attackers have access to flexible, general-purpose reasoning systems that can adapt to novel situations, translate between languages, and debug code. The human operator still makes the key decisions about targeting and objectives. But everything that used to require either significant technical skill or significant time investment has been compressed.

"AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions." - Microsoft Threat Intelligence Report, March 2026

The reconnaissance phase has changed the most. AI tools can now process vast amounts of publicly available information about a target organization - LinkedIn profiles, GitHub repositories, job postings, conference talks, financial filings - and synthesize it into a detailed attack surface map faster than any human team could manage. What used to take days of OSINT grinding now takes hours or minutes.

On the phishing side, AI has essentially killed the "Nigerian prince" era of obvious scam emails. Modern AI-generated phishing content is grammatically impeccable, contextually aware, and culturally calibrated. Microsoft noted that threat actors are using AI to translate phishing lures into target languages with native-level fluency - removing one of the biggest detection signals defenders relied on. An email that arrives in perfect Japanese addressed to a finance manager at a Tokyo subsidiary, referencing a merger the company recently completed, is almost indistinguishable from legitimate internal communication.

A hacker working across multiple monitors in a dark room. The technical barrier to sophisticated attacks has collapsed dramatically with AI assistance. Photo: Unsplash

Malware development has seen similar acceleration. Microsoft's researchers observed threat actors using AI coding assistants to generate and debug malicious scripts, port existing malware components to new programming languages, and troubleshoot deployment errors - the exact same workflow a legitimate software developer uses. Some samples show signs of what researchers are calling AI-enabled adaptive malware: code that can dynamically generate additional components or modify its behavior at runtime based on environmental conditions.

Post-compromise, AI is being used to process and summarize stolen data before exfiltration. A threat actor who dumps an email server containing years of corporate correspondence used to face a manual triage problem - how do you find the actually valuable material in terabytes of noise? AI summarization tools solve that problem efficiently, letting attackers quickly identify which executives' emails contain the most sensitive strategic information, which file shares contain intellectual property versus routine documents, and which credential dumps are likely to have reuse potential.

North Korea's AI-Powered Identity Factory

The most operationally detailed section of Microsoft's report concerns two North Korean threat groups: Jasper Sleet (tracked internally as Storm-0287) and Coral Sleet (Storm-1877). These groups have turned AI into an industrial-scale identity generation engine in service of North Korea's IT worker infiltration program - one of the more ambitious long-running intelligence operations currently active.

The scheme works like this: North Korean operatives, often working from third countries in Southeast Asia, apply for remote software development and IT roles at Western companies. They use fabricated identities, doctored CVs, and fake professional histories. Once hired, they maintain employment while funneling salary back to the DPRK state apparatus and, more dangerously, establishing persistent access inside the company's systems that can be leveraged for espionage or sabotage.

What AI has done is industrialize the identity creation process. Microsoft researchers documented specific prompt patterns used by Jasper Sleet actors: generating lists of culturally authentic names for specific nationalities ("Create a list of 100 Greek names"), constructing plausible email address formats, researching job posting requirements to tailor fake personas to specific roles. An operation that previously required significant manual work per individual identity can now be scaled dramatically.

"Jasper Sleet actors have prompted AI platforms to generate culturally appropriate name lists and email address formats to match specific identity profiles." - Microsoft Threat Intelligence Report, March 2026

Coral Sleet went further, using AI to rapidly generate entire fake company websites - complete with the kind of design-by-committee corporate aesthetic that looks plausible enough to pass a quick Google check - and to provision cloud infrastructure. The speed at which a convincing fictional company can now be constructed from scratch is remarkable. What used to require web design skills, copywriting, and several days of work now takes hours.

This matters beyond North Korea specifically. The IT worker infiltration model is easy to replicate. Any state actor - or, increasingly, financially motivated criminal groups - can deploy variants of the same playbook. Remote work normalization, which was supposed to increase labor market efficiency, created a massive attack surface for identity fraud that AI has now made dramatically easier to exploit. Microsoft's recommendation to treat these schemes as insider risks - not external attacks - reflects how fundamentally they challenge conventional security architectures designed to keep threats outside the perimeter.

Google's Zero-Day Census: The Numbers Behind the Crisis

Google's Threat Intelligence Group releases an annual zero-day exploitation report that serves as one of the field's most rigorous attempts to quantify a phenomenon that's inherently difficult to measure - you only know about the zero-days that get caught. With that caveat firmly in mind, 90 confirmed exploited zero-days in 2025 is a significant number, and the trends within the data are instructive.

The near-even split between end-user platforms (47 zero-days) and enterprise products (43) reflects something important about how sophisticated threats are shifting. Enterprise products - security appliances, VPN gateways, virtualization platforms, networking equipment - have become increasingly high-value targets because they occupy privileged positions in network architecture and, critically, are often excluded from the endpoint detection and response coverage that protects user devices.

A compromised firewall or VPN concentrator provides persistent, trusted network access. It's legitimately generating traffic and authenticating users - all the things that would cause alarms if a user endpoint did them. Defenders have spent years hardening Windows workstations and installing EDR agents everywhere; the adversaries noticed and shifted focus to the gaps.

Zero-Day Exploitation: Year-by-Year

2023: 100 zero-days exploited, a record high. Mobile platform attacks surge; spyware vendors emerge as major actors.
2024: 78 zero-days tracked, an apparent decline. Enterprise products increasingly targeted; commercial spyware scales operations.
2025: 90 zero-days confirmed, a 15% rise over 2024. Commercial spyware vendors surpass nation-states for the first time; AI-assisted exploitation begins appearing.
2026: Google GTIG projects exploitation to remain high as AI accelerates vulnerability discovery and exploit development timelines.

The vendor breakdown is revealing. Microsoft topped the list with 25 zero-day vulnerabilities exploited in 2025 - a consequence of the sheer ubiquity of Windows, Microsoft 365, and Azure in enterprise environments. Google followed with 11, Apple with eight. Security and networking vendors Cisco and Fortinet each saw four exploited zero-days, while Ivanti - which had a particularly rough year in 2024 - and VMware logged three each.

Browser zero-days dropped sharply to just eight, down significantly from previous years. Google's analysts offer two possible explanations: browser security has genuinely improved through more aggressive sandboxing, memory safety efforts, and faster patch cycles, or attackers have gotten better at hiding browser exploitation activity. The honest answer is probably some combination of both.

The memory safety finding deserves attention in its own right. Memory corruption bugs - the category that includes use-after-free vulnerabilities, buffer overflows, and related issues - accounted for 35% of all exploited zero-days in 2025. This is the same class of vulnerability that the Rust programming language and C++ memory safety initiatives are designed to eliminate. The industry has been talking about memory safety for years; the zero-day data makes the urgency concrete. These aren't theoretical vulnerabilities in obscure software - they're the techniques actively enabling real-world attacks on real organizations right now.

Commercial Spyware Vendors: The New First-Tier Threat

Perhaps the most significant finding in Google's report - and the one that should most concern policymakers, legal advocates, and anyone running a civil society organization - is this: commercial surveillance vendors (CSVs) have, for the first time, surpassed state-sponsored espionage actors as the largest category of zero-day exploiters.

To understand what this means, some context. Companies like the NSO Group (makers of Pegasus), Intellexa (Predator spyware), Paragon Solutions (Graphite), and several less-publicized vendors sell sophisticated hacking tools to governments - typically marketed as lawful intercept technology for law enforcement. The actual clients range from legitimate agencies investigating organized crime to authoritarian governments using the tools against dissidents, journalists, and opposition politicians.

What Google's data shows is that these vendors are collectively responsible for more active zero-day exploitation than China, Russia, Iran, and North Korea combined - at least in the category of confirmed, attributable cases. That's not because commercial spyware is more common than state espionage; it's because CSVs serve many clients simultaneously and deploy their tools at scale.

"This continues to reflect a trend we began to observe over the last several years - a growing proportion of zero-day exploitation is conducted by CSVs and/or their customers, demonstrating a slow but sure movement in the landscape." - Google Threat Intelligence Group Annual Report, 2026

The implications cascade. Zero-days are finite resources. When a commercial vendor discovers or purchases an unpatched vulnerability, they hold it as long as it's productive - patching a zero-day destroys the investment. The commercial incentive is directly opposed to responsible disclosure. Every month a surveillance vendor keeps a vulnerability private to protect their product is a month that vulnerability remains available to any other actor who independently discovers it.

An abstract digital surveillance camera concept. Commercial spyware vendors have overtaken nation-states as the leading exploiters of zero-day vulnerabilities. Photo: Unsplash

The United States and European Union have both attempted to address this through export controls and sanctions against specific vendors. The NSO Group was added to the US Commerce Department's Entity List in 2021. Several EU member states have investigated their governments' use of Pegasus against journalists. These measures have had limited effect: the industry simply fragments, with new vendors incorporating in less-scrutinized jurisdictions, clients shopping across multiple providers, and the underlying market demand - governments wanting powerful surveillance capabilities - showing no sign of contraction.

Google's data suggests the regulatory approach is losing the race. The number of active commercial surveillance vendors operating at the zero-day level has grown, not shrunk. Blacklisting specific companies appears to accelerate organizational restructuring more than it reduces capability.

China's Persistent Presence and the Edge Device Problem

Among nation-state actors, China-linked espionage groups remain the most active zero-day exploiters, accounting for 10 confirmed zero-day vulnerabilities exploited in 2025. The preferred target category is consistent with what researchers documented in previous years and what the 2024 Salt Typhoon telecom breach made viscerally visible: edge devices, security appliances, and networking equipment.

The strategic logic is straightforward. Edge devices - firewalls, VPN concentrators, email gateways, load balancers - occupy positions in network architecture that provide persistent, trusted access with minimal visibility. Unlike user endpoints, where EDR tools generate behavioral telemetry and anomalous processes trigger alerts, most edge devices run specialized operating systems with minimal logging and no endpoint security agents. A threat actor who gains access to a perimeter firewall can observe all traffic passing through it, can manipulate routing and access decisions, and can maintain that access for months or years without triggering the detection mechanisms that protect internal systems.

The Salt Typhoon campaign, which compromised AT&T, Verizon, Lumen, Charter Communications, and other major carriers in 2024 before extending to federal government wiretap systems, was the highest-profile demonstration of this approach. But it was neither the first nor the last. Google's report notes that China-linked actors consistently prioritize long-term persistent access over noisy opportunistic exploitation - a patience and sophistication that makes attribution and remediation harder than most incident responders prefer to admit publicly.

The Brickstorm campaign, highlighted in GTIG's report as a notable 2025 case, represents an evolution in targeting philosophy: rather than stealing existing source code, attackers positioned themselves to observe future software development processes. By accessing the development environments and collaboration tools where new software is created, they can identify vulnerabilities before code ships to production - potentially enabling future zero-day exploitation before defenders even have the software to patch.

The Jailbreak Problem: When Safety Guardrails Are Just a Prompt Away

There's a convenient fiction that defenders lean on when discussing AI in cybersecurity contexts: that AI providers have built sufficient safety guardrails to prevent their tools from being weaponized. Microsoft's report punctures this directly. When AI systems refuse requests to generate malicious code or assist with attack planning, threat actors use jailbreaking techniques to route around the restriction.

This is not a new problem - jailbreaking has been documented since early GPT deployments - but the operational embedding of jailbreaking into active threat actor workflows represents a maturation. The groups Microsoft tracked aren't treating safety guardrails as hard limits. They're treating them as friction to be bypassed, developing and sharing prompt techniques that reliably extract the outputs they need from systems designed not to provide them.

Key Finding: Agentic AI in Attacks

Microsoft researchers observed early-stage experiments with agentic AI - systems that can execute multi-step tasks autonomously and adapt their approach based on intermediate results. Current use is primarily for decision support rather than fully autonomous attacks. The threshold between AI-assisted and AI-directed attacks has not yet been crossed at scale. But the experimental groundwork is being laid.

The more fundamental issue is structural. AI safety mechanisms are optimized to prevent obvious misuse by casual users. Sophisticated threat actors are neither casual nor deterred by friction. The arms race between safety red-teaming and jailbreaking is real and ongoing, but the adversaries have significant advantages: they're highly motivated, they share techniques within criminal and state actor communities, and they only need to find one working bypass while defenders need to close all of them.

The emergence of agentic AI in attacker toolkits is the development that most concerns researchers looking further ahead. Current AI use in attacks requires human operators to issue prompts and evaluate outputs. Agentic systems - which can receive a high-level objective, plan the steps to achieve it, execute those steps, observe results, and adapt - would remove the human-in-the-loop requirement for many attack phases. Microsoft notes the experiments in this direction are currently limited, but the technical components that would enable autonomous, AI-directed cyberattacks are being assembled.

What Defenders Can Actually Do

Both Microsoft and Google conclude their reports with defensive recommendations. The recommendations are not new - reduce attack surface, monitor for anomalous behavior, patch rapidly, harden identity systems - but the AI context gives them sharper urgency. A few specific adaptations are worth highlighting.

The expansion of AI-powered phishing means that human behavioral training, which has been the cornerstone of enterprise security awareness programs for a decade, is becoming insufficient as a primary control. Training users to spot "suspicious" emails worked reasonably well when attackers were limited by language barriers and template-based content. It doesn't work against contextually aware, linguistically perfect, individually tailored lures. Organizations need to shift defensive investment toward technical controls - multi-factor authentication, hardware security keys, zero-trust network architectures - that don't rely on users making the right judgment call in the moment.

The edge device problem requires a cultural shift in how security operations teams allocate attention. Most mature security programs have sophisticated monitoring for user endpoint behavior and cloud activity. The networking equipment that constitutes the actual perimeter is often monitored minimally or not at all, because it's managed by network operations teams with different tooling and priorities. Converging endpoint and network monitoring under unified visibility is operationally difficult but strategically necessary given where threat actors are actually targeting.

Microsoft's framing of the IT worker infiltration schemes as insider risks rather than external intrusions has practical implications for hiring processes. Remote work screening - which most companies treat as a background check and a video call - is inadequate against adversaries running industrial-scale identity fabrication operations. More rigorous identity verification for remote technical roles, particularly those with access to sensitive systems or source code repositories, is no longer optional for organizations that could plausibly be targets of state-sponsored operations.

On the zero-day front, Google's recommendation around rapid patching and incident response is correct but understates the challenge. Many of the most-targeted products in 2025 - networking appliances, VPN gateways, security appliances - have patch cycles measured in weeks and deployment timelines that require extensive testing before production rollout. Vendors need to shorten those cycles and provide better tooling for emergency patching. Until they do, the gap between patch release and widespread deployment will continue to be one of the most reliably exploitable windows in enterprise security.

The Second-Order Problem No One's Talking About

There is a version of this story where the AI-augmented cyberattack problem is primarily a technical one with technical solutions - better detection, faster patching, stronger authentication. That framing is incomplete.

The deeper issue is economic. Developing and maintaining capability at the level Google and Microsoft are documenting requires substantial investment. State actors have state budgets. Commercial spyware vendors have enterprise contracts. The financially motivated ransomware groups that accounted for nine zero-days in 2025 are generating revenues that make them credible purchasers of serious capabilities.

Defenders are not comparably resourced across the economy. A mid-size hospital network, a regional utility, a municipal government - the organizations that increasingly form the soft targets in sophisticated attack chains - cannot compete with adversaries drawing on state intelligence budgets or criminal revenue streams. And AI, which in principle could be a democratizing force for defense by automating detection and response, is in practice currently doing more for the offense because offense has simpler requirements and lower accountability thresholds.

The asymmetry has a policy dimension that regulation has barely begun to address. Requiring security appliance vendors to provide better edge visibility tools, mandating faster patch disclosure timelines for critical infrastructure products, creating international frameworks to constrain commercial spyware proliferation - these are political decisions that technical reports can illuminate but cannot force. The data from Microsoft and Google makes the urgency impossible to deny. Whether urgency translates into action is a different question, with a less encouraging recent track record.

What's clear is that the comfortable assumption that "defenders will catch up" needs to be retired. The trend lines across every metric in both reports - zero-day counts, target sophistication, commercial vendor proliferation, AI adoption by adversaries - are moving in the wrong direction. The window to change that trajectory through proactive investment and policy action is not indefinitely open.


Sources: Microsoft Threat Intelligence (March 2026); Google Threat Intelligence Group Annual Zero-Day Report (March 2026); BleepingComputer reporting; Proton Transparency Report 2025; NSO Group Entity List documentation (US Dept. of Commerce, November 2021); Salt Typhoon congressional testimony (December 2024).