For the last year, one word has dominated the conversation at the intersection of AI and cybersecurity: speed. Speed matters, but it’s not the most important shift we are observing across the threat landscape today. Threat actors from nation-states to cybercrime groups are now embedding AI into how they plan, refine, and sustain cyberattacks. The objectives haven’t changed, but the tempo, iteration, and scale of generative AI-enabled attacks are upgrading them.
However, as with defenders, there is typically still a human in the loop powering these attacks; fully autonomous, agentic AI is not yet running campaigns. AI is reducing friction across the attack lifecycle, helping threat actors research faster, write better lures, vibe-code malware, and triage stolen data. The security leaders I spoke with at RSAC™ 2026 Conference this week are prioritizing resources and strategy shifts to get ahead of this critical progression across the threat landscape.
The operational reality: Embedded, not emerging
The scale of what we are tracking is impossible to dismiss. Threat activity spans every region. The United States alone represents nearly 25% of observed activity, followed by the United Kingdom, Israel, and Germany. That volume reflects economic and geopolitical realities.1
But the bigger shift is not geographic; it’s operational. Threat actors are embedding AI into how they work across reconnaissance, malware development, and post-compromise operations. Objectives like credential theft, financial gain, and espionage might look familiar, but the precision, persistence, and scale behind them have changed.
Email is still the fastest inroad
Email remains the fastest and cheapest path to initial access. What has changed is the level of refinement that AI enables in crafting the message that gets someone to click.
When AI is embedded into phishing operations, we are seeing click-through rates reach 54%, compared to roughly 12% for more traditional campaigns. That makes AI-crafted lures roughly 4.5 times as effective. It’s not the result of increased volume, but of improved precision. AI is helping threat actors localize content and adapt messaging to specific roles, reducing the friction in crafting a lure that converts into access. When you combine that improved effectiveness with infrastructure designed to bypass multifactor authentication (MFA), the result is phishing operations that are more resilient, more targeted, and significantly harder to defend against at scale.
A 4.5x jump in click-through rates changes the risk calculus for every organization. It also signals that AI is not just being used to do more of the same; it is being used to do it better.
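To make that risk calculus concrete, here is a back-of-the-envelope sketch in Python using the two observed click-through rates. The campaign size is a hypothetical assumption for illustration, not an observed figure.

```python
# Back-of-the-envelope: what the observed click-through rates mean for
# expected exposure. The two rates come from the observations above;
# the campaign size is a hypothetical assumption.

BASELINE_CTR = 0.12        # traditional phishing campaigns
AI_CTR = 0.54              # AI-assisted phishing campaigns
EMAILS_DELIVERED = 10_000  # hypothetical campaign against one organization

lift = AI_CTR / BASELINE_CTR
print(f"Relative lift: {lift:.1f}x")                                           # 4.5x
print(f"Expected clicks, traditional: {EMAILS_DELIVERED * BASELINE_CTR:.0f}")  # 1200
print(f"Expected clicks, AI-assisted: {EMAILS_DELIVERED * AI_CTR:.0f}")        # 5400
```

Every one of those additional clicks is a potential credential or session token in an attacker’s hands, which is why the precision gain matters more than raw volume.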
Tycoon2FA: What industrial-scale cybercrime looks like
Tycoon2FA is an example of how the actor we track as Storm-1747 shifted toward refinement and resilience. Understanding how it operated teaches us where threats might be headed, and it fueled conversations in the briefing rooms at RSAC 2026 this week that focused on the ecosystem rather than individual actors.
Tycoon2FA was not a phishing kit; it was a subscription platform that generated tens of millions of phishing emails per month and was linked to nearly 100,000 compromised organizations since 2023. At its peak, it accounted for roughly 62% of all phishing attempts that Microsoft was blocking every month. The operation specialized in adversary-in-the-middle attacks designed to defeat MFA: it intercepted credentials and session tokens in real time, allowing attackers to authenticate as legitimate users without triggering alerts, even after passwords were reset.
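One practical implication for defenders: because the stolen artifact is a live session token rather than just a password, hunting shifts from failed-login noise to session anomalies. The sketch below shows one such heuristic, flagging a session token presented from a second network shortly after issuance. It is an illustration of the idea, not Microsoft’s detection logic; the telemetry fields, the 30-minute window, and the sample ASNs are assumptions.

```python
# Minimal sketch of one AiTM hunting heuristic: the same session token
# used from two different networks within a short window, consistent
# with a token relayed through attacker infrastructure. Field names,
# the window, and the sample ASNs are illustrative assumptions.

from collections import defaultdict
from datetime import datetime, timedelta

REPLAY_WINDOW = timedelta(minutes=30)

def find_suspect_sessions(events):
    """events: dicts with 'session_id', 'timestamp' (datetime), 'asn'."""
    by_session = defaultdict(list)
    for e in events:
        by_session[e["session_id"]].append(e)

    suspects = []
    for sid, evs in by_session.items():
        evs.sort(key=lambda e: e["timestamp"])
        first = evs[0]
        for later in evs[1:]:
            # Same token, different network, shortly after issuance.
            if (later["asn"] != first["asn"]
                    and later["timestamp"] - first["timestamp"] <= REPLAY_WINDOW):
                suspects.append(sid)
                break
    return suspects

# Hypothetical telemetry: a token issued to the user at 9:00, then
# replayed from a different ASN three minutes later.
events = [
    {"session_id": "s1", "timestamp": datetime(2026, 3, 1, 9, 0), "asn": "AS7018"},
    {"session_id": "s1", "timestamp": datetime(2026, 3, 1, 9, 3), "asn": "AS14061"},
]
print(find_suspect_sessions(events))  # ['s1']
```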
But the technical capability is only part of the story. The bigger shift is structural. Storm-1747 was not operating alone. This was modular cybercrime: one service handled phishing templates, another provided infrastructure, another managed email distribution, another monetized access. It was effectively an assembly line for identity theft. The services were composable, scalable, and available by subscription.
This is the model that has changed the conversations this week: it is not about a single sophisticated actor; it is about an ecosystem that industrializes access and lowers the barrier to entry for every actor that plugs into it. That is exactly what AI is doing across the broader threat landscape: making the capabilities of sophisticated actors available to everyone.
Disruption: Closing the threat intelligence loop
Our Digital Crimes Unit disrupted Tycoon2FA earlier this month, seizing 330 domains in coordination with Europol and industry partners. But the goal was not simply to take down websites. The goal was to apply pressure to a supply chain. Cybercrime today is about scalable service models that lower the barrier to entry. Identity is the primary target and MFA bypass is now packaged as a feature. Disrupting one service forces the market to adapt. Sustained pressure fragments the ecosystem. By targeting the economic engine behind attacks, we can reshape the risk environment.
Every time we disrupt an attack, it generates signal. The signal feeds intelligence. The intelligence strengthens detection. Detection is what drives response. That is how we turn threat actor actions into durable defenses, and how the work of disruption compounds over time. Microsoft’s ability to observe at scale, act at scale, and share intelligence at scale is a differentiator that matters because of how we put it into practice.
AI across the full attack lifecycle
When we step back from any single campaign and look for a broader pattern, AI doesn’t show up in just one phase of an attack; it appears across the entire lifecycle. At RSAC 2026 this week, I offered a frame to help defenders prioritize their response:
- In reconnaissance: AI accelerates infrastructure discovery and persona development, compressing the time between target selection and first contact.
- In resource development: AI generates forged documents and polished social engineering narratives, and supports infrastructure at scale.
- For initial access: AI refines voice overlays, deepfakes, and message customization using scraped data, producing lures that are increasingly difficult to distinguish from legitimate communications.
- In persistence and evasion: AI scales fake identities and automates communication that maintains attacker presence while blending with normal activity.
- In weaponization: AI enables malware development, payload regeneration, and real-time debugging, producing tooling that mutates faster than static signatures can track.
- In post-compromise operations: AI adapts tooling to the specific victim environment and, in some cases, automates ransom negotiation itself.
The objective has not changed: credential theft, financial gain, and espionage. What has changed is the tempo, the iteration speed, and the ability to test and refine at scale. AI is not just accelerating cyberattacks; it’s upgrading them.
What comes next
In my sessions at RSAC 2026 this week, I shared a set of themes that help define the AI-powered shift in the threat landscape.
The first is the agentic threat model. The scenarios we prepare for have changed. The barrier to launching sophisticated attacks has collapsed. What once required the resources of a nation-state or well-organized criminal enterprise is now accessible to a motivated individual with the right tools and the patience to use them. The techniques have not fundamentally changed; the precision, velocity, and volume have.
The second is the software supply chain. Knowing what software and agents you have deployed, and being able to account for their behavior, is not a compliance exercise; it is a defensive prerequisite. The agent ecosystem will become the most attacked surface in the enterprise. Organizations that cannot answer basic inventory questions about their agent environment will not be able to defend it.
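As a sketch of what that baseline looks like, the snippet below records each deployed agent with an accountable owner, its permissions, and the data it can reach. The schema and example values are illustrative assumptions, not a prescribed format.

```python
# Minimal agent inventory sketch: enough structure to answer the basic
# questions named above. Schema and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str              # accountable human or team
    model: str              # underlying model or service
    permissions: list[str]  # scopes the agent can exercise
    data_access: list[str]  # systems and datasets it can reach

inventory = [
    AgentRecord(
        name="ticket-triage-agent",
        owner="secops@example.com",
        model="example-llm-v1",
        permissions=["ticketing:read", "ticketing:comment"],
        data_access=["helpdesk-tickets"],
    ),
]

# The basic inventory question a defender must be able to answer:
for agent in inventory:
    print(f"{agent.name}: owner={agent.owner}, scopes={agent.permissions}")
```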
The third is understanding the value of human talent in a security operation that uses agentic systems to scale. The security analyst as practitioner is giving way to the security analyst as orchestrator, and the talent models organizations are hiring against today are already outdated. Technology can help protect humans who will inevitably make mistakes, but that means auditability of agent decisions is a governance requirement today, not eventually. The SOC of the future demands a fundamentally different kind of defender.
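To make the auditability requirement concrete, here is a minimal sketch of a tamper-evident log of agent decisions, where each entry hashes the one before it so alterations are detectable. The field names and chaining scheme are illustrative assumptions, not a specific product’s format.

```python
# Minimal sketch of agent-decision auditability: an append-only log in
# which each entry includes the hash of its predecessor. Field names
# and the chaining scheme are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, agent, action, rationale):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "rationale": rationale,  # why the agent acted, for human review
        "prev": prev_hash,
    }
    # The hash is computed over the entry before the hash field is added.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log = []
append_entry(audit_log, "ticket-triage-agent", "close_ticket:1042",
             "classified as duplicate of ticket 1038")
# An auditor can recompute each hash to verify no entry was altered.
```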
The moment to lead with strategic clarity, ranked priorities, and a hardened posture for agentic accountability is now.
If AI is embedded across the attack lifecycle, intelligence and defense must be embedded across the lifecycle too. Microsoft Threat Intelligence will continue to track, publish, and act on what we are observing in real time. The patterns are visible. The intelligence is there.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.