Agentic AI, like any software, is just one part of a business solution, and it is not the only element that needs to be secured. Engineers need to approach securing agentic AI in the corporate IT ecosystem the same way they would approach any other security problem: end to end.
Yonatan Zunger, CVP and deputy CISO for Microsoft, suggests that focusing exclusively on hardening a piece of software against security threats can make it difficult to use, which introduces a new risk when frustrated users try to bypass controls. This is why engineers need to consider not just individual components but how they work together to maintain productivity.
“Think of every system as a socio-technical system containing many parts, and all of them working together in unison have to be secured,” Zunger says.

Learn from our experience
Read our practical advice about applying security fundamentals to AI.

Related links
- Explore more from Zunger about how to deploy AI safely.
- Discover what you need to know about governing autonomous agents.
- Learn about aligning user, developer role, and organizational intent in agent governance.
- Find out how to use Authorization Fabric to govern AI agents at scale.
- Read about agent abuse patterns.
- Identify ways to secure AI agents with Azure AI Foundry.
- Explore how Data Security Posture Management for AI can prevent runtime risks.
