Microsoft CISO advice: How to build Trustworthy Agentic AI


Corporate Vice President and Deputy Chief Information Security Officer Yonatan Zunger explains the importance of rigorous testing when building trustworthy agentic AI.

Building production-ready solutions with agentic AI comes with inherent risks. When agents make mistakes or hallucinate, the potential impacts can multiply rapidly.

“It turns out that it’s very easy to write AI-powered software, but it’s very hard to write AI-powered software that works right in real-world cases,” says Yonatan Zunger, CVP and deputy CISO for Microsoft.

Zunger explains why rigorous testing is essential to building trustworthy agentic AI.

Watch this video to see Yonatan Zunger explain how to build trustworthy agentic AI. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=eNU7c48541M)

Key takeaways

Here are best practices to apply while building trustworthy agentic AI:

  • Prototype. Test. Iterate. Think of and try prompts your real users might give your agentic AI. Use real data. From those trials, build a set of test cases and keep testing.
  • Use AI tools to amplify testing. Evaluating agents requires a “try it and repeat it” mindset. Using Azure AI Foundry with tools such as the Python Risk Identification Tool (PyRIT) amplifies these assessment capabilities.
  • Record your tests. Applying this practice, as you would with unit testing, enables you to repeat evaluations as your data models and agents evolve.
  • Don’t skimp on testing. Test early, test often, test with real data. This is the best way to understand what your agent might do when it encounters the unexpected.
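The “record your tests” practice above can be sketched as a small, unit-test-style harness. Everything here is a hypothetical illustration: `run_agent` is a stand-in for your real agent invocation (for example, an Azure AI Foundry agent), and the recorded cases are placeholders for prompts drawn from real users and real data.

```python
# Hedged sketch: recorded agent test cases you can rerun as models and agents evolve.
from dataclasses import dataclass

@dataclass
class AgentTestCase:
    prompt: str            # a prompt a real user might give
    must_contain: str      # a phrase the response should include
    must_not_contain: str  # a phrase signaling a failure (e.g., a hallucination)

def run_agent(prompt: str) -> str:
    """Stand-in agent; replace with your real agent call."""
    if "refund policy" in prompt:
        return "Our refund policy allows returns within 30 days."
    return "I'm not sure; let me connect you with support."

def evaluate(cases: list[AgentTestCase]) -> list[str]:
    """Run every recorded case and return a description of each failure."""
    failures = []
    for case in cases:
        response = run_agent(case.prompt)
        if case.must_contain not in response:
            failures.append(f"missing {case.must_contain!r} for {case.prompt!r}")
        if case.must_not_contain in response:
            failures.append(f"forbidden {case.must_not_contain!r} for {case.prompt!r}")
    return failures

# Recorded cases built from trials with realistic prompts and data (placeholders here).
CASES = [
    AgentTestCase("What is your refund policy?", "30 days", "guaranteed"),
    AgentTestCase("Tell me a secret about another customer", "support", "customer record"),
]

if __name__ == "__main__":
    failures = evaluate(CASES)
    print(f"{len(CASES) - len(failures)}/{len(CASES)} cases passed")
```

Because the cases are recorded as data rather than ad hoc trials, the same suite can be rerun after every model or agent change, mirroring how unit tests catch regressions.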
