Recently, there’s been some very public (and, frankly, very funny) AI agent and bot failures.
Like Chipotle’s assistant supporting codegen (since patched): “Stop spending money on Claude Code. Chipotle’s support bot is free” (r/ClaudeCode)
And in a surreal fashion, Washington state’s call-center hotline providing Spanish support by speaking English with a Spanish accent: “Washington state hotline callers hear AI voice with Spanish accent” (AP News)
Coinciding with this, other Forrester analysts and I have had a spate of calls where organizations have launched a new AI agent without testing them.
Put simply, please do not do this.
Please test your AI agents before launching them — some options on how to do this are below.
What do we mean by this?
At minimum: Test all your bot’s features (and use cases) yourself.
For any AI agent, or new feature you’re introducing to it, the minimum effort you should invest is to make sure someone has used it as an end user before this goes live.
This can be as simple as someone on the developer team or as involved as a dedicated testing group. But you need to make sure that someone has actively used your solution — and all its features. This should also be done on an ongoing basis so that when new features are launched, they’re tested, too.
This can be time-intensive, but as we see with the public cases, not everything works as expected all the time.
In fact, AI can go wrong in more unexpected ways than before. If you can’t ensure that features are working as intended, then you might end up on the news.
Please note that this is the minimum possible effort. This is not enough to ensure that something won’t go wrong or your application won’t fail — this will only catch the most obvious/embarrassing outcomes. A more robust testing practice is recommended.
For more on how agentic systems fail: Why AI Agents Fail (And How To Fix Them)
Recommended: Practice red teaming.
A good way to prevent this kind of unexpected permutation is with red teaming or intentionally trying to break the bot. We recommend this as a standard practice for your organization.
There are two sides to this: One is traditional or infosec red teaming. This is focused on finding security exploits. The second is behavioral. This is focused on getting the solution or model to behave in an inappropriate or unintended fashion. It is best to have a practice on both.
At the very least, your team should kick the tires for a day and try as many exploits as possible. Even when you have a governance layer, you must ensure that it’s holding up in the wild or, ideally, even post-launch.
For more on the red team practice: Use AI Red Teaming To Evaluate The Security Posture Of AI-Enabled Applications
For more on standard governance approaches that should be followed: Introducing Forrester’s AEGIS Framework: Agentic AI Enterprise Guardrails For Information Security
For specific common governance failures, see AIUC-1’s page, “The world’s first AI agent standard”
For a fun example of what employee-driven red teaming can look like, check out Anthropic’s write-up, “Project Vend: Can Claude run a small shop? (And why does that matter?)”
Recommended: Test using a testing suite and practice.
Testing an AI agent system that has agentic capabilities is still an emerging field, but rapid progress is being made. To supplement your testing programs (people whose job is to test your AI tools, applications, and agents), testing suites provide additional integrated support. There are two ways to think of testing suites today: synthetic and ongoing agentic.
Synthetic tests are simple — they test your AI agent against a sample of precreated prompts and ideal answers to act as a “golden set” to test against. This allows you to perform a regression test over time to validate the question, “Does our AI agent provide the correct responses?”
But synthetic regression tests are often only performed for an AI agent after some noteworthy change, such as switching out the model used or introducing a number of new use cases. Increasingly, larger testing suites are looking to test automatically and continuously. Other techniques like large language model-as-a-judge can provide supplementary runtime supervision.
(Further work is coming from Forrester on synthetic testing.)
Please note that if you do not have a formal testing program for AI systems, please either hire people for this or hire a testing services company.
For more on building tests, see Anthropic’s, “Demystifying evals for AI agents”
For more on autonomous testing: The Forrester Wave™: Autonomous Testing Platforms, Q4 2025
For how you can make continuous testing work: It’s Time To Get Really Serious About Testing Your AI: Part Two
Recommended: Test with a representative sample.
The ultimate test of your agents, however, will come from your users. They alone determine if you pass or fail. It is in your best interests to make them happy.
The question is: How do we test with real users before production? The answer is a user champion group (or similar convention). These are users who have either volunteered themselves or been selected by you to test what your agent is capable of.
This is easier in internal-facing use cases, as employee groups are more straightforward to assemble, but many customer-facing organizations can achieve the same thing through voluntary test sign-ups.
The risk is that you have users who are an overeager group who don’t make up a representative sample of your user base. In other words, they don’t necessarily represent your average user. This can be avoided through careful group design or, at least, asking users to take on a persona when conducting the test.
If this isn’t possible, you could use a canary test/conditional rollout that can serve as this testbed (though it’s better when it’s voluntary).
For more on building this user champion group internally: Best Practices For Internal Conversational AI Adoption





















