Podcast
Sean & Ari's Hot Takes

Agent Washing, Guardrails, and Busted AI Projects

Episode #6  |  November 19, 2025  |  36 min

Episode Overview

In this episode, Sean Heuer and Ari Stowe dig into whether today’s so-called “AI agents” are actually agentic… or just glorified chatbots riding the hype. They also tackle broken data strategies, explain why “dump everything into an LLM and see what happens” is a fast track to a busted project, and show how shrinking the universe to specific, high-value use cases keeps AI from collapsing under its own buzz.

Key Takeaways

  • “Agent washing” is real, and the simplest tell is whether the “agent” can take action. If it only chats or summarizes information, it’s closer to a chatbot. If it can plan and execute (safely) across systems, you’re looking at something genuinely agentic.
  • Real agentic systems tend to look like teams, not individuals. The episode leans into “AI FTEs” as a useful metaphor: multiple domain-specific agents (often small, specialized models) orchestrated together for better precision and fewer bizarre outputs than a single “do everything” model.
  • Data isn’t the only blocker, but “old busted data on old busted systems” is how you get expensive busted projects. The point isn’t “fix all data first.” It’s: shrink the universe, pick a high-impact use case, get the minimum clean context you need, and expand from there.
  • The strategy that keeps showing up is incremental execution over ocean-boiling. Start small, prove an outcome the business cares about, earn permission for the next use case, and build momentum. AI doesn’t change that playbook; it makes the penalties for skipping it worse.

FAQ

Q: How can buyers spot “agent washing” when vendors call everything agentic?

A: Look for action and flexibility. If it only returns information, it’s a chatbot. If it can plan and execute steps in your environment (with guardrails) and adapt when inputs change, it’s agentic. In practice, you only know for sure by testing real use cases in your environment.

Timestamp: 3:42–6:38

Q: Is orchestration a reliable differentiator between chatbots and agentic AI?

A: Yes, because orchestration is what turns intent into controlled execution. Chatbots can retrieve knowledge and even route users, but agentic systems connect intent to workflows, apply permissions and guardrails, take action, and produce a traceable record of what happened. That orchestration layer is also where enterprises start solving hard problems.

Timestamp: 12:51–16:18

Q: Is bad data the #1 blocker to agentic AI success?

A: It’s a major blocker, but the bigger issue is context. AI “thrives on context,” and messy, siloed data creates unreliable context, which leads to unreliable outcomes. The recommended approach is to choose a high-value use case, build a clean dataset/context, and expand incrementally instead of dumping everything in and hoping.

Timestamp: 26:12–34:59