Agentic AI

Agentic AI Changes More Than Your Tech Stack — It Changes How Work Gets Done

4 min read · Prasad Kavuri

Most of the conversation around agentic AI is still focused on the model layer. Which foundation model, which orchestration framework, which vector database. That's the wrong level of abstraction. The more important shift is operational.

When AI moves from answering questions to executing tasks, the failure modes change completely. A wrong answer in a chatbot is annoying. A wrong action in an agent workflow can modify data, trigger downstream systems, or make irreversible decisions. The engineering discipline required is closer to distributed systems reliability than to prompt engineering.

This means human-in-the-loop checkpoints aren't a nice-to-have — they're an architectural requirement for any agentic system operating in a business context. It means evaluation frameworks need to cover not just response quality but action correctness. And it means trace IDs and audit logs need to be first-class citizens, not afterthoughts.
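To make that concrete, here is a minimal sketch of what a checkpoint-and-audit layer can look like. Everything in it is illustrative: the `REQUIRES_APPROVAL` set, the `execute_action` wrapper, and the `approve_fn` callback are hypothetical names, not any particular framework's API.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.audit")

# Hypothetical registry of actions that mutate state and therefore
# require a human approval before execution.
REQUIRES_APPROVAL = {"delete_record", "send_payment"}

def execute_action(name, params, approve_fn, handlers):
    """Run one agent action with a trace ID, audit log entries, and a
    human-in-the-loop checkpoint for actions flagged as risky."""
    trace_id = str(uuid.uuid4())  # first-class trace ID, attached to every log line
    log.info("trace=%s action=%s params=%s", trace_id, name, params)

    # Checkpoint: risky actions stop here until a human approves them.
    if name in REQUIRES_APPROVAL and not approve_fn(name, params):
        log.info("trace=%s action=%s result=rejected_by_human", trace_id, name)
        return {"trace_id": trace_id, "status": "rejected"}

    result = handlers[name](**params)
    log.info("trace=%s action=%s result=ok", trace_id, name)
    return {"trace_id": trace_id, "status": "ok", "result": result}
```

The point of the sketch is that the trace ID and the approval gate sit in the execution path itself, so no action can run without leaving an auditable record.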

The organizations getting this right are treating agentic AI the way they treat any other production system: with runbooks, rollback procedures, cost budgets, and on-call escalation paths. The ones struggling are treating it like a chatbot that can also do things — and discovering, usually painfully, that the gap between those two mental models is enormous.
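One of those production-system controls, a hard cost budget, is simple to enforce in code. This is a sketch under assumptions: the `CostBudget` class and `charge` method are invented names, and real per-call costs would come from your provider's usage metadata rather than being passed in directly.

```python
class BudgetExceeded(Exception):
    """Raised when an agent run would spend past its hard ceiling."""

class CostBudget:
    """Per-run spend ceiling, checked before each model or tool call."""

    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        # Refuse the call *before* spending, so a runaway loop halts
        # at the ceiling instead of discovering the overrun afterward.
        if self.spent_usd + cost_usd > self.limit_usd:
            raise BudgetExceeded(
                f"would spend ${self.spent_usd + cost_usd:.2f}, "
                f"limit is ${self.limit_usd:.2f}"
            )
        self.spent_usd += cost_usd
```

Wiring a guard like this into the agent loop turns "the agent spent too much overnight" from an incident into a clean, logged stop.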

Building agentic systems that actually ship to production requires rethinking not just the tech stack but the operating model around it. That's the work I find most interesting — and the problem that most AI deployments are still underestimating.