
Prasad Kavuri
AI Engineering Leader | Built Agentic AI at Krutrim & Ola | LLM Platforms · AI FinOps · 200+ Teams | Chicago
Agentic AI · LLM Platforms · Applied AI Strategy · Global Engineering Leadership
I've spent the last 20 years building and scaling technology platforms, from cloud transformation to what we're now seeing with Agentic AI. What I care about is simple: turning AI from something that looks impressive in a demo into something that actually delivers business value at scale.

At Krutrim, I led teams building India's first agentic AI platform (Kruti.ai), working across multi-model orchestration, real-time personalization, and production-grade systems. At Ola, I helped scale mapping and location platforms to support 13,000+ B2B customers. Across both, the focus has been consistent: take complex systems and make them reliable, efficient, and commercially viable.

A big part of my work sits in the gap most companies struggle with: moving from experimentation to production. That means designing multi-agent workflows that go beyond chat, driving 40-70% cost reductions through smarter model strategies, and building the governance layer that lets enterprises actually trust what they're deploying.

I've also spent a significant part of my career building and leading global engineering teams across North America, Europe, and APAC, creating environments where teams can move fast, challenge ideas, and still stay aligned to business outcomes.

Right now, I'm focused on helping organizations move past the "PoC stage" and actually operationalize AI, especially in environments where scale, cost, and trust matter. I'm based in the Chicago area and always open to conversations around AI strategy, platform engineering, and where this space is heading next.
Most AI programs fail in production because cost discipline, governance, and operational ownership are bolted on too late.
I build production AI systems — not prototypes.
I optimize for cost, latency, and scalability — not just model quality.
I align engineering, product, and business teams around measurable outcomes.
I design AI systems with measurable quality loops, human oversight, and governance.
Signature System: AI Evaluation Showcase
The platform builds in offline eval suites, live drift monitoring, hallucination indicators, and regression-focused quality gating.
Why this matters: quality regressions surface before release, so AI reliability is managed as an engineering system rather than an afterthought.
Explore Signature System
13K+ B2B Customers Enabled
Up to 70% Cost Reduction Delivered
Currently Exploring
On-device Small Language Models · Agent-to-Agent (A2A) Protocol · LLM Observability and Tracing · Multimodal Agentic Workflows