3/15/2026 · 7 min read

AI for Consulting Firms: What Actually Works (and What Doesn't)

After working with consulting teams on AI adoption, here's what delivers real value and what wastes everyone's time.

Consulting firms have a complicated relationship with AI. They sell AI strategy to clients, but their own internal adoption is often limited to individual partners using ChatGPT when nobody's watching.

I've spent months talking to consulting teams about how they use AI — and how they wish they could use it. Here's what I've learned about what actually works.

What doesn't work: giving everyone ChatGPT access and hoping for the best. This is the most common approach, and it fails for three reasons. First, output quality varies wildly because each consultant prompts differently. Second, nobody shares their approaches, so there's no compounding value. Third, the output is generic — it doesn't follow the firm's methodology or template structure.

What doesn't work: custom GPTs with long system prompts. Some firms create internal GPTs with detailed instructions. This is better — at least there's a shared starting point. But the system prompt is static, disconnected from the firm's documents, and identical for every consultant regardless of their specialization or style.

What doesn't work: building custom AI tools from scratch. A few firms have tried building internal tools with LangChain or similar frameworks. The result is almost always the same: three months of development, a prototype that works for exactly one use case, and nobody to maintain it when the developer moves on.

What actually works: structured AI agents with the firm's methodology embedded as permanent context.

Here's what that looks like in practice. The firm's engagement framework — say, a 5-phase approach with specific deliverables at each phase — is configured as team context. The proposal template structure, pricing guidelines, and quality standards are in the knowledge base. Each partner's communication style is captured as personal context.
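To make the layering concrete, here is a minimal sketch of how those three layers could be assembled into one agent prompt. Everything here is illustrative: the dictionary shapes, the `build_agent_prompt` helper, and the phase names are hypothetical examples, not a real product API or any firm's actual framework.

```python
# Illustrative sketch only: layering team context, knowledge base, and
# personal context into a single system prompt. All names are hypothetical.

TEAM_CONTEXT = {
    "framework": "the firm's 5-phase engagement framework",
    "phases": ["Discover", "Diagnose", "Design", "Deliver", "Debrief"],
}

KNOWLEDGE_BASE = [
    "Proposal template: executive summary, scope, phases, pricing, terms",
    "Pricing guidelines: tiered by engagement size",
]

def build_agent_prompt(team: dict, knowledge: list[str], personal_style: str) -> str:
    """Layer firm methodology, knowledge-base excerpts, and the individual
    consultant's communication style into one system prompt."""
    phases = " -> ".join(team["phases"])
    lines = [
        f"Follow {team['framework']}: {phases}.",
        "Reference material:",
        *[f"- {doc}" for doc in knowledge],
        f"Match this consultant's communication style: {personal_style}",
    ]
    return "\n".join(lines)

prompt = build_agent_prompt(TEAM_CONTEXT, KNOWLEDGE_BASE, "concise, client-facing")
```

The point of the structure, not the code: the firm layer is shared and stable, the knowledge layer comes from documents, and the personal layer varies per consultant, so the same agent behaves consistently across the team without flattening individual style.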

When a junior consultant asks the Proposal Writer agent to draft a proposal for a financial services client, the output follows the firm's 5-phase framework, uses the correct pricing tiers, includes the standard scope definition format, and matches the firm's tone. The junior consultant's first draft looks like a senior consultant wrote it.

The Quality Reviewer agent then checks the draft against the firm's quality standards: methodology compliance, data accuracy, formatting, and client-appropriate language. Issues are flagged before the partner ever sees the document.
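Part of that review step can be sketched as a plain checklist pass. This is a hypothetical example: the `review_draft` helper, section names, and banned terms are invented for illustration, and the harder checks (methodology compliance, data accuracy) would need an LLM pass rather than string matching.

```python
# Hypothetical sketch of the deterministic slice of a quality review:
# flag missing sections and client-inappropriate language in a draft.

def review_draft(draft: str, required_sections: list[str],
                 banned_terms: list[str]) -> list[str]:
    """Return a list of issues to surface before a partner sees the document."""
    text = draft.lower()
    issues = []
    for section in required_sections:
        if section.lower() not in text:
            issues.append(f"Missing required section: {section}")
    for term in banned_terms:
        if term.lower() in text:
            issues.append(f"Client-inappropriate language: {term}")
    return issues
```

A useful design split: run cheap deterministic checks like these first, then reserve the model-based review for judgment calls the checklist can't catch.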

The Research Analyst agent pulls relevant industry data, past case studies from similar engagements, and competitive landscape information — all from the firm's own knowledge base plus current web research.

The key insight is that consulting work is highly structured but repetitive. Every proposal follows a similar pattern. Every client report has the same sections. Every quality review checks the same criteria. AI agents that understand these patterns and follow the firm's specific methodology can handle 60-70% of the work — leaving consultants to focus on the strategic thinking and client relationships that no AI can replace.

The firms that are getting real value from AI aren't the ones with the fanciest technology. They're the ones that invested time in codifying their methodology and making it available to their AI tools.
