Agentic Bloat: Why Collecting AI Tools Won’t Save You

Point AI solutions feel like progress, until you realize you’ve created the same fragmented knowledge problem that made AI necessary in the first place.

David Cotten
CTO & Co-Founder

The Pattern We’re Seeing

Every week, I talk with professional services leaders who’ve done the “responsible” thing. They’ve responded to the AI imperative and they’ve invested.

They’re leveraging ChatGPT Enterprise for research, Cursor for code assistance, a proposal generator from one vendor, a meeting summarizer from another. Maybe an agent that pulls project data, and a separate one that drafts status reports.

Each purchase made sense, and each solves a real problem. But collectively? They’ve created a new version of the same fragmented knowledge problem, and the same operational spend, that made AI necessary in the first place.

Many are trying to build “it” on their own, but what exactly are they building? And what happens when the person building it leaves?

Gartner projects that more than 40% of agentic AI projects will be cancelled by the end of 2027. That’s not a failure of ambition but the predictable outcome of treating AI adoption as a procurement exercise rather than an architectural one.

Why It Feels Right But Isn’t

Let’s be clear: the individual tools aren’t the problem. They work. The meeting summarizer does summarize meetings, proposal generators do generate proposals, and the efficiency gains are real, if modest.

The problem is that none of them talk to each other. They can’t share context, and they don’t know what the others know.

Your meeting summarizer captures a client’s concern about timeline risk. Your proposal generator, running on completely separate context, produces a scope that doesn’t account for it. Your project status agent pulls metrics that look fine, but has no visibility into the sentiment data sitting in another tool’s silo. To compound the problem, the people prompting these tools still have the same context gaps as before AI. The developers haven’t spoken to the client. The BA doesn’t understand the client’s business challenge. We're perpetuating the same garbage-in-garbage-out problem as before, but now with AI.

You’ve created AI islands, and just like the data islands and knowledge silos that have plagued professional services for decades, the knowledge in them doesn't compound.
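To make the failure mode concrete, here’s a toy sketch in Python. It is illustrative only: the two “tools” are invented stub functions, not a real product’s API. The point is the shape of the problem, not the logic, which could be an LLM call in practice. With isolated context, the proposal generator never sees the risk the summarizer captured; with a shared store, it does.

```python
# Illustrative sketch: two point tools with isolated context vs. a shared
# context store. All names here are hypothetical, not any vendor's API.

def summarize_meeting(notes: str, context: dict) -> str:
    """Meeting summarizer: records a client concern into its context."""
    if "timeline" in notes.lower():
        context["risks"] = context.get("risks", []) + ["timeline risk raised by client"]
    return "Summary: " + notes

def draft_proposal(context: dict) -> str:
    """Proposal generator: can only account for risks it can actually see."""
    risks = context.get("risks", [])
    buffer_weeks = 2 * len(risks)  # pad the schedule for each known risk
    return f"Proposal: 12 + {buffer_weeks} week schedule, {len(risks)} known risk(s)"

notes = "Client is worried about the timeline slipping past Q3."

# AI islands: each tool keeps its own context, so the concern never transfers.
summarizer_ctx, proposal_ctx = {}, {}
summarize_meeting(notes, summarizer_ctx)
isolated = draft_proposal(proposal_ctx)  # proposal drafted with 0 known risks

# Shared context layer: both tools read and write the same store.
shared_ctx = {}
summarize_meeting(notes, shared_ctx)
unified = draft_proposal(shared_ctx)     # proposal drafted with 1 known risk

print(isolated)  # Proposal: 12 + 0 week schedule, 0 known risk(s)
print(unified)   # Proposal: 12 + 2 week schedule, 1 known risk(s)
```

The human reconciliation work described above is exactly the gap between those two outputs: someone has to notice the isolated proposal ignored a risk the summarizer already captured.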

Recent research found that 70% of enterprises haven’t moved beyond basic integration for their AI tools. Three in four have experienced at least one negative outcome from disconnected AI systems. This isn’t early-adopter growing pains but a fundamental structural problem.

The Technical Reality: When Agents Add Work

Here’s where it gets worse. Deloitte’s recent analysis on agentic AI identified something they call “workslop”: poorly designed agentic applications that actually add work to a process instead of removing it.

How does that happen? When agents operate without unified context, their outputs require human reconciliation. Someone has to cross-reference the meeting summary against the proposal. Someone has to validate the project status against what they heard in the client call. Someone has to be the integration layer that the technology should have provided.

Deloitte also flags what they call “agent washing”: vendors rebranding existing automation capabilities as “agents” to ride the hype cycle. Many so-called agentic initiatives are actually automation use cases in disguise, applying agents where simpler tools would suffice.

The result? You’ve added cognitive overhead by purchasing the appearance of AI transformation while creating new reconciliation work that didn’t exist before.

The Root Problem: Context Without Architecture

When I talk with technical leaders at consulting firms, we eventually get to the same realization: AI tool collection is a symptom, not the disease.

The disease is that their institutional knowledge has no coherent architecture. It’s scattered across:

  • Individual consultants’ heads (and leaves when they do)
  • Project folders organized by engagement, not by insight
  • Slack threads that disappear into the void
  • Meeting recordings nobody has time to watch
  • Methodologies documented in PDFs that nobody can find

You can bolt as many AI tools onto that chaos as you want. They’ll just accelerate the chaos.

Every agent you deploy is only as intelligent as the context it can access. If your context is fragmented, your agents will produce fragmented outputs. If your knowledge architecture is a mess, your AI will generate confident-sounding mess at scale.

“We’ll Just Connect Everything”

This is usually where someone suggests the integration play: “We'll use connectors to pipe data between tools” or “We’ll aggregate everything into a data lake and build dashboards.”

It sounds reasonable, but it misses the point.

Connectors move data. They don’t create understanding. You can sync your meeting transcripts to your project management system and your CRM, but that doesn’t mean your proposal agent understands the client relationship. You’ve centralized the storage without architecting the meaning.

Data aggregation has the same problem. A lake full of unstructured context is still just a lake. Your agents are swimming in it, not reasoning against it. You’ve given them access to everything and the ability to make sense of very little.

The difference between integration and orchestration is the difference between having all your ingredients in the same kitchen versus having a recipe. One is a prerequisite. The other is the thing that actually produces results.

Context Engineering: The Foundation That Has to Come First

Before you adopt another AI tool, you need to answer three questions:

1. Where does your institutional knowledge actually live? Not where it should live. Where it actually lives. In what systems, in whose heads, in which formats? Take an honest inventory.

2. How would an agent access it? If the answer is “it can’t” or “it would need to query seven different systems with different permissions and no common schema,” you've identified your real problem.

3. How does learning compound? When your team learns something on one engagement, how does that insight become available to the next engagement? To the next team? To the entire firm? If the answer is “it doesn’t” or "it depends on whether someone remembers to tell someone else,” you have a knowledge lottery, not knowledge architecture.

This is what context engineering means: designing the foundational layer that allows AI systems to access, understand, and build on your firm’s collective intelligence.
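As a minimal thought sketch of that foundational layer (the class and method names below are invented for illustration; a real system would add governance, permissions, and semantic retrieval on top of this shape), the core idea is a single store that every agent writes insights to and queries from, so learning captured on one engagement is retrievable on the next:

```python
# Hypothetical sketch of a unified context layer, not a real product API.

class ContextLayer:
    def __init__(self):
        self._insights = []  # every captured insight, in one place

    def record(self, engagement: str, tags: set, insight: str) -> None:
        """Any agent (or human) writes what it learned to the shared layer."""
        self._insights.append(
            {"engagement": engagement, "tags": tags, "insight": insight}
        )

    def query(self, tags: set) -> list:
        """Any agent reads everything relevant, regardless of which
        engagement or tool originally captured it."""
        return [i["insight"] for i in self._insights if tags & i["tags"]]

layer = ContextLayer()

# Engagement A: a delivery team learns something about data migrations.
layer.record("client-a", {"salesforce", "migration"},
             "Legacy IDs must be mapped before the cutover weekend")

# Engagement B: a different team, months later, queries the same topic.
# The insight compounds instead of leaving with the original consultant.
lessons = layer.query({"migration"})
print(lessons)  # ['Legacy IDs must be mapped before the cutover weekend']
```

The three questions above map directly onto this shape: where knowledge lives (the store), how an agent accesses it (the query interface), and how learning compounds (insights from one engagement answering questions on the next).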

The Strategic Sequence

The firms that will win the AI transformation aren’t the ones adopting the most tools. They're the ones building in the right order:

First: Context architecture. Before any AI adoption, establish how your institutional knowledge will be unified, organized, and made accessible. This isn’t a six-month project; it’s a strategic commitment to treating your collective intelligence as a compound asset.

Second: Selective tool adoption. Once your context layer exists, evaluate AI tools against a single criterion: does this compound on our foundation, or does it create another island? Tools that can’t integrate into your knowledge architecture aren’t investments; they’re future technical debt.

Third: Orchestration over collection. The goal isn’t to have the most agents but to have agents that share context, learn from each other’s outputs, and collectively make your firm smarter over time. One well-orchestrated system beats ten disconnected point solutions.

Why This Cycle Is Different

I spent years in enterprise consulting at PwC and IBM, and I’ve seen every technology cycle create the same pattern. Anxiety leads to reactive purchasing, reactive purchasing leads to fragmentation, and fragmentation leads to the next cycle’s “transformation initiative” to clean up the mess.

AI is different in one important way: the fragmentation happens faster. The tools are easier to adopt, the silos multiply quicker, and the reconciliation work piles up before you realize you’ve created it.

The question you should be asking is not “Which AI tools should we buy?” but “What does our knowledge architecture need to look like for AI to actually compound our capabilities?”

That’s a harder question, but it’s the right one.

------

Frequently Asked Questions

What's the difference between agentic bloat and normal technology sprawl?

Traditional tech sprawl is about redundant capabilities and license costs. Agentic bloat is when agents actively produce outputs that require reconciliation work because they can’t share context. You’re not just wasting money, you’re creating new work.

Can’t we just integrate our existing AI tools?

You can connect them, but connection isn’t context. If you’re evaluating integration, ask: after the data syncs, can an agent in Tool A actually reason about what Tool B learned? Or did you just create a more complicated way to still need a human to reconcile the outputs?

How do we know if we have an agentic bloat problem?

Ask your team: when two AI tools produce different information about the same client or project, what happens? If the answer involves manual cross-referencing or “someone has to check,” you’ve got the problem.

What’s context engineering?

Context engineering is designing the foundational layer that allows AI systems to access your firm's institutional knowledge in a unified, governed way. It’s the architecture that enables agents to share understanding rather than operate in silos.


The build vs. buy debate misses the point if you haven’t solved the context problem first. Learn how professional services firms are approaching knowledge architecture in Build vs Buy in the Time of AI. Ready to explore what context engineering looks like for your firm? Schedule a conversation with our team.
