MCP Solved the Pipes. Who Solves the Water Quality?

Patrick Joubert · 5 min read

Tags: mcp · context-graph · agent-reliability · context-engineering · production-infrastructure

The Model Context Protocol won.

97 million monthly SDK downloads. Over 8,500 public servers. Adopted by Anthropic, OpenAI, Google, and Microsoft. Governed by the Agentic AI Foundation under the Linux Foundation, with Apple, Amazon, Block, Cloudflare, Confluent, Intuit, Meta, Microsoft, PayPal, Shopify, and Stripe as platinum members.

MCP is the USB-C of AI agents. Every agent can now connect to every tool, every data source, every service. The universal connector exists. The plumbing problem is solved.

So why are production agent failure rates still between 41% and 87%?

Because the plumbing was never the problem.

What MCP actually does

MCP standardizes three primitives: tools (actions agents can take), resources (data agents can read), and prompts (templates agents can use). It defines how an agent discovers what is available, how it requests access, and how it receives responses.
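
The three primitives are easy to picture with a toy registry. This is an illustrative sketch only, not the real MCP SDK: the names (`ToyServer`, `get_price`, the `catalog://` URI) are invented for the example, and the actual protocol adds discovery, transport, and schema negotiation on top.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToyServer:
    """Toy stand-in for an MCP server's three primitives."""
    tools: dict[str, Callable] = field(default_factory=dict)  # actions agents can take
    resources: dict[str, str] = field(default_factory=dict)   # data agents can read
    prompts: dict[str, str] = field(default_factory=dict)     # templates agents can use

    def tool(self, name: str):
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

server = ToyServer()

@server.tool("get_price")
def get_price(sku: str) -> float:
    return {"A1": 19.99}.get(sku, 0.0)

server.resources["catalog://A1"] = "Widget, $19.99"
server.prompts["billing"] = "Resolve the billing dispute for {customer}."

# An agent discovers what is available, then invokes it:
assert sorted(server.tools) == ["get_price"]
assert server.tools["get_price"]("A1") == 19.99
```

Notice what the sketch delivers: names, callables, and strings. Nothing about validity, relevance, or provenance travels with them, which is the gap the rest of this memo is about.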

This is genuine infrastructure. Before MCP, every agent-to-tool integration was custom. Every API wrapper was bespoke. Every data connection was hand-wired. MCP eliminated that fragmentation.

But MCP is a transport protocol. It answers: "What tools exist? How do I call them? What format does the response come in?"

It does not answer: "Is this data still valid? Does this information apply to this decision? What happens if two sources contradict each other? Can I trace why this context was selected?"

These are not edge cases. These are the questions that determine whether an agent makes a reliable decision or a plausible-sounding wrong one.

The water quality problem

Imagine a municipal water system. Engineers spent years building pipes that connect every building to every water source. The pipes are standardized. The connections are universal. Any building can now receive water from any source.

Nobody built a filtration plant.

The water flows. Some of it is clean. Some of it expired three days ago. Some of it comes from a source that was decommissioned last quarter but still responds to queries. Some of it contradicts water from another pipe. The building receives all of it, mixed together, with no metadata about freshness, source reliability, or contamination risk.

The building's residents drink what arrives and hope for the best.

This is what MCP without context governance looks like in production.

Four things MCP does not do

Temporal validation. MCP delivers data as it exists at query time. It carries no concept of "this data was valid until yesterday" or "this pricing policy was superseded in March." An agent connected via MCP to a knowledge base will receive semantically relevant results regardless of whether those results are current. The protocol does not distinguish between fresh data and expired data because freshness is not part of the transport specification.
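
A minimal sketch of the missing check, assuming context items carry explicit validity windows. That assumption is the point: MCP payloads have no such fields, so the `valid_from`/`valid_until` names here are hypothetical governance metadata, not anything the protocol provides.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ContextItem:
    value: str
    valid_from: datetime
    valid_until: Optional[datetime]  # None = still current

def is_valid(item: ContextItem, now: datetime) -> bool:
    """Admit an item only if 'now' falls inside its validity window."""
    if now < item.valid_from:
        return False
    return item.valid_until is None or now <= item.valid_until

now = datetime(2026, 3, 15, tzinfo=timezone.utc)
old_policy = ContextItem(
    "10% loyalty discount",
    valid_from=datetime(2025, 1, 1, tzinfo=timezone.utc),
    valid_until=datetime(2026, 3, 1, tzinfo=timezone.utc),  # superseded in March
)
current_policy = ContextItem(
    "5% loyalty discount",
    valid_from=datetime(2026, 3, 2, tzinfo=timezone.utc),
    valid_until=None,
)
assert not is_valid(old_policy, now)  # expired data filtered out
assert is_valid(current_policy, now)  # fresh data admitted
```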

Scope binding. MCP gives agents access to everything they are authorized to reach. Authorization is not relevance. A customer service agent authorized to access the full product catalog does not need the full product catalog when resolving a billing dispute. MCP delivers what is available. It does not filter for what is relevant to the specific decision being made.
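
The distinction between authorization and relevance can be sketched as a filter that runs after access control, not instead of it. The scope tags and item names below are illustrative; nothing like them exists in the MCP payload itself.

```python
# Everything below is data the agent is AUTHORIZED to reach via MCP.
AUTHORIZED_CONTEXT = [
    {"id": "invoice-2026-031", "scopes": {"billing"}},
    {"id": "refund-policy",    "scopes": {"billing", "support"}},
    {"id": "full-catalog",     "scopes": {"catalog"}},
    {"id": "hr-handbook",      "scopes": {"hr"}},
]

def bind_scope(items: list[dict], decision_scope: str) -> list[dict]:
    """Deliver only what is relevant to the current decision,
    even though the agent could legally read all of it."""
    return [i for i in items if decision_scope in i["scopes"]]

billing_context = bind_scope(AUTHORIZED_CONTEXT, "billing")
assert [i["id"] for i in billing_context] == ["invoice-2026-031", "refund-policy"]
# The full catalog and HR handbook never reach the context window.
```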

Conflict resolution. When an agent queries two MCP-connected sources and receives contradictory information, MCP delivers both responses. It has no mechanism for evaluating which source is more authoritative, which data is more recent, or which answer should take precedence. The agent must resolve the conflict on its own, typically by defaulting to whatever appeared last in the context window.
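
A structural alternative to "last one in the context window wins" can be sketched in a few lines, assuming each response carries provenance metadata. The authority ranking and the tie-break order are illustrative policy choices; MCP defines neither.

```python
from datetime import date

# Illustrative authority ranking: higher = more authoritative.
AUTHORITY = {"billing-db": 2, "crm-cache": 1}

def resolve(answers: list[dict]) -> dict:
    """Pick one answer structurally: the more authoritative source wins,
    and recency breaks ties. No LLM guesswork involved."""
    return max(answers, key=lambda a: (AUTHORITY.get(a["source"], 0), a["as_of"]))

answers = [
    {"source": "crm-cache",  "as_of": date(2026, 3, 10), "value": "$49"},
    {"source": "billing-db", "as_of": date(2026, 3, 1),  "value": "$59"},
]
winner = resolve(answers)
assert winner["value"] == "$59"  # authority outranks recency under this policy
```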

Decision traceability. MCP logs what was requested and what was returned. It does not record why a particular piece of data was selected for a particular decision, what alternative data was available, or what constraints were active at decision time. When a production incident occurs, the MCP logs show the data flow. They do not show the decision logic.
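
The difference between an API-call log and a decision trace is what gets recorded. A sketch of the latter, with all field names invented for illustration:

```python
import json
from datetime import datetime, timezone

def record_trace(decision_id, active_nodes, traversed_edges, constraints, excluded):
    """Capture WHY context was selected, not just what was requested."""
    return json.dumps({
        "decision_id": decision_id,
        "at": datetime.now(timezone.utc).isoformat(),
        "active_nodes": active_nodes,        # which context was in play
        "traversed_edges": traversed_edges,  # how it was reached
        "constraints": constraints,          # rules active at decision time
        "excluded": excluded,                # available but filtered out, and why
    })

trace = json.loads(record_trace(
    decision_id="disc-042",
    active_nodes=["pricing/v3", "segment/enterprise"],
    traversed_edges=[["customer", "segment"], ["segment", "pricing"]],
    constraints=["max_discount<=15%"],
    excluded=["pricing/v2 (superseded)"],
))
assert trace["excluded"] == ["pricing/v2 (superseded)"]
```

An MCP log would show only the request for pricing data and the response. The trace shows that `pricing/v2` was available, was excluded, and why, which is what an incident review actually needs.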

What a context graph adds above MCP

A context graph is the governance layer above the transport layer. MCP delivers data. A context graph ensures that the data is valid, relevant, consistent, and traceable before it reaches the agent's reasoning.

MCP connects. The context graph validates. Every piece of data entering through MCP passes through the context graph, which checks: Is this within its validity window? Has it been superseded? Does it apply to the current decision scope? Only validated context reaches the agent.

MCP delivers. The context graph scopes. Instead of flooding the agent with everything available, the context graph traverses only the subgraph relevant to the current decision. A pricing decision traverses pricing nodes, customer segment nodes, and active promotion nodes. Not the full catalog. Not the support ticket history. Not the HR policies.

MCP returns responses. The context graph resolves conflicts. When two MCP-connected sources return contradictory data, the context graph resolves the conflict using provenance metadata and temporal priority. The more authoritative source wins. The more recent data wins. The resolution is structural, not left to the LLM's best guess.

MCP logs requests. The context graph traces decisions. Every decision produces a trace: which context nodes were active, which edges were traversed, which constraints applied, and what data was excluded. This is not a log of API calls. It is a replayable record of the decision.
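
The four roles above compose into a single governance pass over whatever MCP delivers: validate, then scope, then resolve, then trace. A compressed sketch, with every field name, ranking, and policy an illustrative assumption:

```python
from datetime import date

def govern(raw_items, *, today, scope, authority):
    """One pass: only valid, in-scope, conflict-resolved context survives,
    and a trace records what was admitted and what was excluded."""
    valid = [i for i in raw_items
             if i["valid_until"] is None or i["valid_until"] >= today]
    scoped = [i for i in valid if scope in i["scopes"]]
    best = {}  # one winner per logical key: authority first, recency tiebreak
    for i in scoped:
        rank = (authority.get(i["source"], 0), i["as_of"])
        cur = best.get(i["key"])
        if cur is None or rank > (authority.get(cur["source"], 0), cur["as_of"]):
            best[i["key"]] = i
    trace = {
        "admitted": sorted(best),
        "excluded": [i["key"] for i in raw_items if i not in best.values()],
    }
    return list(best.values()), trace

items = [
    {"key": "price", "source": "billing-db", "as_of": date(2026, 3, 1),
     "valid_until": None, "scopes": {"billing"}},
    {"key": "price", "source": "crm-cache", "as_of": date(2026, 3, 10),
     "valid_until": None, "scopes": {"billing"}},
    {"key": "promo", "source": "crm-cache", "as_of": date(2026, 1, 5),
     "valid_until": date(2026, 2, 1), "scopes": {"billing"}},  # expired
]
context, trace = govern(items, today=date(2026, 3, 15), scope="billing",
                        authority={"billing-db": 2, "crm-cache": 1})
assert [i["source"] for i in context] == ["billing-db"]  # authoritative price wins
assert "promo" in trace["excluded"]                      # expired item traced out
```

Only `context` ever reaches the model's reasoning; `trace` is what you replay after an incident.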

The architecture stack

The agent stack in 2026 has three layers, not two:

| Layer | Function | Solved by |
| --- | --- | --- |
| Transport | Connecting agents to tools and data | MCP |
| Governance | Validating, scoping, and tracing context | Context graph |
| Reasoning | Making decisions from validated context | LLM |

Most production systems today have layers 1 and 3. They connect agents to everything and let the model reason over whatever arrives. Layer 2 is missing. And layer 2 is where reliability lives.

Without governance, transport at scale is a liability. The more sources you connect, the more unvalidated data floods the context window. The more tools you enable, the more potential for stale schemas, expired contracts, and scope pollution. MCP's success makes the context graph more necessary, not less.

The MCP roadmap confirms the gap

The MCP specification continues to evolve. Authentication, authorization, streaming, and server discovery are on the roadmap. These are transport improvements. They make the pipes more secure, more efficient, and easier to find.

None of them address temporal validity. None of them add provenance tracking. None of them implement scope binding or conflict resolution or decision traceability. These are not transport concerns. They are governance concerns. They belong in a different layer.

This is not a criticism of MCP. It is a recognition that MCP solved exactly the problem it set out to solve. The problem it did not set out to solve is the one that causes 80% of production agent failures.

The question nobody is asking

Every team adopting MCP in 2026 is asking: "How do we connect our agents to more data sources?"

Almost nobody is asking: "How do we ensure the data that reaches our agents is valid, relevant, scoped, and traceable?"

The first question is a transport question. MCP answers it.

The second question is a governance question. A context graph answers it.

The teams that will run reliable agents at scale are the ones asking both questions. The ones that only ask the first will connect their agents to everything and wonder why the outputs keep degrading.

MCP solved the pipes. The context graph solves the water quality. You need both.

Cite this memo

Patrick Joubert. (2026). "MCP Solved the Pipes. Who Solves the Water Quality?" The Context Graph. https://thecontextgraph.co/memos/mcp-solved-the-pipes-who-solves-water-quality
