On March 26, 2026, Anthropic did something that most companies in a competitive race never do. They gave away one of their strongest assets.

Anthropic donated the Model Context Protocol — MCP — to the Linux Foundation. Not a partial contribution. Not a “we’ll open-source it but keep governance.” A full donation to a neutral foundation, with co-governance shared across companies that compete with each other every day.

The Linux Foundation responded by forming the Agentic AI Foundation (AAIF), co-founded by Anthropic, Block, and OpenAI. The governing board includes Google, AWS, Microsoft, Cloudflare, and Bloomberg.

Read that list again. Anthropic and OpenAI co-governing a protocol. Google and Microsoft sitting on the same board. AWS and Cloudflare in the same foundation. These companies fight for market share in every other context. Here, they’re building shared infrastructure.

This is the moment MCP stopped being Anthropic’s protocol and became the industry’s protocol. And if you’re building anything in the agentic AI space, the implications are significant.

Why Neutral Governance Changes Everything

When a protocol is owned by a single company, adoption carries risk. Build on it, and you’re dependent on that company’s roadmap, pricing decisions, and strategic pivots. Developers have learned this lesson the hard way — with APIs that got deprecated, platforms that changed terms, and protocols that got acqui-hired into oblivion.

When competing companies co-govern a protocol through a neutral foundation, that risk evaporates. The protocol becomes infrastructure.

HTTP wasn’t owned by Netscape. TCP/IP wasn’t owned by Sun. SMTP wasn’t owned by any email provider. These protocols became the backbone of the internet precisely because no single company controlled them. Developers built on them with confidence because the governance model guaranteed stability.

MCP just joined that category. Not because it’s as mature as TCP/IP — it isn’t yet — but because the governance structure now mirrors the same pattern that made those protocols durable. When your competitors are co-governing the protocol with you, nobody can pull the rug.

The Numbers Tell the Story

The adoption curve before the donation was already steep. After it, the trajectory is clear.

MCP SDK downloads have crossed 97 million monthly across the Python and TypeScript packages combined. Over 10,000 active public MCP servers are registered. The TypeScript SDK is at v1.29.0. The C# SDK shipped its 1.0. The Python SDK is stable.

On the client side, MCP is now integrated into ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code. That’s not a niche developer tool protocol. That’s the primary interface layer for every major AI coding and productivity environment.

The adoption numbers matter because they represent the network effect threshold. When every major AI tool speaks MCP, the cost of not supporting MCP becomes higher than the cost of supporting it. That’s when a protocol stops being optional and starts being assumed — the same way HTTPS went from “nice to have” to “required for any serious web application.”

What AAIF Actually Governs

MCP isn’t the only project under the Agentic AI Foundation. The scope is broader than a single protocol.

Block contributed goose, their open-source AI agent framework. OpenAI contributed AGENTS.md, their specification for agent behavior. Together with MCP, these three projects represent different layers of the same stack:

  • MCP defines how AI systems connect to tools, data, and context
  • AGENTS.md defines how agents describe their behavior and capabilities
  • goose provides a reference implementation for building agents that use both

The foundation isn’t just building a transport protocol. It’s building the full standard stack for agentic AI — connection, behavior, and execution. That’s a deliberate architectural decision. It means the interoperability story extends beyond “tools can connect” to “agents can describe themselves and coordinate.”

For developers, this means the standards surface is expanding. Today you build an MCP server. Tomorrow you publish an AGENTS.md alongside it. The day after, agents discover your tool, understand its capabilities, and integrate it — all through foundation-governed standards.
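The AGENTS.md piece of that stack is deliberately lightweight: it’s plain Markdown that agents read for guidance, with no mandated schema. A hypothetical file — every section name and command below is invented for illustration — might look like:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install` before running anything.

## Conventions
- Use TypeScript strict mode; never edit generated files in `dist/`.

## Testing
- Run `npm test` and make sure it passes before proposing changes.
```

Because it’s just Markdown, publishing one alongside your MCP server costs almost nothing — which is exactly why it can spread.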

The Pipe vs. What’s in the Pipe

Here’s the distinction that matters most for anyone evaluating their stack.

MCP defines HOW AI systems connect to tools and context. It specifies the transport layer — the protocol for sending requests, receiving responses, and streaming data between AI clients and servers. It does this well, and with AAIF governance, it will do it reliably for years.
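Concretely, MCP messages are JSON-RPC 2.0 payloads exchanged between client and server. A minimal stdlib-only sketch of what a tool invocation looks like on the wire — the tool name, arguments, and result text here are hypothetical, not from any real server:

```python
import json

# A hypothetical MCP tools/call request. MCP messages follow
# JSON-RPC 2.0: a version tag, an id, a method, and params.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",  # hypothetical tool name
        "arguments": {"query": "quarterly revenue"},
    },
}

# Serialize for whatever transport is in use (stdio, HTTP).
wire = json.dumps(request)

# The server's response reuses the same id so the client can
# match responses to in-flight requests.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "example result"}]},
}
```

The point is how little the protocol itself constrains: it standardizes the envelope, not what the server does to produce `result`.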

What MCP does not define is HOW SMART the thing on the other end of the connection is.

A filing cabinet connected via MCP is still a filing cabinet. It stores documents. You ask for one. It hands it back. The protocol doesn’t change the intelligence of the retrieval — it just standardizes the connection.

An intelligence pipeline connected via MCP is fundamentally different. It classifies your request before deciding what to retrieve. It routes to the right information source based on intent, not just keyword similarity. It delivers a targeted briefing instead of a document dump. The protocol is the same. What happens behind the server endpoint is not.
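The contrast can be sketched in a few lines. Everything below is illustrative — the function names, the one-rule classifier, and the routing table are invented, not any particular product’s implementation:

```python
def filing_cabinet(request: str, store: dict) -> str:
    # Plain retrieval: hand back whatever matches, or nothing.
    return store.get(request, "")

def intelligence_pipeline(request: str, sources: dict) -> str:
    # 1. Classify the request before touching any data
    #    (a real classifier would go far beyond this toy rule).
    intent = "billing" if "invoice" in request else "general"
    # 2. Route to the source that matches the intent.
    source = sources[intent]
    # 3. Deliver a targeted briefing, not a document dump.
    return f"[{intent}] {source(request)}"
```

Both sit behind the exact same MCP endpoint; a client can’t tell them apart until it sees what comes back.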

This is the shift that standardization accelerates. When the connection problem is solved — when every AI tool can connect to every MCP server through a universal interface — the differentiation moves up the stack. The question stops being “does it connect?” and becomes “how smart is what it delivers?”

Every MCP server that stores and retrieves is now competing on the same interface. The same transport. The same client support. That commoditizes the connection layer. It does not commoditize the intelligence layer. Classification, routing, learning, delivery quality — these become the axes of competition.

What This Means for Developers

The practical implications break into three categories depending on what you’re building.

If you’re building MCP servers: your distribution surface just became universal. Every major AI client supports MCP. You don’t need separate integrations for Claude, ChatGPT, Gemini, and Copilot. Build one server, reach every platform. The AAIF governance guarantees that this surface is stable — no single vendor can change the protocol out from under you.

If you’re building on top of AI tools: vendor lock-in on the transport layer is gone. Your application can connect to any AI platform through the same protocol. If you’re evaluating Claude today and want to test Gemini tomorrow, the integration layer doesn’t change. Your MCP servers work with both. This is the portability promise that developers have been asking for since the first AI APIs launched.

If you’re evaluating context and memory tools: the evaluation criteria just shifted. Before AAIF, “does it integrate with my AI tool?” was a legitimate question. Now, everything integrates with everything — MCP handles that. The new question is: “What happens between the request and the response? How does this tool decide what to deliver? Does it just retrieve, or does it actually understand what I need?”

The Enterprise Timeline

The timing of the AAIF formation isn’t accidental. It aligns with an enterprise adoption wave that’s already in motion.

Gartner predicts that 40% of enterprise applications will feature task-specific AI agents by the end of 2026 — up from less than 5% in 2025. That’s not incremental growth. That’s at least an eight-fold increase in twelve months.

Those agents need context. They need to connect to enterprise data, internal tools, knowledge bases, and workflow systems. Without a standard protocol, every agent-to-tool connection is a custom integration. At enterprise scale — hundreds of agents, thousands of tools — custom integrations don’t work. The combinatorial complexity is unsustainable.
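The back-of-envelope arithmetic makes the point — the agent and tool counts below are hypothetical, chosen only to show the shape of the curve:

```python
# Hypothetical enterprise scale: 200 agents, 1,000 tools.
agents, tools = 200, 1_000

# Without a standard, every agent-to-tool pair is its own
# custom integration: the cost grows multiplicatively.
point_to_point = agents * tools   # 200,000 integrations

# With one shared protocol, each side implements it once:
# the cost grows additively.
via_protocol = agents + tools     # 1,200 implementations
```

Multiplicative versus additive is the whole argument: add one more tool and the first model costs 200 new integrations, the second costs one.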

MCP under AAIF governance solves the connection problem at scale. One protocol. Universal support. Stable governance. Enterprise IT teams can adopt it with the same confidence they give to HTTP or OAuth — it’s a foundation-governed standard, not a vendor’s product.

But the connection is only the pipe. What flows through it determines whether those agents actually produce results. A task-specific agent connected to a context server that returns the twenty most similar documents is going to struggle. An agent connected to a context server that classifies the task, identifies exactly what information is needed, and delivers a targeted intelligence packet is going to perform.

The enterprises that figure out this distinction — pipe vs. what’s in the pipe — will be the ones that land in Gartner’s “achieved ROI” column instead of the cancellation pile.

Where grāmatr Fits

grāmatr is an MCP-native context engineering platform. Its patent-pending pre-classification routing delivers targeted intelligence packets instead of raw retrieval. The protocol standardization means it works with every AI tool in the ecosystem through a single integration surface.

If you want to understand how context engineering works on top of MCP, start with how grāmatr approaches it.


The Agentic AI Foundation is governed by the Linux Foundation. MCP, goose, and AGENTS.md are projects under AAIF governance. All source links and adoption metrics cited in this post are current as of April 2026.