Model Context Protocol (MCP): The future USB-C of enterprise AI?
Is MCP a fad or the beginning of a shift in how enterprise AI systems get things done?
As generative AI becomes increasingly embedded in enterprise systems, the challenge shifts from “What can AI do?” to “How do we let it do it safely and intelligently?” This is where the Model Context Protocol (MCP) could play a role.
In this issue of The AI Ultimatum, we unpack MCP: what it is, how it works and why it is likely to matter more and more in AI automation initiatives.
What is the Model Context Protocol (MCP)?
The Model Context Protocol was introduced by Anthropic in November 2024. It’s a standard that promises to make integrating AI with enterprise tools as seamless as plugging in a USB device.
It’s designed to let Large Language Models (LLMs) dynamically discover and interact with external tools, APIs and data sources in a safe and structured way.
Imagine an LLM, like GPT-4o, that can call a customer database, trigger a Slack message, or run a supply chain report, all without bespoke integrations or hardcoded logic. MCP is the abstraction layer that makes this happen. It allows LLMs to describe what they want to do in natural language, and lets an MCP server match that intent with the right tool, run it, and return the result.
It’s the “operating system interface” for tools in an AI-driven world. It’s gaining interest because it solves some of the biggest headaches in enterprise AI deployments: governance, integration overhead, scalability and safety.
How does MCP work?
At the heart of MCP is a client-server architecture:
LLMs generate structured requests (e.g. “call tool XYZ with this input”) based on user prompts or internal reasoning.
The MCP server acts as a gateway. It exposes available tools to the model (via the tools/list endpoint) and executes tool requests securely (via tools/call).
Tools are described using JSON schemas that define their name, description, inputs and expected outputs. This allows the LLM to reason about how to use them, without knowing implementation details.
The LLM doesn’t “hardcode” the tool it wants; it infers what it needs, and the MCP server matches that to what’s available. This makes it possible to swap tools, add new capabilities and manage access centrally.
You give each tool a title and a description. Then, when a host application receives a user input, it can go through the list of tools, read their descriptions and decide whether it should use one and, if so, which one.
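To make this concrete, here is a minimal sketch of what a tool description looks like: a name, a human-readable description, and a JSON Schema for the inputs. The tool name and fields are illustrative examples, not taken from any real server.

```python
import json

# Illustrative tool description, in the general shape a tools/list response
# exposes: name, description, and a JSON Schema describing the inputs.
check_availability_tool = {
    "name": "check_availability",  # hypothetical tool name
    "description": "Check a person's free/busy status for a time window.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "person": {"type": "string", "description": "Who to check."},
            "window": {"type": "string", "description": "e.g. 'Thursday morning'"},
        },
        "required": ["person", "window"],
    },
}

# The model only ever sees this contract, never the implementation behind it.
print(json.dumps(check_availability_tool, indent=2))
```

Because the description and schema are all the model sees, writing them clearly is what determines whether the model picks the right tool.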
Example: AI assistant booking a meeting
Let’s say you ask your AI assistant:
“Book a meeting with Tom for Thursday morning and send him a calendar invite.”
Here’s what would happen if you’re using MCP:
LLM analyzes the request and infers that it needs:
Access to Tom’s availability
A calendar tool to create an event
An email or messaging tool to send an invite
LLM queries the MCP server: “List tools that can read calendars, create events, and send messages.”
MCP server responds with tool options:
check_availability
create_event
send_message
LLM invokes tools with the appropriate inputs:
check_availability (Tom, Thursday morning)
create_event (subject, time, participants)
send_message (Tom, 'Invite sent')
The MCP server executes the tools and returns the results to the model, which responds: “Meeting with Tom confirmed for Thursday at 10am. Invite sent.”
No need for the LLM to “know” your exact setup. That’s the power of MCP.
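The whole round trip above can be sketched as a toy in-process server. Everything here is an illustrative stand-in, not the real MCP SDK or wire protocol: the tool implementations are one-line lambdas, and the “plan” a real LLM would infer from the tool descriptions is scripted by hand.

```python
# Toy sketch of the MCP request flow for the meeting-booking example.

class ToyMCPServer:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, func):
        self._tools[name] = {"description": description, "func": func}

    def tools_list(self):
        # Mirrors tools/list: expose names and descriptions, not implementations.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def tools_call(self, name, arguments):
        # Mirrors tools/call: the server, not the model, runs the code.
        return self._tools[name]["func"](**arguments)

server = ToyMCPServer()
server.register("check_availability", "Read a calendar for free slots",
                lambda person, window: f"{person} is free Thursday 10am")
server.register("create_event", "Create a calendar event",
                lambda subject, time, participants: f"Event '{subject}' at {time}")
server.register("send_message", "Send a message to a person",
                lambda to, text: f"Sent to {to}: {text}")

# A real LLM would derive these three calls from the tool list; here they
# are hard-coded to show the shape of the interaction.
slot = server.tools_call("check_availability",
                         {"person": "Tom", "window": "Thursday morning"})
event = server.tools_call("create_event",
                          {"subject": "Meeting with Tom",
                           "time": "Thursday 10am", "participants": ["Tom"]})
confirmation = server.tools_call("send_message",
                                 {"to": "Tom", "text": "Invite sent"})
print(slot)
print(confirmation)
```

Note that swapping the calendar backend only means re-registering a tool on the server; nothing on the model side has to change.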
Why MCP could be a game-changer for enterprises
Integration Made Easy. Instead of hand-wiring every tool into every model, enterprises can expose tools once through the MCP server. This slashes development time, reduces duplication and creates a scalable integration layer that you can reuse across use cases.
Dynamic, Adaptable AI Agents. LLMs can adjust to new tools on the fly. Add a new expense-reporting system or a proprietary forecasting API and the model will “learn” to use it by reading its schema. No retraining needed.
Governance and Control. MCP puts the enterprise in control. You decide which tools are exposed, how they’re used and who can approve requests. That means safer, more auditable AI deployments.
Security by Design. MCP separates intent from execution. LLMs don’t get raw access to databases or internal systems, they go through a broker that handles auth, rate limits and data controls.
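A rough sketch of what “separating intent from execution” can look like in practice: the broker checks an enterprise-defined allow-list and a simple per-tool call budget before anything runs. The policy values and tool names are hypothetical, and real deployments would add authentication and auditing on top.

```python
# Illustrative broker: policy checks happen before any tool executes.
ALLOWED_TOOLS = {"check_availability", "create_event"}  # set by the enterprise
MAX_CALLS_PER_TOOL = 2  # toy per-tool budget

call_counts = {}

def brokered_call(name, arguments, execute):
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{name}' is not exposed to this model")
    call_counts[name] = call_counts.get(name, 0) + 1
    if call_counts[name] > MAX_CALLS_PER_TOOL:
        raise RuntimeError(f"Rate limit exceeded for '{name}'")
    return execute(**arguments)

result = brokered_call("check_availability", {"person": "Tom"},
                       lambda person: f"{person} is free")
print(result)  # "Tom is free"

# A tool outside the allow-list never runs, whatever the model asks for.
try:
    brokered_call("drop_tables", {}, lambda: None)
except PermissionError as e:
    print(e)
```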
The AI Ultimatum
MCP could be quietly laying the foundation for the next wave of enterprise AI: autonomous agents that operate across your systems, safely and intelligently.
The ultimatum for enterprises is clear: either continue to build and maintain a spaghetti mess of one-off integrations that don’t scale… or adopt a standard like MCP that gives you modularity, control and flexibility, with AI that truly works across your business.
This isn’t just a technical protocol. It’s a philosophical shift: from hardcoded bots to adaptive agents, from rigid APIs to contextual understanding, from siloed systems to orchestrated intelligence.
MCP might not be optional in the long run. Now’s the time to get familiar with it.
I think of MCP as another layer on top of the OSI reference model (or TCP/IP, if you prefer). It is another push towards higher levels of abstraction: while OSI defines everything from cables and signals up to application protocols like HTTP, MCP, for me, sits on top of all that, managing “understanding”, that is, what you are trying to do.