
Disruptive Innovation or Industry Buzz? Understanding Model Context Protocol’s Role in Data-Driven Agentic AI

Last Published: Jan 05, 2026


Model Context Protocol (MCP) is an open-source standard for connecting AI applications to external systems, tools, and data sources. Often described as a "universal remote" or "USB-C for AI," MCP enables AI models to seamlessly interact with databases, APIs, file systems, and enterprise services without requiring custom integrations for each connection.

Before MCP, every AI agent needed unique, hand-coded integration logic for every external service it accessed. This created an unsustainable development burden and vendor lock-in. MCP solves this by establishing a standardized protocol built on JSON-RPC 2.0 that any compatible AI model can use to connect with any compatible external tool.

Disclaimer: This article is not intended as a tutorial. Instead, it aims to demystify Model Context Protocol, clarify common misconceptions, and share an architect's insights into its role in the evolving world of agentic AI. I will provide a high-level overview of MCP's architecture and explore how it works alongside high-quality master data to drive scalable enterprise AI.

MCP: What's in a Name?

To understand MCP, let's break down the acronym itself:

Understanding the MCP Acronym

Model – Designed for AI Language Models

This protocol is designed for AI models, specifically large language models (LLMs), although it can also serve small language models (SLMs) in specific cases. Its primary audience isn't a human developer or another traditional application. It's the model itself.

Context – Providing Rich Metadata and State

The core purpose of MCP is to provide the LLM with relevant, structured context. This includes, but is not limited to, information about available tools (functions or capabilities exposed by a server), user identity, data structures, database schemas, device state, preferences, environment variables, session state, and semantic labels. This metadata enhances LLM comprehension, enabling relevant, timely, and personalized responses.

Protocol – Standardized Communication Rules

This refers to the standardized set of rules and formats (built on JSON-RPC 2.0) that govern communication between the AI application (the Host/Client) and the external service (the Server). This standardization ensures that any compatible AI model can seamlessly integrate with any compatible external tool.
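
To make the wire format concrete, here is a minimal Python sketch of the JSON-RPC 2.0 envelope MCP messages travel in. The flight/search tool name and its arguments are hypothetical, used only for illustration:

```python
import itertools
import json

_ids = itertools.count(1)

def make_request(method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A tools/call request asking a (hypothetical) server to run flight/search.
req = make_request("tools/call", {
    "name": "flight/search",
    "arguments": {"origin": "JFK", "destination": "LHR"},
})
print(json.dumps(req, indent=2))
```

Every request carries the protocol version, a unique id for matching responses to requests, a method name, and optional parameters; responses echo the same id back.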

Together, these elements create a powerful synergy: the model uses rich context, delivered through a platform-agnostic, standardized protocol, to connect effortlessly with external tools and services.

MCP as the “USB-C for AI Applications”

The Model Context Protocol (MCP) provides universal interoperability for AI, much like the USB-C standard that revolutionized device connections.

Before USB-C, every function needed a specific cable. Before MCP, every LLM needed custom integration for every external service. MCP solves this by creating a standardized protocol. A single MCP server can be built once and plugged into any MCP-aware client, instantly giving the AI model access to that tool (a function or capability exposed by a server) without writing new logic. This standardization is a game-changer for scalability and community sharing. It allows developers to leverage pre-built connectors instead of reinventing the wheel for every new project.

MCP Architecture: Core Components

Now that we have a basic understanding of MCP, let’s get acquainted with some of the primary components of its architecture.

The MCP framework is built on a client-server model with a key orchestrator, the Host.

The Host Application

The host is the main AI application or interface that coordinates and manages one or more MCP clients. It is what the end user interacts with: a chatbot, an IDE extension, or a custom web app. The host contains and invokes the LLM. Examples of MCP hosts include Claude Code, a VS Code extension, or Cursor.

MCP Client

This component runs within the host application and acts as an intermediary, managing the connection to one or more MCP servers. It works alongside the large language model but should not be confused with the model itself. The client communicates using the MCP protocol, sending requests from the LLM to the appropriate server and relaying responses back. Its responsibilities include checking the server's capabilities and working with the LLM to determine which action to take based on the user's request; it then makes the appropriate request and processes the response.

MCP Server

This is a separate program that wraps a specific data source or functionality behind the MCP standard. The server "exposes" a set of Tools, Resources, and Prompts. For example, a server might expose your local file system, a CRM database, or a third-party API like a weather service. It handles incoming requests (like "read this resource" or "execute this tool") and returns the results.

MCP Tools

These are the specific functions an AI can call to perform actions. Examples include readFile, sendSlackMessage, identifyCustomer, getCustomerDetails, and listProducts. They are defined with a clear schema for inputs and outputs.
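
For instance, a tool definition pairs a name and description with a JSON Schema for its inputs. The sketch below follows the general MCP tool shape; the getCustomerDetails tool and its schema are illustrative assumptions, not a real server's definition:

```python
import json

# Sketch of a tool definition: name, description, and a JSON Schema
# describing the expected input. The schema lets the client validate
# arguments before the server executes the call.
get_customer_details = {
    "name": "getCustomerDetails",
    "description": "Return the master-data record for a customer by ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customerId": {"type": "string", "description": "Unique customer ID"},
        },
        "required": ["customerId"],
    },
}

print(json.dumps(get_customer_details, indent=2))
```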

MCP Resource

An MCP Resource is a data object that an AI can read or reference. It represents a piece of information, such as a file, a database schema, or application-specific data. Each resource is uniquely identified by a URI. The goal is to provide the LLM with factual, up-to-date information from external systems, helping to ground its responses and reduce hallucinations. Resources can contain either text or binary data.
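
A resource descriptor might look like the sketch below. The crm:// URI scheme and the order-history record are hypothetical, chosen to echo the customer-support example later in this article:

```python
# Sketch of a resource descriptor: a URI-addressable piece of read-only
# context. The crm:// scheme and the record itself are hypothetical.
order_history = {
    "uri": "crm://customers/CM100235/orders",
    "name": "Order history for customer CM100235",
    "mimeType": "application/json",
}

# A client would request the resource by its URI and hand the contents
# to the LLM as grounding context.
print(order_history["uri"])
```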

MCP Prompt

An MCP Prompt is a structured template for an action. It guides the user or the LLM on how to perform a common or multi-step task. It's less about reading data and more about doing something. The goal is to simplify complex interactions by providing a pre-defined structure for inputs, ensuring that the AI gets all the necessary information to complete a task successfully. Prompts in MCP are therefore for performing guided actions (e.g., "Help me create a bug in JIRA"); the AI uses this prompt to ask you for the required title and description before executing the action.
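
Because a prompt declares its arguments up front, the client knows exactly which fields to collect before the task runs. A minimal sketch of the JIRA example (the prompt name and fields are hypothetical, following the general MCP prompt shape):

```python
# Sketch of a prompt definition for the JIRA example above. The client
# reads `arguments` and asks the user for each required field before
# the task is executed.
create_jira_bug = {
    "name": "create_jira_bug",
    "description": "Guide the user through filing a bug in JIRA.",
    "arguments": [
        {"name": "title", "description": "Short summary of the bug", "required": True},
        {"name": "description", "description": "Steps to reproduce", "required": True},
    ],
}

missing = [a["name"] for a in create_jira_bug["arguments"] if a["required"]]
print("Fields to collect:", missing)
```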

How MCP Client and Server Communicate

MCP uses JSON-RPC as its underlying message format, which is a lightweight, language-agnostic protocol for remote procedure calls. This means the client and server exchange messages in a structured JSON format.

For transport, MCP supports two main mechanisms:

STDIO (Standard Input/Output)

This transport is limited to local environments; it cannot be used across machines or networks. The host application simply launches the server as a subprocess and communicates with it using standard input and output streams. This is secure and requires no network configuration.
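
The mechanics can be sketched in a few lines of Python. The "server" below is a stand-in subprocess that answers a single tools/list request over newline-delimited JSON; it is not a real MCP server, and the flight/search tool name is hypothetical:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC request from stdin, writes one
# response to stdout, then exits.
SERVER_CODE = r'''
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"],
        "result": {"tools": [{"name": "flight/search"}]}}
sys.stdout.write(json.dumps(resp) + "\n")
sys.stdout.flush()
'''

# The host launches the server as a subprocess and talks to it over
# its stdin/stdout pipes -- no network configuration required.
proc = subprocess.Popen([sys.executable, "-c", SERVER_CODE],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
print(response["result"]["tools"][0]["name"])
proc.wait()
```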

Streamable HTTP

Built for remote, distributed environments where client and server reside on different machines or networks. The client communicates with the server over standard HTTP/HTTPS connections. It is a modern communication standard that allows flexible, real-time data streaming. Unlike the traditional request-and-response model, it enables a server to push continuous updates or multiple messages to a client without the client needing to constantly request (or "poll") new information, which is essential for long-running tasks or streaming results.

Alright, now that I have covered the basics of MCP, let’s explore the agentic world without MCP.

In the early days of AI, an LLM was a brain in a jar. It could reason and generate text, but it couldn't "see" or interact with the outside world. It was a closed system. The problem became even more apparent with the rise of AI agents, which need to perform complex tasks in real-world environments. They need to access data, use tools, and interact with systems they weren't trained on.

Without a standardized way to manage this, developers faced a chaotic, one-off integration nightmare. Each agent or application required a new, custom solution for every single tool or data source. This is where MCP comes in. It's a structured approach to giving LLMs the context and tools they need to act effectively.

The Agentic AI Challenge Before MCP

Here are some practical challenges that arise in an agentic world without a protocol like MCP:

Challenge 1: Context Loss Across Interactions

Critical metadata such as session info, user preferences, or device state is missing, so responses cannot be tailored in a timely, personalized way.

Example: Customer Support Chatbots

Imagine a customer interacting with a support AI Agent on a retail website. The customer begins by asking about the status of a previous order and then shifts to inquire about return policies. Without a standard context protocol, the bot treats each user input as a new, independent query. It doesn’t recall the order ID or previous issues discussed.

Impact and Traditional Solutions

The customer must repeat information multiple times ("My order number is CM100235") during the conversation, leading to frustration and inefficiency. Addressing this issue traditionally requires custom coding or advanced frameworks such as LangChain or LangGraph, along with memory-management techniques.

How MCP Resolves Context Loss

Using MCP, the AI Agent retains and shares contextual information (order ID, customer history, session state) dynamically between components, so the bot “remembers” prior interaction details and provides seamless assistance.

Challenge 2: Inconsistent Multi-Channel Experience

The same user switching from web chat to voice assistant gets disjointed experiences without a shared context.

Example: Online Flight Booking

A user begins booking a flight on a mobile app chatbot, selecting city, dates, seat preferences, and payment options. Later, the same user calls customer support and interacts with a voice assistant for meal preference and baggage options. Since the voice assistant has no access to the prior chat session context, it asks the user to repeat flight details, seat class, and preferences from scratch. This stateless communication causes frustration and errors (e.g., mismatched flight dates or wrong seat allocation). Furthermore, backend airline APIs are integrated individually with each interface, making synchronization difficult.

Impact on Customer Experience

The customer repeats the information for each interaction channel, or different agents provide inconsistent or conflicting responses.

How MCP Resolves Cross-Channel Consistency

MCP standardizes the context format and shares it across systems, enabling each channel to synchronize user context. So, when the customer calls or uses the app, agents or systems see the same up-to-date interaction history, preferences, and status for consistent, personalized service.
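
The idea can be sketched with a shared session store that both channels read and write. Everything here (the store, the customer ID, the field names) is hypothetical, standing in for context shared via MCP resources:

```python
# Hypothetical shared session store keyed by customer ID. Both the web
# chatbot and the voice assistant read and write the same context,
# instead of each channel keeping its own stateless view.
SESSION_STORE = {}

def update_context(customer_id, channel, **fields):
    ctx = SESSION_STORE.setdefault(customer_id, {"history": []})
    ctx.update(fields)
    ctx["history"].append(channel)
    return ctx

# Web chat captures the booking details...
update_context("CUST-42", "web", origin="JFK", destination="LHR", seat="12A")
# ...and the voice assistant later sees the same state.
ctx = update_context("CUST-42", "voice")
print(ctx["seat"], ctx["history"])
```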

Challenge 3: No Interoperability Between AI Providers

Example: Switching LLM Providers

A company builds a powerful agent that can manage their Google Drive files. When they decide to switch to a different LLM provider (e.g., from OpenAI to Anthropic), they find that their Google Drive integration is completely incompatible, forcing a costly and time-consuming migration.

Impact of Vendor Lock-In

When a company switches LLM providers (e.g., from OpenAI to Anthropic), traditional AI integrations like Google Drive often break because each provider requires custom, tightly coupled API and prompt logic. This leads to costly rewrites, migration delays, and vendor lock-in.

How MCP Provides Universal Compatibility

MCP acts as a universal, standardized interface between the AI model and external tools such as Google Drive. By wrapping Google Drive functionality behind an MCP-compliant server with a common protocol for context and tool interaction, the integration becomes plug-and-play and agnostic to the underlying LLM provider.

Model Context Protocol (MCP) in Practice

Let’s examine a practical example of how MCP functions in the agentic world. While the process can be somewhat complex, I will try to explain it in simple terms and avoid technical details.

The Flight Booking Scenario

Let's say a user asks an AI assistant to "Find a flight from New York to London next Friday and book the non-stop option."

  • The Host captures the user's request “Find a flight from New York to London next Friday and book the non-stop option” in natural language and prepares to process it with its Large Language Model (LLM).
  • The Host environment maintains a service registry or configuration file listing all available MCP Servers it can communicate with. When the Host application starts, it reads this registry. For each registered server, the Host creates a corresponding MCP Client instance. Each Client instance acts as a dedicated translator assigned to a specific server. For example, let’s say one client is instantiated for the 'Flight Booking' server (the name may vary based on configuration).
  • The Client then sends a tools/list request to the server, which returns a list of available tools, like flight/search and flight/book. In this way, the Client verifies which actions the Flight Server can perform.
  • The LLM, seeing the flight/search tool and the user's intent, plans to call that tool first. So, the AI determines the first logical step, search for flights with parameters (origin, destination, date).
  • The Client formats LLM’s request into a standardized MCP call and sends it to the Flight Server. Under the hood, Client sends a tools/call request to the server with the appropriate parameters for the flight search.
  • The Flight Server executes the call, queries the Airline API (MCP Tool), and returns a list of flight options as a structured result. This means that the Server translates the MCP call into a native API request, gets the raw data, and sends back a curated list of top flights.
  • The LLM receives the results, identifies the non-stop option as requested by the user, and plans its next step: calling the flight/book tool with the selected flight ID. The AI autonomously manages the booking workflow based on the search results and user criteria, including handling payment/confirmation elicitation (an MCP client feature) as needed.
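
The steps above can be compressed into a short sketch. The in-process "server" and the hard-coded plan are stand-ins for a real MCP Flight Server and the LLM's reasoning; the tool names and flight data are hypothetical:

```python
# Hypothetical in-process stand-in for an MCP Flight Server. A real setup
# would route these calls over the protocol; the "LLM's" planning is
# hard-coded here for illustration.
TOOLS = {
    "flight/search": lambda origin, destination, date: [
        {"id": "F100", "stops": 1}, {"id": "F200", "stops": 0}],
    "flight/book": lambda flight_id: {"status": "booked", "flight_id": flight_id},
}

def tools_list():
    """Discovery step: the client asks what the server can do (tools/list)."""
    return sorted(TOOLS)

def tools_call(name, **arguments):
    """Invocation step: the client runs a tool on the server (tools/call)."""
    return TOOLS[name](**arguments)

assert "flight/search" in tools_list()
# Step 1: search with the parameters extracted from the user's request.
options = tools_call("flight/search", origin="JFK", destination="LHR",
                     date="next Friday")
# Step 2: pick the non-stop option and book it.
nonstop = next(f for f in options if f["stops"] == 0)
confirmation = tools_call("flight/book", flight_id=nonstop["id"])
print(confirmation)
```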

MCP in the Landscape of Agentic AI

Common Misconceptions About MCP

MCP is Just Another API

Not quite. While it uses protocols like HTTP, its purpose is fundamentally different. An API is for a human developer to integrate a specific service. MCP is a model-facing protocol designed to give a non-deterministic (probabilistic) reasoning agent access to a deterministic world of external tools.

MCP Replaces Traditional APIs

MCP is a layer built on top of traditional APIs, not a replacement. Most MCP servers act as lightweight wrappers around existing API endpoints. They translate complex, raw API requests/responses into simplified, structured Tool or Resource definitions that are LLM-friendly. The MCP server still uses the underlying REST API to do the actual work.

Tools are Agents

This is a critical distinction. An agent plans, reasons, and evaluates. A tool executes a specific, deterministic action. MCP provides the tools, but the agent uses them as part of a larger plan to achieve a goal.

MCP only provides tools

While tools are a key feature, MCP is about providing a full suite of context, including resources (like files and data) and prompts (reusable templates). An agent needs to be able to read data as well as take action.

Current Challenges and Limitations of MCP

While MCP is a promising technology, it's not without its challenges. Let’s explore a few of them.

Security Concerns

The Confused Deputy Problem

When an MCP server acts on a user’s request, there is a risk of the "confused deputy" problem. Ideally, the action is performed with the user’s permission on their behalf, but this depends on the server’s implementation. If done incorrectly, users may access resources available to the MCP server but meant to be restricted, violating the principle of least privilege.

Executable Code Risks

Since an MCP server runs executable code, it introduces security risks: vulnerabilities in the server's code can lead to unauthorized command execution. Developers must treat these servers with the same scrutiny as any other mission-critical application.

Malicious MCP Servers

Someone could build a malicious MCP server, and such a server poses significant risks. It may seem safe initially, with normal source code and tool descriptions, but future updates could alter its tools (tool injection), for example, changing a weather tool to collect and leak confidential data. Additionally, deceptive tool names can trick the LLM into using malicious tools instead of legitimate ones, leading to unintended or harmful actions.

Context Window Overload

When the number of available tools grows large (e.g., in an enterprise setting with hundreds of APIs), the tool definitions alone can consume a sizable portion, or even all, of the available context window. This leads to LLM confusion and poor performance: with too many options, the LLM struggles to identify the correct tool quickly and accurately. The result is slow decision-making and incorrect tool calls, as the relevant tools get "drowned out" by irrelevant information.
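
One common mitigation is to send the LLM only the tool definitions relevant to the current request, under a token budget. A rough sketch of the idea, where the token estimate, the tool set, and the keyword-overlap ranking are all simplifying assumptions:

```python
def estimate_tokens(text):
    # Very rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

# Hypothetical registry of tool definitions (name -> description).
TOOL_DEFS = {
    "flight/search": "Search flights by origin, destination, and date",
    "flight/book": "Book a flight by flight ID",
    "crm/getCustomerDetails": "Fetch a customer's master-data record",
    "slack/sendMessage": "Post a message to a Slack channel",
}

def select_tools(query, tool_defs, budget):
    """Rank tools by naive keyword overlap with the query, then keep as
    many top-ranked definitions as fit within the token budget."""
    words = set(query.lower().split())
    ranked = sorted(tool_defs.items(),
                    key=lambda kv: -len(words & set(kv[1].lower().split())))
    chosen, used = [], 0
    for name, desc in ranked:
        cost = estimate_tokens(name + ": " + desc)
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

shortlist = select_tools("search flights from new york to london",
                         TOOL_DEFS, budget=20)
print(shortlist)
```

In production this filtering is usually done with embeddings or a dedicated router model rather than keyword overlap, but the principle is the same: only a relevant subset of definitions reaches the context window.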

Enterprise Readiness for MCP Adoption

Adopting MCP isn’t just about selecting a top AI model or vendor. Being ready also means considering, but not limited to, the following factors:

Infrastructure Requirements

The foundational requirement for a successful Model Context Protocol implementation is a robust and highly scalable infrastructure. This demands more than just basic cloud compute. It requires dedicated platforms designed to handle the complexity and latency of AI agents. Furthermore, a Centralized Host Runtime is essential to manage the LLM orchestration, while an API Gateway can act as a governance and security layer.

Security and Authentication Standards

Mandate strict security standards for all MCP Servers (e.g., OAuth 2.x authentication, input validation), in keeping with the Principle of Least Privilege. Implement comprehensive logging and audit trails for every tool call and data access event to track AI agent activity for compliance and debugging.

Governance and Access Policies

Establish clear policies that specify which internal systems (APIs, databases, file stores) and metadata can be accessed through MCP Servers, ensuring adherence to ethical guidelines.

Organizational Culture and Cross-Functional Teams

Establish a core team comprising AI Engineers, Security Architects, and Domain Experts (the data owners) to collaboratively design, secure, and validate MCP tools.

Performance and Cost Optimization

Implement infrastructure for low-latency MCP Server deployment (e.g., using containers, edge computing, etc.). Monitor tool usage and associated costs (e.g., API call fees, database query load) to ensure AI workflows are financially viable.

Talent Reskilling and Training

Train developers to think in terms of LLM-centric API design (workflow over granularity) rather than traditional REST principles. Educate employees on the new capabilities and limitations of AI agents connected to internal systems.

Trusted Master Data is the Backbone of Agentic AI

AI agents powered by large language models (LLMs) depend heavily on high-quality, trusted data to deliver accurate, meaningful outcomes. Master data serves as the single source of truth, ensuring consistency, validity, and contextual richness. This is essential for AI agents to make reliable decisions across diverse enterprise functions.

Poor data quality, duplication, and disconnected sources compromise AI effectiveness and escalate risks in security, operational efficiency, and costs. In distributed environments, inconsistent data and fragmented governance can lead to redundant processes and soaring cloud and API expenses.

Why Master Data Management Matters for AI Agents

Master Data Management (MDM) provides a unified and governed foundation that anchors AI agents to trusted golden records covering customers, products, suppliers, and more. This foundation enables seamless real-time access to critical data and relationships, which MCP leverages to empower AI agents for reasoning, discovery, and autonomous task execution.

Challenges in Agentic AI Without Master Data and Standardized Integration

Traditionally, AI systems rely on direct API integration for data access and actions. This works well for general purposes, but it requires developers to configure the integration for every application and lacks the standardization and flexibility needed for autonomous AI agents.

Building smarter, autonomous AI systems requires dynamic discovery of and access to external data sources, tools, and services. Without robust discoverability, agents face resource identification challenges, leading to redundant actions, fragmented workflows, and inefficiencies. Scaling AI agents across heterogeneous systems increases complexity without standardized context sharing.

How MCP and Master Data Together Drive Scalable Agentic AI

The Model Context Protocol (MCP) standardizes how AI agents discover, access, and interact with data and tools across applications, enabling seamless, autonomous workflows. When integrated with high-quality master data, MCP:

  • Enables agents to access comprehensive, validated enterprise data for improved decision-making and automation.
  • Enhances Enterprise AI agents’ agility and cost efficiency by automating MDM tasks such as job execution, task routing, auto classification, auto translation, on-demand data validation, enrichment, and configuration, enabling seamless access and updates to master records while reducing manual effort.
  • Ensures consistent use of trusted data, prevents model drift, and enhances output reliability.
  • Supports multi-agent collaboration for orchestrating complex workflows while maintaining data governance and security.

Certain scenarios, such as customer and supplier onboarding, data stewardship, or configuration, rely on domain knowledge and metadata intelligence. With the built-in capabilities of sophisticated Informatica CLAIRE agents, agent-to-agent integration becomes possible. Additionally, external AI agents can access these robust CLAIRE agents on the Informatica Intelligent Data Management Cloud (IDMC), all while maintaining strict standards for data quality, governance, and compliance.

Therefore, high-quality master data is the non-negotiable foundation for building the agentic enterprise: an ecosystem where autonomous AI agents operate on trusted, connected enterprise data through interoperable protocols like MCP. Organizations that prioritize mastering their data foundation and embracing MCP will unlock scalable, secure, and impactful AI-driven innovation.

Context Engineering and MCP

What is Context Engineering?

Context Engineering is outside the scope of this blog, but I would like to briefly touch upon it due to its relevance to MCP. It is an emerging discipline in AI that focuses on the deliberate design and management of the entire flow of information between users, applications, and AI models. It goes beyond simple prompt crafting by determining what information the LLM sees, how that information is structured, and when it is delivered. Whether the aim is to compact context for long-horizon tasks, design token-efficient tools, or enable agents to explore their environment just in time, the core principle remains the same: identify the smallest set of high-value tokens that maximizes the chance of achieving the desired outcome. The goal is to maximize the quality, relevance, and structure of the input context, which directly impacts the AI's output quality, reliability, and security.

MCP as the Foundation for Context Engineering

The Model Context Protocol (MCP) acts as a critical standardization layer that directly provides the “foundational framework” for Context Engineering, particularly for the tool-use aspect. MCP provides a reliable, uniform method for connecting AI applications (MCP Clients) to external systems and data sources (MCP Servers). This standardized protocol helps address major context management challenges like context window limitations and relevance determination by making tool output clean and structured. By formalizing the way AI agents access and incorporate external actions and data, MCP ensures the assembled context is both trustworthy and operationally effective, enabling more complex agentic AI workflows in a secure, scalable manner.

Related Protocols: MCP, A2A, and ACP

The world of AI protocols can be confusing, so let's clarify the difference between MCP and a few other concepts.

MCP (Model Context Protocol)

This is a protocol for agent-to-tool/resource communication. It allows a single AI agent to access external tools and data sources.

A2A (Agent-to-Agent)

This is a protocol for Agent-to-Agent communication. It defines how two or more independent agents can communicate, delegate tasks, and collaborate on a shared goal.

ACP (Agent Communication Protocol)

This protocol is an open standard designed to enable seamless and interoperable communication between diverse AI agents, applications, and humans.

These protocols aren’t mutually exclusive and can complement each other. For instance, an agent might use MCP to access a tool, then use A2A to communicate with another agent, all orchestrated within a framework that uses ACP as a standardized protocol to ensure all internal agent messages and routing are compatible.

While a detailed explanation of A2A and ACP is outside the scope of this blog, I have tried to provide a brief overview of these protocols.

Conclusion: MCP's Role in the Future of Enterprise AI

MCP is more than just a passing industry buzzword. It’s a foundational shift in how we build and scale AI applications. By standardizing the way LLMs interact with the outside world, it solves real, persistent challenges and paves the way for a more interoperable, modular, and secure future for agentic AI. It's the silent, universal connector that makes the grand vision of powerful, context-aware AI agents a tangible reality.

Frequently Asked Questions About Model Context Protocol

What is Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open, standardized communication protocol that allows AI agents, such as large language models (LLMs), to securely and dynamically access external enterprise resources and execute tools. It provides a unified, standard way for the LLM to discover capabilities and manage the context needed to perform complex, real-world actions reliably.

How does MCP differ from traditional APIs?

MCP is designed to be AI native, unlike traditional APIs, which are built for human developers. MCP abstracts the low-level complexity, eliminating the "manual translation burden" of verbose wrappers. It offers a unified, discovery-driven interface that allows the LLM to focus on high-level goals, making it the "USB-C" for connecting diverse AI agents to tools.

What are the main components of MCP architecture?

The core architecture is client-server-based, comprising the following main components:

  • Host: The AI application that coordinates and manages one or multiple MCP clients. The host is the main AI application or interface.
  • MCP Client: This component runs within the host application and acts as an intermediary, managing the connection to one or more MCP servers.
  • MCP Server: This is a separate program that wraps a specific data source or functionality behind the MCP standard. The server "exposes" a set of Tools, Resources, and Prompts.
  • MCP Tools: These are the specific functions an AI can call to perform actions. Examples include readFile, sendSlackMessage, identifyCustomer, getCustomerDetails, and listProducts. They are defined with a clear schema for inputs and outputs.
  • MCP Resources: An MCP Resource is a data object that an AI can read or reference. It represents a piece of information, such as a file, a database schema, or application-specific data.

What security concerns exist with Model Context Protocol?

The Confused Deputy Problem arises when an MCP server executes actions using its own high privileges instead of the end-user's, causing unauthorized resource access. Additionally, MCP servers introduce risks from executable code vulnerabilities (like Prompt Injection) and Malicious Servers that could trick the LLM into harmful actions. Finally, a large number of available tools can cause Context Window Overload, confusing the LLM and severely degrading performance.

Why is master data important for agentic AI with MCP?

Master data is crucial because it provides the AI agent with a single, consistent, and clean source of truth (Context) for core entities (e.g., customers, products, suppliers, etc.). By accessing MDM via MCP, the AI can avoid processing conflicting data, drastically reducing hallucinations, improving decision accuracy, and ensuring compliance in its actions.

Learn more about Informatica tools and resources.

 


First Published: Jan 05, 2026