AI agents are everywhere, drafting emails, summarizing reports, answering questions in plain language. But ask one to do something genuinely useful inside your business, like pull the latest readings from a SCADA system or validate a permit against a zoning database, and the limits show up fast. Language models are powerful reasoners, but on their own, they are isolated from the systems where real work happens.
The gap between a thinking AI and a doing AI is what the Model Context Protocol (MCP) was built to close. MCP gives AI a standardized way to access external tools and systems. But a protocol is only useful if there is something on the other end of it: a workflow that knows how to read your databases, transform your data, and call your APIs. That is where FME comes in.
This post breaks down how FME uses MCP to turn the workflows you have already built into AI-callable tools.
What Are AI-Callable Tools?
An AI-callable tool is any capability, query, calculation, or action that an AI model can invoke when it decides it needs to. The model does not run the tool itself; it sends a structured request, the work is executed elsewhere, and the result is returned as context for the model’s next step.
In an MCP architecture, three pieces are clearly separated:
- Tool: something the AI can invoke (a workflow, an API call, a database query)
- MCP: the standardized way the AI describes what it wants and receives the answer
- Execution: happens outside the model, in a system designed to actually do the work
In essence, the AI is asking your system to do the work. That is what keeps AI grounded in your real source of truth instead of guessing from training data.
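To make that separation concrete, here is a minimal sketch of a tool defined with the official MCP Python SDK’s FastMCP helper. The tool name and the stubbed lookup are hypothetical; the point is that the logic lives and runs outside the model, which only ever sees the tool’s name, description, and input schema.

```python
# Minimal sketch of an AI-callable tool using the MCP Python SDK (FastMCP).
# The tool name and the stubbed reading below are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("plant-sensors")

@mcp.tool()
def latest_reading(sensor_id: str) -> dict:
    """Return the most recent reading for a sensor."""
    # Execution happens here, outside the model: in a real server this would
    # query the SCADA historian or database of record.
    return {"sensor_id": sensor_id, "value": 42.7, "unit": "degC"}

if __name__ == "__main__":
    mcp.run()
```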
The Challenge of Connecting AI to Data Workflows
Most enterprises have already solved data integration once. They have ETL pipelines, geospatial workflows, and orchestration logic spanning hundreds of systems. The problem is that none of it was designed to be called by an AI agent.
The traditional alternatives all have serious drawbacks:
- Custom-built integrations tie a specific AI model to a specific system through hand-written glue code. They work for one use case and break when anything upstream changes.
- The N×M problem. With N AI models and M systems, naively connecting them means building N × M integrations. Adding one model means building M more; adding one system means building N more.
- Fragmented AI tooling. Function calling, plugins, and proprietary connectors each work inside their own walled garden. Logic written for one provider has to be rewritten for the next.
MCP collapses this into a single standardized interface. Expose your systems as MCP tools once, and any MCP-compatible AI can use them.
How MCP Connects AI Models to External Tools
Under the hood, MCP is an open protocol built on JSON-RPC messages, typically carried over HTTP. The AI sends a request, work happens somewhere else, and a response comes back. The flow has four steps:
- The AI sends a request. The model decides it needs information or an action it cannot produce on its own and formats the request to the MCP specification.
- MCP routes the request to the appropriate MCP server, which knows what tools are available and how to call them.
- The tool executes. Real work happens here, outside the model. A database is queried, a file is parsed, a workflow runs, an API is called.
- A response is returned to the AI, which incorporates it into its answer or its next decision.
Step 3, where the actual work happens, is exactly the kind of thing FME has been doing for over 30 years. MCP just gives AI a clean way to ask for it.
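At the wire level, each request and response in that flow is a JSON-RPC 2.0 message. The sketch below shows roughly what a tools/call exchange looks like; the tool name and arguments are hypothetical, while the field names follow the MCP specification.

```python
# Approximate shape of an MCP tools/call exchange (JSON-RPC 2.0).
# Tool name and arguments are hypothetical; field names follow the MCP spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "latest_reading",               # which tool to run
        "arguments": {"sensor_id": "PUMP-07"},  # inputs matching the tool's schema
    },
}

# The server executes the tool and replies with content the model can read.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": '{"sensor_id": "PUMP-07", "value": 42.7}'}],
        "isError": False,
    },
}
```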
How MCP Works in FME Workflows
FME’s role in MCP is twofold: it can serve workflows as MCP tools, and it can call other MCP tools from inside its workflows.
FME Flow as an MCP Server
FME Flow is being extended with MCP Server capabilities, exposing workspaces as governed, AI-ready tools. Every published workflow becomes discoverable and callable through the MCP interface, with OAuth 2.0 keeping access controlled. Nothing gets rebuilt. The same workspace running your nightly data sync can now also be called by an AI agent on demand.
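As a sketch of what discovery could look like from the client side, the snippet below uses the MCP Python SDK’s streamable HTTP client to connect and list the tools a server advertises. The URL and token handling are assumptions for illustration, not a documented FME Flow endpoint.

```python
# Hypothetical client-side discovery against an MCP server secured with OAuth 2.0.
# The URL below is a placeholder, not a documented FME Flow endpoint.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

FME_MCP_URL = "https://fmeflow.example.com/mcp"   # placeholder
ACCESS_TOKEN = "eyJ..."                           # obtained via your OAuth 2.0 flow

async def list_fme_tools():
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    async with streamablehttp_client(FME_MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_fme_tools())
```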
Workflows Become AI-Callable Tools
Each FME workspace is already a self-contained capability with defined inputs, outputs, validation logic, and system connections. Wrapping it in MCP just publishes those characteristics in a format AI models can understand. No custom API. The workspace’s parameters become the tool’s input schema automatically, and updates flow through without breaking the AI integration.
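For example, a workspace with two published parameters might surface to the AI as a tool definition along these lines. The workspace name and parameters are invented; the structure follows the JSON Schema format MCP uses to describe tool inputs.

```python
# Hypothetical tool definition an AI client might see for a published workspace.
# Workspace name and parameters are invented; inputSchema is JSON Schema,
# which is what MCP uses to describe tool inputs.
tool_definition = {
    "name": "buffer_parcels",
    "description": "Buffer parcel geometries and write the result to the reporting store.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "SOURCE_DATASET": {"type": "string", "description": "Input parcel layer"},
            "BUFFER_DISTANCE": {"type": "number", "description": "Buffer distance in metres"},
        },
        "required": ["SOURCE_DATASET", "BUFFER_DISTANCE"],
    },
}
```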
FME as an Orchestration Layer
Real-world tasks rarely involve just one system. A useful answer might mean pulling from a GIS database, reconciling it with an asset register, transforming the geometry, and writing the result back to a reporting system. FME has always been good at this kind of multi-system orchestration. The MCP server simply exposes it to AI as a single, clean capability; the AI does not need to know what is involved underneath. It just calls the tool.
Calling External MCP Services with MCPCaller
FME also works in the other direction. The new MCPCaller transformer lets FME workspaces consume tools from any MCP server in the growing ecosystem, instantly extending FME’s reach to systems it does not have a native connector for. MCPCaller has two modes: design-time (deterministic), where the author specifies exactly which tool to call, and run-time (dynamic), where the workspace evaluates intent at execution and picks the most appropriate tool from the available inventory.
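Conceptually, the run-time mode boils down to matching intent against the tool inventory the MCP server advertises. The sketch below is not how MCPCaller is implemented; it is a rough illustration of choosing a tool by its name and description at execution time.

```python
# Rough illustration of run-time (dynamic) tool selection: score each advertised
# tool against the stated intent and pick the best match. Not the MCPCaller
# implementation, just the general idea.
def pick_tool(intent: str, tools: list[dict]) -> dict | None:
    intent_words = set(intent.lower().split())

    def score(tool: dict) -> int:
        text = f"{tool['name']} {tool.get('description', '')}".lower()
        return sum(1 for word in intent_words if word in text)

    best = max(tools, key=score, default=None)
    return best if best and score(best) > 0 else None

tools = [
    {"name": "check_zoning_conflicts", "description": "Check a parcel against zoning rules"},
    {"name": "buffer_parcels", "description": "Buffer parcel geometries"},
]
print(pick_tool("find zoning conflicts for a parcel", tools))  # -> check_zoning_conflicts
```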
Together, MCP Server and MCPCaller make FME both a producer and a consumer of AI-callable tools. FME is the execution and orchestration layer for MCP.
Turning an FME Workflow into an AI-Callable Tool: A Scenario
Imagine a city planner asking an AI assistant: “Are there any zoning conflicts for the proposed development at 1247 Main Street?”
Here is how that resolves through FME with MCP:
- The AI receives the question and recognizes it cannot answer from training data alone: it needs live spatial data and policy rules.
- The AI calls an MCP tool exposed by FME Flow’s MCP Server, such as check_zoning_conflicts.
- MCP routes the request to the published FME workspace.
- FME executes the workflow. It pulls the parcel from the GIS database, intersects it against current zoning layers, checks policy rules, and validates against recent council amendments.
- Results return to the AI, which turns them into a natural-language answer for the planner, grounded in the underlying data.
No custom integration was written, and no data was hallucinated. The AI did what it is good at (language and reasoning), and FME did what it is good at (orchestrating data across enterprise systems).
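To make that exchange concrete, a client-side call to the check_zoning_conflicts tool might look like the sketch below. It assumes an already-initialized MCP ClientSession (as in the earlier discovery example), and the address argument name is invented for illustration.

```python
# Hypothetical call to the check_zoning_conflicts tool; `session` is assumed to be
# an initialized mcp.ClientSession (see the discovery sketch above). The argument
# name "address" is invented for illustration.
from mcp import ClientSession

async def check_conflicts(session: ClientSession, address: str) -> None:
    result = await session.call_tool(
        "check_zoning_conflicts",
        arguments={"address": address},
    )
    # The FME workspace has already queried the GIS database and applied the
    # policy rules; the AI only sees the returned content blocks.
    for block in result.content:
        if block.type == "text":
            print(block.text)
```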
Why MCP and FME Change How AI Integrates with Data
- Scalability: no more N×M custom integrations. Expose workflows once, reuse across any AI.
- Flexibility: swap AI models without rebuilding. Today’s model can become tomorrow’s without touching a workspace.
- Reusability: every workflow you have ever built is a candidate AI tool. Years of accumulated integration logic get a second life.
- Governance and data residency: AI agents only access what you explicitly expose. OAuth 2.0, audit trails, and FME Flow’s existing security model apply to every AI-triggered call. Because FME Flow runs where you deploy it, including on-premises or in air-gapped environments, your sensitive data never has to leave your infrastructure to be useful to an AI.
Where FME Fits in an MCP Architecture
For technical decision-makers, FME plays three distinct roles in an MCP-enabled stack:
- Integration layer: connecting AI to the long tail of enterprise systems, including spatial data sources, that other tools cannot reach.
- Governance layer: a single, controlled gateway between AI agents and sensitive enterprise systems, with authentication, authorization, and observability built in.
- Execution engine: running the actual logic that turns an AI’s request into a real, validated answer.
This architecture matters most where data residency and sensitivity are non-negotiable. For organizations operating under GDPR, sector-specific regulations, or internal policies that prohibit cloud exposure of sensitive records, the question is not just “can AI access this data?” but “where does the data physically live during the interaction?” With FME Flow deployed on-premises, in a private cloud, or in an air-gapped environment, the answer is straightforward: the workflow runs locally, the data stays inside your boundary, and only the result, the part you have decided is safe to share, is returned to the AI. The AI gets the answer it needs without ever touching the underlying records.
This is what Safe Software means by “All-Data, Any-AI.” Bring whichever AI your organization chooses, connect it to whichever systems hold your data, on whatever infrastructure your policies require, and let FME do the heavy lifting in between.
Getting Started with MCP in FME
MCP capabilities are arriving across the FME Platform in phases. The MCPCaller transformer is part of FME 2026.1, available in FME Form for authoring workflows, with FME Flow’s MCP Server arriving as a beta capability in FME 2026.2.
- FME with MCP solutions overview
- Model Context Protocol on the FME Platform
- On-demand webinar: MCP and the Power of Choice
- AI agent development guide (best practices for building agents on FME)
- MCP adoption discussion in the FME Community (see how other organizations are approaching MCP)
The pattern that has held for the last three decades of data integration holds here too: the systems that win are the ones that connect everything else. MCP is the new connector standard for AI, and FME is how your existing workflows show up on the other end.
Ready to turn your workflows into AI-callable tools? Get started with FME or request a demo to see what MCP and FME can do together in your environment.