
How to build and deploy a Model Context Protocol (MCP) server

The Model Context Protocol (MCP) is an open standard that’s revolutionizing how AI agents connect to external tools and data. While building a basic MCP server is straightforward, deploying, scaling, and securing it for production is a significant challenge. Metorial’s serverless MCP runtime simplifies this entire process, allowing developers to deploy robust, scalable MCP servers in just a few clicks, so they can focus on building innovative AI applications instead of managing infrastructure.

The Dawn of a New Era: Agentic AI and the Integration Challenge

We are witnessing a transformative shift in artificial intelligence. AI is no longer just about passive models that respond to prompts; we are entering the age of agentic AI. These are proactive, autonomous systems capable of planning and executing complex, multi-step tasks to achieve a goal. According to a recent McKinsey report on the state of AI, 62% of organizations are already experimenting with AI agents, signaling a massive wave of adoption on the horizon.

However, for an AI agent to be truly useful, it needs to interact with the outside world. It needs to access data from APIs, read files from Google Drive, send messages on Slack, or update records in Salesforce. This is where the integration challenge begins. Historically, connecting AI to external systems has been a messy, time-consuming process. Each new integration requires custom code, complex authentication handling, and constant maintenance to keep up with API changes. A study found that a staggering 95% of organizations face significant API integration challenges, a problem that only gets worse as the number of tools and data sources multiplies.

This is the problem that the Model Context Protocol (MCP) was created to solve. In our internal article, The jQuery Age of AI Agents, we draw a parallel between the current state of AI integrations and the early days of web development, where jQuery brought order to the chaos of browser incompatibilities. MCP aims to do the same for AI.

What is the Model Context Protocol (MCP)?

Introduced by Anthropic in late 2024, the Model Context Protocol (MCP) is an open-source standard designed to create a universal language for AI agents to communicate with external systems. Instead of building dozens of bespoke integrations, developers can build a single MCP server that exposes tools and data to any MCP-compatible AI client, like Claude, ChatGPT, or a custom-built agent.

An MCP server can provide three core capabilities:

  • Tools: Functions that an AI agent can call to perform actions, such as send_email or create_calendar_event.
  • Resources: File-like data that an agent can read, such as the contents of a document, a database query result, or an API response.
  • Prompts: Pre-written templates that guide the user or the AI agent through a specific task.

By standardizing this interaction, MCP decouples the AI agent from the tools it uses. This means you can build an MCP server for your company’s internal API once and have it be instantly accessible to any AI agent your team uses, now and in the future.
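
Under the hood, this standardized interaction is a series of JSON-RPC 2.0 messages exchanged over a transport such as stdio or HTTP. A tool invocation, for example, is a tools/call request; the sketch below builds one in plain Python (the tool name and arguments are illustrative, not part of any real server):

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request. The method and the
# params shape (name + arguments) follow the MCP spec's tools/call message.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_email",
        "arguments": {"to": "team@example.com", "subject": "Weekly report"},
    },
}

# Serialize for the wire; the server replies with a JSON-RPC response
# carrying the tool's result under the same id.
print(json.dumps(request, indent=2))
```

Because every client and server speaks this same message shape, an agent can discover tools via tools/list and invoke any of them without bespoke glue code.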

How to Build an MCP Server: A Step-by-Step Guide

Building a basic MCP server is surprisingly straightforward, especially with the help of libraries like FastMCP for Python. Let’s walk through creating a simple weather server that can provide weather forecasts and alerts.

Step 1: Set Up Your Environment

First, you’ll need Python 3.10 or higher. We recommend using a virtual environment to manage your dependencies. You can install the necessary packages using pip:

pip install "mcp[cli]" httpx

Step 2: Write the Server Code

Create a new Python file, for example weather_server.py. We’ll use the FastMCP library to define our server and its tools. The library uses Python type hints and docstrings to automatically generate the tool definitions that the AI client will see.


import httpx
from mcp.server.fastmcp import FastMCP
from typing import Any

# Initialize the MCP server
mcp = FastMCP("weather_server")

# Helper to call the National Weather Service API
async def make_nws_request(url: str) -> dict[str, Any] | None:
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(
                url,
                headers={"User-Agent": "(my-weather-app, my-email@example.com)"},
            )
            response.raise_for_status()
            return response.json()
        except httpx.HTTPError as e:
            print(f"HTTP error occurred: {e}")
            return None

@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get the weather forecast for a specific location."""
    points_url = f"https://api.weather.gov/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)
    if not points_data:
        return "Could not retrieve forecast data."

    forecast_url = points_data.get("properties", {}).get("forecast")
    if not forecast_url:
        return "Could not find forecast URL."

    forecast_data = await make_nws_request(forecast_url)
    if not forecast_data:
        return "Could not retrieve detailed forecast."

    periods = forecast_data.get("properties", {}).get("periods", [])
    if not periods:
        return "No forecast periods found."

    return periods[0]["detailedForecast"]

if __name__ == "__main__":
    mcp.run(transport="stdio")
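
To see why the type hints and docstring matter, here is a stdlib-only sketch of how a library like FastMCP can derive a tool definition from a function's signature (the helper name and schema subset are illustrative, not FastMCP's actual internals):

```python
import inspect
from typing import get_type_hints

# Illustrative mapping from Python annotations to JSON Schema type names.
_JSON_TYPES = {float: "number", int: "integer", str: "string", bool: "boolean"}

def tool_definition(func):
    """Build a minimal tool definition from a function's hints and docstring."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # only parameters belong in the input schema
    properties = {
        name: {"type": _JSON_TYPES.get(tp, "string")} for name, tp in hints.items()
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

async def get_forecast(latitude: float, longitude: float) -> str:
    """Get the weather forecast for a specific location."""
    ...

definition = tool_definition(get_forecast)
print(definition["name"])  # get_forecast
```

The AI client never sees your Python code; it sees only a schema like this one, which is why clear parameter names and docstrings directly improve how well agents use your tools.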

Step 3: Run the Server

You can run your server from the command line:

python weather_server.py

This will start the MCP server, which will listen for requests over standard input/output (stdio). You can now connect this server to a local MCP client, like the Claude for Desktop app, and start asking for weather forecasts!
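
Claude for Desktop, for instance, discovers local stdio servers through its claude_desktop_config.json file. The expected shape is sketched below in Python (the server key "weather" and the script path are placeholders for your own setup; use an absolute path):

```python
import json

# Entry for claude_desktop_config.json registering the weather server.
# Claude for Desktop launches the listed command and speaks MCP over stdio.
config = {
    "mcpServers": {
        "weather": {
            "command": "python",
            "args": ["/absolute/path/to/weather_server.py"],
        }
    }
}

print(json.dumps(config, indent=2))
```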

The Deployment Dilemma: From Localhost to Production

Building a simple MCP server that runs locally is the easy part. The real challenge comes when you need to deploy it for production use. This is where developers often run into a wall of complexity. Here are some of the key challenges:

  • Hosting and Infrastructure: Where do you run your server? You could use a traditional VPS, a container service, or a serverless platform. Each option comes with its own set of complexities around configuration, maintenance, and cost.
  • Scalability: What happens when your AI agent becomes popular and starts making thousands or even millions of requests? Your server needs to be able to scale automatically to handle the load without crashing or slowing down. This often requires setting up load balancers, auto-scaling groups, and a robust monitoring system.
  • Security and Authentication: How do you ensure that only authorized users and agents can access your MCP server? You need to implement a secure authentication and authorization system, which can be a significant undertaking, especially if you need to support standards like OAuth 2.0.
  • Observability: When things go wrong, you need to be able to debug them. This requires detailed logging, tracing, and monitoring to understand what your AI agent is doing and why it might be failing.

These challenges can quickly turn a simple MCP server project into a major infrastructure undertaking, distracting you from your core goal: building amazing AI applications. This is where Metorial comes in.

The Metorial Advantage: Serverless MCP for Effortless Deployment

At Metorial, we believe that developers should be able to focus on building great AI products, not on managing infrastructure. That’s why we built a powerful, serverless MCP runtime designed to make deploying, scaling, and securing your MCP servers effortless. With Metorial, you can go from code to a production-ready, globally scalable MCP server in just a few clicks.

Here’s how Metorial solves the deployment dilemma:

  • Instant, Serverless Deployment: With Metorial, you don’t have to worry about servers at all. Simply provide your code, and our platform automatically builds and deploys it on our serverless infrastructure. Your MCP server will be live and ready to serve requests in seconds.
  • Automatic Scaling: Our platform is built to handle massive scale. Whether you have one user or millions, your MCP server will automatically scale to meet the demand. Our proprietary hibernation technology ensures that your server starts in under a second when a request comes in and scales down to zero when it’s not in use, so you only pay for what you use.
  • Enterprise-Grade Security: Security is at the core of our platform. We provide a secure environment for your MCP servers with built-in authentication and authorization. Our platform handles the complexities of OAuth 2.0, so you can easily and securely connect your users’ accounts from services like Google, Slack, and hundreds of others.
  • Built-in Observability: Metorial provides detailed logs, traces, and metrics for every request to your MCP server. This gives you complete visibility into how your AI agents are using your tools, making it easy to debug issues and optimize performance.

Metorial vs. The Alternatives

When it comes to deploying MCP servers, you have a few options. Let’s see how they stack up against Metorial.

| Feature | Self-Hosting | Other Platforms (Workato, Pipedream) | Metorial |
| --- | --- | --- | --- |
| Deployment Complexity | High (manual setup of servers, networking, etc.) | Medium (platform-specific configurations) | Low (deploy in 3 clicks or via API) |
| Scalability | Manual (requires setting up load balancers, auto-scaling) | Managed, but often with limits | Automatic and limitless (zero to millions of requests) |
| Cost Model | Fixed (pay for idle servers) | Tiered, often based on tasks or active users | Usage-based (pay only for requests served) |
| MCP Native Support | Requires custom implementation | Limited or non-existent | First-class, native support for MCP |
| Security | DIY (you are responsible for everything) | Managed, but may not be designed for agentic AI | Enterprise-grade, with per-user isolation at scale |

While platforms like Workato and Pipedream are great for traditional workflow automation, they weren’t built for the new paradigm of agentic AI and the Model Context Protocol. Metorial is the only platform that provides a serverless, scalable, and secure environment specifically designed for deploying and managing MCP servers. With over 600 pre-built MCP servers in our marketplace and the ability to deploy your own custom servers, Metorial is the most powerful and flexible platform for building advanced agentic AI applications.

Conclusion: Build, Deploy, and Scale with Confidence

The rise of agentic AI is creating incredible new opportunities, but it also brings new challenges. The Model Context Protocol provides a much-needed standard for connecting AI agents to the world, but building and deploying production-ready MCP servers is a complex task. Metorial removes this complexity, providing a powerful serverless platform that lets you deploy, scale, and secure your MCP servers with ease.

Don’t let infrastructure challenges slow down your AI innovation. With Metorial, you can focus on what you do best: building the next generation of intelligent applications. The future of AI is agentic, and the future of agentic AI is built on Metorial.

Ready to get started? Sign up for Metorial today and deploy your first MCP server in minutes!
