
Model Context Protocol (MCP): The Complete Multi-Platform Setup Guide

The Model Context Protocol (MCP) is revolutionising how AI coding assistants connect to external tools and data sources. This comprehensive guide covers MCP setup across six major platforms: OpenCode, Cursor, Claude Desktop, Windsurf, Continue, and Zed—giving you the knowledge to supercharge your AI-powered development workflow.


Connect your AI assistant to the tools it needs

Key Takeaways

  • MCP is an open standard by Anthropic that enables AI applications to connect with external data sources, tools, and workflows.
  • Think of MCP as a USB-C port for AI: a standardised way to connect AI applications to external systems.
  • Major AI coding platforms including OpenCode, Cursor, Claude Desktop, Windsurf, Continue, and Zed all support MCP.
  • MCP servers can be local (running on your machine) or remote (cloud-hosted), with support for various transport protocols.

What is the Model Context Protocol?

The Model Context Protocol (MCP) is an open-source standard developed by Anthropic for connecting AI applications to external systems. Using MCP, AI applications like Claude, ChatGPT, or any MCP-compatible coding assistant can connect to data sources (e.g., local files, databases), tools (e.g., search engines, APIs), and workflows (e.g., specialised prompts).

Model Context Protocol architecture: Host applications connect to MCP servers via JSON-RPC, accessing tools, resources, and prompts through a standardised interface

Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardised way to connect electronic devices, MCP provides a standardised way to connect AI applications to external systems.

Before MCP, each AI tool had its own proprietary way of connecting to external services. This meant developers had to learn different integration patterns for each platform, and tool creators had to build separate integrations for every AI assistant. MCP solves this by providing a universal protocol that works across all compatible platforms.

How MCP Works

MCP operates on a client-server architecture:

  • MCP Clients: AI applications (like OpenCode, Cursor, or Claude Desktop) that consume capabilities from MCP servers.
  • MCP Servers: Programs that expose specific capabilities (tools, resources, prompts) through the MCP protocol.
  • Transport Layer: The communication method between clients and servers (stdio for local, HTTP/SSE for remote).
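Under the hood, clients and servers exchange JSON-RPC 2.0 messages over the chosen transport. The sketch below illustrates the message shape for a `tools/list` request over stdio using only the standard library; a real client would use an MCP SDK and perform the full `initialize` handshake first, and the empty tool list in the reply is a placeholder.

```python
import json

# A JSON-RPC 2.0 request as an MCP client might send over stdio
# (illustrative sketch only; real clients negotiate capabilities
# via the initialize handshake before listing tools).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Serialised to a single line of JSON; stdio transport is
# newline-delimited on the server's stdin/stdout.
wire = json.dumps(request)

# The server replies with a result carrying the same id.
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}'
)
assert response["id"] == request["id"]
print(wire)
```

Matching the `id` field is how the client pairs each response with the request that produced it, which matters once multiple requests are in flight.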

Why MCP Matters for Developers

MCP unlocks transformative capabilities for AI-assisted development:

Real-Time Data Access

AI agents can access your Google Calendar, Notion, databases, and other live data sources to provide contextually relevant assistance.

Tool Integration

Connect AI to search engines, browsers, 3D design tools, deployment platforms, and virtually any service with an API.

Reduced Development Time

Build integrations once using the MCP standard, and they work across all compatible AI platforms automatically.

Enhanced AI Capabilities

Transform AI from a knowledge-limited assistant to a powerful agent that can take actions and access current information.

MCP Capabilities: Tools, Resources, and Prompts

MCP servers can expose three types of capabilities to AI clients:

Tools

Functions that AI models can execute to perform actions like searching, file operations, or API calls.

Resources

Structured data sources that can be read and referenced by the AI model during conversations.

Prompts

Templated messages and workflows that guide AI interactions for specific use cases.
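To make the three capability types concrete, here is a hedged sketch of what a server might advertise in its list responses. The field names (`name`, `description`, `inputSchema`, `uri`) follow the MCP specification's list-response shapes, but the specific entries (`web_search`, the README resource, `code_review`) are hypothetical examples, not from any real server.

```python
# Hypothetical server capabilities, grouped by the three MCP types.
capabilities = {
    # Tools: functions the model can execute, each with a JSON Schema
    # describing its input.
    "tools": [
        {
            "name": "web_search",
            "description": "Search the web for a query",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ],
    # Resources: structured data the model can read, addressed by URI.
    "resources": [
        {"uri": "file:///project/README.md", "name": "Project README"}
    ],
    # Prompts: reusable templated workflows.
    "prompts": [
        {"name": "code_review", "description": "Review a diff for issues"}
    ],
}
print(sorted(capabilities))
```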

Platform Comparison

Here's a comparison of MCP support across the major AI coding platforms:

| Platform | Config File | Transport Types | Difficulty |
| --- | --- | --- | --- |
| OpenCode | opencode.json | Local (stdio), Remote (HTTP), OAuth | Easy |
| Cursor | mcp.json | stdio, SSE, Streamable HTTP | Easy |
| Claude Desktop | claude_desktop_config.json | stdio | Medium |
| Windsurf | mcp_config.json | stdio, HTTP | Easy |
| Continue | .continue/mcpServers/*.yaml | stdio, SSE, Streamable HTTP | Medium |
| Zed | settings.json | stdio, HTTP | Easy |

Setting Up MCP in OpenCode

OpenCode is an open source AI coding agent available as a terminal-based interface, desktop app, or IDE extension. It supports both local and remote MCP servers with OAuth authentication.

Local MCP Server Configuration

Add local MCP servers to your opencode.json configuration file:

opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "filesystem": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
      "enabled": true
    }
  }
}

Remote MCP Server Configuration

For cloud-hosted MCP servers, use the remote configuration:

opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "enabled": true
    }
  }
}

Using MCP Tools

Once configured, reference the MCP server in your prompts:

use the context7 tool to search the Next.js documentation for server components

Tip

You can also add instructions to your AGENTS.md file to automatically use specific MCP tools for certain tasks.

Setting Up MCP in Cursor

Cursor is an AI-first code editor with extensive MCP support, including one-click installation from their MCP server collection.

Configuration File Location

  • Project-specific: .cursor/mcp.json
  • Global: ~/.cursor/mcp.json

Node.js MCP Server

mcp.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}

Remote Server Configuration

mcp.json
{
  "mcpServers": {
    "remote-api": {
      "url": "https://api.example.com/mcp",
      "headers": {
        "Authorization": "Bearer ${env:MY_API_TOKEN}"
      }
    }
  }
}

Variable Interpolation

Cursor supports variable interpolation in configuration:

  • ${env:NAME} - Environment variables
  • ${userHome} - Home directory path
  • ${workspaceFolder} - Project root directory
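The expansion these placeholders perform can be sketched in a few lines. This is an illustrative re-implementation of the substitution behaviour, not Cursor's actual code, and the `MY_API_TOKEN` variable is just an example name.

```python
import os
import re
from pathlib import Path

def interpolate(value: str, workspace: str) -> str:
    """Expand Cursor-style placeholders in a config string
    (illustrative re-implementation, not Cursor's own code)."""
    value = value.replace("${userHome}", str(Path.home()))
    value = value.replace("${workspaceFolder}", workspace)
    # ${env:NAME} -> value of the NAME environment variable
    # (empty string if the variable is unset).
    return re.sub(
        r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["MY_API_TOKEN"] = "abc123"  # example value for the demo
print(interpolate("Bearer ${env:MY_API_TOKEN}", "/repo"))  # Bearer abc123
```

Resolving secrets from the environment at load time is what lets you commit the config file without committing the token itself.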

Setting Up MCP in Claude Desktop

Claude Desktop is Anthropic's official desktop application, providing native MCP support for file system access and other tools.

Configuration File Location

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Filesystem Server Example

claude_desktop_config.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/Users/username/Downloads"
      ]
    }
  }
}

Security Note

Only grant access to directories you're comfortable with Claude reading and modifying. The server runs with your user account permissions.

Verification

After configuration, restart Claude Desktop. You should see an MCP server indicator in the bottom-right corner of the conversation input box. Click it to view available tools.

Setting Up MCP in Windsurf

Windsurf by Codeium features Cascade, an intelligent agent with native MCP integration. It supports both stdio and HTTP transport types.

Plugin Store Installation

The easiest way to add MCP servers in Windsurf is through the Plugin Store:

  1. Click the Plugins icon in the Cascade panel
  2. Browse available MCP plugins
  3. Click Install on your desired plugin
  4. Press the refresh button after installation

Manual Configuration

For custom servers, edit ~/.codeium/windsurf/mcp_config.json:

mcp_config.json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}

HTTP Server Configuration

mcp_config.json
{
  "mcpServers": {
    "figma": {
      "serverUrl": "https://your-server-url/mcp"
    }
  }
}

Tool Limit

Cascade has a limit of 100 total tools. You can toggle individual tools on/off from the plugin's Tools tab or Windsurf Settings.

Setting Up MCP in Continue

Continue is an open source AI code assistant for VS Code and JetBrains. It uses YAML configuration for MCP servers.

Quick Start Example

Create a folder .continue/mcpServers/ in your workspace and add a YAML file:

.continue/mcpServers/playwright-mcp.yaml
name: Playwright mcpServer
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser search
    command: npx
    args:
      - "@playwright/mcp@latest"

Transport Types

Continue supports multiple transport types:

YAML
# Streamable HTTP Transport
mcpServers:
  - name: HTTP Server
    type: streamable-http
    url: https://your-server.com

Using Secrets

YAML
mcpServers:
  - name: GitHub
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Note

MCP can only be used in Continue's agent mode. Make sure you're using agent mode when prompting with MCP tools.

Setting Up MCP in Zed

Zed is a high-performance, multiplayer code editor with native MCP support through extensions and custom configuration.

Installing via Extensions

Many MCP servers are available as Zed extensions:

  1. Open Command Palette and run zed: extensions
  2. Search for MCP servers (e.g., “GitHub MCP”, “Context7”)
  3. Install the extension
  4. Configure required credentials when prompted

Custom Server Configuration

Add custom servers to your settings.json:

settings.json
{
  "context_servers": {
    "local-mcp-server": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
      "env": {}
    },
    "remote-mcp-server": {
      "url": "https://mcp.example.com",
      "headers": { "Authorization": "Bearer <token>" }
    }
  }
}

Custom Agent Profiles

Create custom profiles to control which tools are enabled:

settings.json
{
  "agent": {
    "profiles": {
      "mcp-only": {
        "name": "MCP Only",
        "tools": {
          "read_file": false,
          "edit_file": false,
          "terminal": false
        },
        "enable_all_context_servers": true
      }
    }
  }
}

Best Practices and Security

Security Considerations

  • Verify sources: Only install MCP servers from trusted developers and repositories.
  • Use environment variables: Never hardcode API keys or secrets in configuration files.
  • Limit permissions: Use restricted API keys with minimal required permissions.
  • Review code: For critical integrations, audit the server's source code.

Performance Tips

  • Limit active servers: Each MCP server adds to your context window. Only enable what you need.
  • Use per-project configuration: Configure MCP servers at the project level rather than globally.
  • Disable unused tools: Toggle off individual tools you don't need within an MCP server.

Get Started Today

MCP is transforming how developers interact with AI coding assistants. Start with a single platform and one MCP server, then expand as you become comfortable with the capabilities.

Frequently Asked Questions

What is the Model Context Protocol?

The Model Context Protocol (MCP) is an open-source standard developed by Anthropic that enables AI applications to connect to external data sources, tools, and workflows. Think of it like a USB-C port for AI: a standardised way to connect AI applications to external systems such as file systems, databases, APIs, and more.

Which platforms support MCP?

MCP is supported by major AI coding platforms including OpenCode, Cursor, Claude Desktop, Windsurf, Continue, and Zed. Each platform has its own configuration file format and setup process, but they all use the same underlying MCP protocol, allowing servers built for one platform to work with others.

What is the difference between local and remote MCP servers?

Local MCP servers run on your machine using stdio (standard input/output) transport and are launched as processes by the AI client. Remote MCP servers are hosted in the cloud and communicate via HTTP, SSE (Server-Sent Events), or Streamable HTTP protocols. Local servers offer better privacy and lower latency, while remote servers provide easier setup and can be shared across teams.

How do I troubleshoot MCP server issues?

Common troubleshooting steps include: verifying your configuration file syntax is valid JSON/YAML, ensuring the MCP server package is installed correctly, checking that required environment variables (like API keys) are set, restarting your AI client after configuration changes, and reviewing logs for error messages. Most platforms show server status in their interface.

Is MCP secure?

MCP can be secure when used properly. Best practices include: only installing servers from trusted sources, using environment variables for API keys instead of hardcoding them, granting minimal required permissions, reviewing server source code for critical integrations, and limiting file system access to specific directories. MCP servers run with your user permissions, so be cautious about which servers you enable.

Can I build my own MCP server?

Yes, you can build custom MCP servers using the official SDKs available in TypeScript and Python. The TypeScript SDK (@modelcontextprotocol/sdk) and Python SDK (mcp) provide all the tools needed to create servers that expose tools, resources, and prompts. This allows you to integrate proprietary systems, custom databases, or specialised workflows with any MCP-compatible AI client.
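For production servers you would use one of the official SDKs, but the core dispatch logic a server performs is small enough to sketch without them. The snippet below is an SDK-free illustration of handling a `tools/call` request; the `add` tool is a hypothetical example, while the `content` result shape and the -32601 "method not found" error code follow the MCP and JSON-RPC specifications.

```python
import json

# Registry of tool functions this hypothetical server exposes.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],  # example tool, not real
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request and build the matching response.
    Illustrative only; the official SDKs handle framing, schema
    validation, and the initialize handshake for you."""
    if request["method"] == "tools/call":
        params = request["params"]
        result = TOOLS[params["name"]](params["arguments"])
        return {
            "jsonrpc": "2.0",
            "id": request["id"],
            # MCP tool results are returned as a list of content blocks.
            "result": {"content": [{"type": "text", "text": str(result)}]},
        }
    # Unknown method: standard JSON-RPC "method not found" error.
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "error": {"code": -32601, "message": "Method not found"},
    }

req = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
}
print(json.dumps(handle(req)))
```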

Ayodele Ajayi

Senior DevOps Engineer based in Kent, UK. Specialising in cloud infrastructure, DevSecOps, and platform engineering. Passionate about AI-assisted development and sharing knowledge through technical writing.