Table of Contents#
- What is Model Context Protocol?
- Quick Start: Build Your First MCP Server in 10 Minutes
- Prerequisites
- Why MCP Matters for Developers
- Core Architecture
- Complete MCP Tutorial
- Real-World Use Cases
- Best Practices
- Advanced Patterns
- Troubleshooting Common Issues
- MCP vs REST APIs vs Webhooks
- Performance Benchmarks
- Frequently Asked Questions
- Ecosystem and Tools
- Future of MCP
- Conclusion
What is Model Context Protocol?#
Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how Large Language Models (LLMs) connect to external data sources and tools. Think of it as the “USB standard” for AI applications—providing a universal way for AI assistants to interact with your databases, APIs, and development tools.
Launched in November 2024, MCP addresses a fundamental challenge in AI development: context integration. While LLMs have powerful reasoning capabilities, they operate in isolation from your data and systems. MCP bridges this gap by providing a structured way to expose your application’s context to AI models.
```mermaid
graph LR
    A[LLM Application] -->|MCP Protocol| B[MCP Server]
    B --> C[Database]
    B --> D[File System]
    B --> E[APIs]
    B --> F[Dev Tools]
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
```
Quick Start: Build Your First MCP Server in 10 Minutes#
Let’s build a simple MCP server that exposes a “Hello World” resource and a basic tool. This example will get you up and running quickly.
Step 1: Install MCP SDK#
```bash
# For Python
pip install mcp

# For TypeScript/Node.js
npm install @modelcontextprotocol/sdk
```
Step 2: Create Your First MCP Server (Python)#
Create a file named `hello_mcp.py`:
```python
import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Resource, TextContent, Tool

# Create server
server = Server("hello-world")

# Add a simple resource
@server.list_resources()
async def list_resources() -> list[Resource]:
    return [
        Resource(
            uri="hello://world",
            name="Hello World Resource",
            mimeType="text/plain",
        )
    ]

@server.read_resource()
async def read_resource(uri: str) -> str:
    if uri == "hello://world":
        return "Hello from MCP! 🎉"
    raise ValueError(f"Unknown resource: {uri}")

# Add a simple tool
@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="greet",
            description="Greet someone",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Name to greet"}
                },
                "required": ["name"],
            },
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "greet":
        return [TextContent(type="text", text=f"Hello, {arguments['name']}! 👋")]
    raise ValueError(f"Unknown tool: {name}")

# Run the server
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream, write_stream, server.create_initialization_options()
        )

if __name__ == "__main__":
    asyncio.run(main())
```
Step 3: Test Your Server#
```bash
# Run your server
python hello_mcp.py

# In another terminal, you can test it with the MCP Inspector
# or integrate it with Claude Desktop
```
Step 4: Configure Claude Desktop#
Add to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "hello-world": {
      "command": "python",
      "args": ["/path/to/hello_mcp.py"]
    }
  }
}
```
That’s it! You now have a working MCP server that Claude can connect to. 🚀
Prerequisites#
What You Need to Know#
Before diving into MCP development, you should have:
- Basic programming knowledge in Python or TypeScript/JavaScript
- Understanding of JSON for data exchange
- Familiarity with async programming (helpful but not required)
- Command line basics for running servers
System Requirements#
- Python 3.8+ or Node.js 16+
- Operating System: Windows, macOS, or Linux
- Text editor or IDE (VS Code recommended)
- Claude Desktop (optional, for testing integration)
Environment Setup#
Install Python or Node.js
```bash
# Check Python version
python --version  # Should be 3.8 or higher

# Check Node.js version
node --version  # Should be 16.0 or higher
```
Create a project directory
```bash
mkdir my-mcp-project
cd my-mcp-project
```
Set up virtual environment (Python)
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Initialize package.json (Node.js)
```bash
npm init -y
```
Why MCP Matters for Developers#
The Problem It Solves#
Before MCP, integrating LLMs with external systems required:
- Custom integration code for each data source
- Complex prompt engineering to include context
- Manual context window management
- Separate implementations for different AI providers
The MCP Solution#
MCP provides:
- Standardized interfaces for data access
- Reusable components across different LLM applications
- Secure context sharing with fine-grained permissions
- Tool invocation allowing LLMs to perform actions
Core Architecture#
MCP Components#
```mermaid
graph TB
    subgraph "MCP Host"
        A[Claude Desktop<br/>or IDE]
        B[MCP Client]
    end
    subgraph "MCP Servers"
        C[Database Server]
        D[Filesystem Server]
        E[Git Server]
        F[Custom Server]
    end
    A --> B
    B -.->|stdio/SSE| C
    B -.->|stdio/SSE| D
    B -.->|stdio/SSE| E
    B -.->|stdio/SSE| F
    style A fill:#f96,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
```
Key Concepts#
- MCP Hosts: Applications that want to access data through MCP (e.g., Claude Desktop, IDEs)
- MCP Clients: Protocol clients that maintain connections to servers
- MCP Servers: Lightweight programs that expose specific capabilities
- Resources: Data exposed by servers (files, database records, API responses)
- Tools: Functions that servers expose for LLMs to invoke
- Prompts: Reusable prompt templates provided by servers
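In protocol terms, each of these concepts maps onto a small family of JSON-RPC methods. The sketch below is illustrative only: the method grouping and the `make_request` helper are not part of any SDK, just a way to see what a client actually puts on the wire.

```python
import json

# Each MCP concept corresponds to a small set of JSON-RPC methods.
# This mapping is illustrative, not an exhaustive list.
concept_methods = {
    "Resources": ["resources/list", "resources/read"],
    "Tools": ["tools/list", "tools/call"],
    "Prompts": ["prompts/list", "prompts/get"],
}

def make_request(method: str, params: dict, req_id: int) -> str:
    """Build a JSON-RPC 2.0 request as a client would send it."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    )

print(make_request("resources/list", {}, 1))
```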
Complete MCP Tutorial#
How MCP Works#
Protocol Specification#
MCP uses JSON-RPC 2.0 over different transport layers:
An example MCP request:

```json
{
  "jsonrpc": "2.0",
  "method": "resources/list",
  "params": {},
  "id": 1
}
```

And the corresponding response:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "resources": [
      {
        "uri": "file:///project/README.md",
        "name": "Project README",
        "mimeType": "text/markdown"
      }
    ]
  },
  "id": 1
}
```
Transport Layers#
- stdio: Communication through standard input/output
- HTTP with SSE: Server-Sent Events for server-to-client messages
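Whichever transport is used, the payloads are the same JSON-RPC messages. Here is a rough Python sketch of stdio-style framing, assuming one newline-delimited JSON message per line; the helper names are made up for illustration, not taken from the SDK:

```python
import json
import sys

def send_message(msg: dict) -> None:
    # stdio transport sketch: one JSON-RPC message per line on stdout
    sys.stdout.write(json.dumps(msg) + "\n")
    sys.stdout.flush()

def read_message(line: str) -> dict:
    # Each incoming line on stdin is parsed as one JSON-RPC message
    return json.loads(line)

send_message({"jsonrpc": "2.0", "method": "resources/list", "params": {}, "id": 1})
```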
Core Protocol Methods#
```mermaid
sequenceDiagram
    participant Client
    participant Server
    Client->>Server: initialize()
    Server-->>Client: { name, version, capabilities }
    Client->>Server: resources/list()
    Server-->>Client: { resources: [...] }
    Client->>Server: resources/read(uri)
    Server-->>Client: { contents: [...] }
    Client->>Server: tools/list()
    Server-->>Client: { tools: [...] }
    Client->>Server: tools/call(name, arguments)
    Server-->>Client: { result: ... }
```
Building Your First MCP Server#
Python Implementation#
```python
import asyncio
import json

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Resource, TextContent, Tool

# Create an MCP server instance
server = Server("my-knowledge-base")

# Expose resources (data)
@server.list_resources()
async def list_resources() -> list[Resource]:
    return [
        Resource(
            uri="knowledge://docs/getting-started",
            name="Getting Started Guide",
            mimeType="text/markdown",
        )
    ]

@server.read_resource()
async def read_resource(uri: str) -> str:
    if uri == "knowledge://docs/getting-started":
        return "# Getting Started\n\nWelcome to our documentation..."
    raise ValueError(f"Unknown resource: {uri}")

# Expose tools (actions)
@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="search_docs",
            description="Search the documentation",
            inputSchema={
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "search_docs":
        query = arguments["query"]
        # Implement your search logic here
        results = search_documentation(query)
        return [TextContent(type="text", text=json.dumps(results))]
    raise ValueError(f"Unknown tool: {name}")

# Run the server
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream, write_stream, server.create_initialization_options()
        )

if __name__ == "__main__":
    asyncio.run(main())
```
TypeScript Implementation#
```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  {
    name: 'my-knowledge-base',
    version: '1.0.0',
  },
  {
    capabilities: {
      resources: {},
      tools: {},
    },
  }
);

// Handle resource listing
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [
      {
        uri: 'knowledge://docs/getting-started',
        name: 'Getting Started Guide',
        mimeType: 'text/markdown',
      },
    ],
  };
});

// Handle resource reading
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const { uri } = request.params;
  if (uri === 'knowledge://docs/getting-started') {
    return {
      contents: [
        {
          uri,
          mimeType: 'text/markdown',
          text: '# Getting Started\n\nWelcome to our documentation...',
        },
      ],
    };
  }
  throw new Error(`Unknown resource: ${uri}`);
});

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);
```
Real-World Use Cases#
1. Database Integration#
```python
# Expose database queries as MCP resources
@server.list_resources()
async def list_database_tables() -> list[Resource]:
    tables = await db.get_tables()
    return [
        Resource(
            uri=f"db://tables/{table.name}",
            name=f"Table: {table.name}",
            description=f"Schema and sample data for {table.name}",
        )
        for table in tables
    ]

@server.read_resource()
async def read_table_info(uri: str) -> str:
    table_name = uri.split("/")[-1]
    schema = await db.get_schema(table_name)
    sample_data = await db.get_sample_rows(table_name, limit=5)
    return f"Schema:\n{schema}\n\nSample data:\n{sample_data}"
```
2. Development Tool Integration#
```typescript
// Expose git operations as MCP tools
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  switch (name) {
    case 'git_status': {
      const status = await git.status();
      return {
        content: [{ type: 'text', text: formatGitStatus(status) }],
      };
    }
    case 'git_diff': {
      const diff = await git.diff(args.file);
      return {
        content: [{ type: 'text', text: diff }],
      };
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});
```
3. API Gateway#
```python
# Expose REST APIs through MCP
@server.call_tool()
async def call_api_endpoint(name: str, arguments: dict) -> list[TextContent]:
    if name == "rest_api_call":
        method = arguments.get("method", "GET")
        endpoint = arguments["endpoint"]
        params = arguments.get("params", {})

        response = await make_api_call(method, endpoint, params)
        return [TextContent(type="text", text=json.dumps(response, indent=2))]
    raise ValueError(f"Unknown tool: {name}")
```
Best Practices#
1. Security Considerations#
- Principle of Least Privilege: Only expose necessary resources
- Input Validation: Always validate tool inputs
- Authentication: Implement proper auth for sensitive operations
```python
@server.call_tool()
async def secure_tool_call(name: str, arguments: dict):
    # Validate inputs
    if not validate_arguments(name, arguments):
        raise ValueError("Invalid arguments")

    # Check permissions
    if not has_permission(context.user, name):
        raise PermissionError("Insufficient permissions")

    # Execute with rate limiting
    async with rate_limiter:
        return await execute_tool(name, arguments)
```
2. Performance Optimization#
- Lazy Loading: Only fetch data when requested
- Caching: Cache frequently accessed resources
- Pagination: Handle large datasets efficiently
```python
from typing import Optional

@server.list_resources()
async def list_resources_paginated(cursor: Optional[str] = None):
    # Implement pagination
    page_size = 100
    resources = await fetch_resources_page(cursor, page_size)
    return {
        "resources": resources,
        "nextCursor": resources[-1].id if len(resources) == page_size else None,
    }
```
3. Error Handling#
```python
from mcp.types import McpError, ErrorCode

@server.read_resource()
async def read_resource_safe(uri: str):
    try:
        return await fetch_resource(uri)
    except NotFoundException:
        raise McpError(
            ErrorCode.RESOURCE_NOT_FOUND,
            f"Resource not found: {uri}"
        )
    except PermissionError:
        raise McpError(
            ErrorCode.UNAUTHORIZED,
            "Access denied"
        )
```
Advanced Patterns#
Dynamic Resource Discovery#
```mermaid
stateDiagram-v2
    [*] --> Idle
    Idle --> Discovering: Client requests resources
    Discovering --> Building: Server discovers available resources
    Building --> Caching: Build resource metadata
    Caching --> Ready: Cache for performance
    Ready --> Serving: Return resource list
    Serving --> Idle: Complete
```
Bi-directional Communication#
```python
from datetime import datetime

# Server can send notifications to the client
@server.notification()
async def send_resource_update(uri: str, event: str):
    await server.send_notification(
        "notifications/resources/updated",
        {
            "uri": uri,
            "event": event,
            "timestamp": datetime.now().isoformat(),
        },
    )
```
Troubleshooting Common Issues#
Issue 1: Server Won’t Start#
Symptoms: Error when running `python mcp_server.py`
Solutions:
```bash
# Check Python version
python --version  # Must be 3.8+

# Verify MCP is installed
pip list | grep mcp

# Reinstall if needed
pip install --upgrade mcp
```
Issue 2: Claude Can’t Connect to Server#
Symptoms: “Failed to connect to MCP server” in Claude Desktop
Solutions:
Check configuration path:
```bash
# macOS
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Windows (cmd)
type %APPDATA%\Claude\claude_desktop_config.json
```
Verify server path is absolute:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```
Test server manually:
```bash
python /path/to/server.py < /dev/null
```
Issue 3: Resources Not Showing Up#
Symptoms: Server runs but resources aren’t visible in Claude
Solutions:
```python
# Ensure resources are returned as a list
@server.list_resources()
async def list_resources():
    return [  # Must return a list!
        Resource(
            uri="test://resource",
            name="Test Resource",
            mimeType="text/plain",
        )
    ]
```
Issue 4: Tool Calls Failing#
Symptoms: “Tool execution failed” errors
Debug steps:
```python
import logging

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    logging.debug(f"Tool called: {name} with args: {arguments}")
    try:
        # Your tool logic
        result = process_tool(name, arguments)
        logging.debug(f"Tool result: {result}")
        return [TextContent(type="text", text=result)]
    except Exception as e:
        logging.error(f"Tool error: {e}")
        raise
```
Common Pitfalls to Avoid#
- Forgetting async/await: All MCP handlers must be async
- Wrong URI format: Use proper URI schemes (e.g., `myapp://resource/id`)
- Missing error handling: Always handle exceptions gracefully
- Resource leaks: Close connections and clean up resources
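A handler that sidesteps all four pitfalls might look like the following sketch. `Connection` here is a hypothetical stand-in for a real database or API client, not an SDK class:

```python
import asyncio

class Connection:
    """Stand-in for a real database/API connection (hypothetical)."""
    async def fetch(self, uri: str) -> str:
        return f"data for {uri}"
    async def close(self) -> None:
        self.closed = True

# An MCP-style handler that avoids the pitfalls above: it is async,
# validates the URI scheme, surfaces errors, and always cleans up.
async def read_resource(uri: str) -> str:
    if not uri.startswith("myapp://"):
        raise ValueError(f"Unsupported URI scheme: {uri}")
    conn = Connection()
    try:
        return await conn.fetch(uri)
    finally:
        await conn.close()  # no resource leaks, even on error

print(asyncio.run(read_resource("myapp://resource/id")))
```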
MCP vs REST APIs vs Webhooks#
Comparison Table#
| Feature | MCP | REST APIs | Webhooks |
|---|---|---|---|
| Protocol | JSON-RPC 2.0 | HTTP/REST | HTTP POST |
| Communication | Bidirectional | Request-Response | Event-driven |
| State | Stateful connection | Stateless | Stateless |
| Real-time | Yes (SSE/stdio) | Polling required | Yes (push) |
| Resource Discovery | Built-in | OpenAPI/Swagger | N/A |
| Tool Invocation | Native support | Custom implementation | N/A |
| Authentication | Transport-level | Per-request | Signature verification |
| Best For | AI/LLM integration | General web services | Event notifications |
| Complexity | Medium | Low | Low |
| Standardization | MCP spec | REST conventions | Varies |
When to Use Each#
Use MCP when:
- Building AI agent integrations
- Need bidirectional communication
- Want standardized tool/resource exposure
- Integrating with Claude or other MCP-compatible systems
Use REST APIs when:
- Building traditional web services
- Need wide compatibility
- Want stateless operations
- Building public APIs
Use Webhooks when:
- Need event-driven notifications
- Building integrations between services
- Want real-time updates
- Implementing callbacks
Performance Benchmarks#
MCP Server Performance Metrics#
| Operation | Average Latency | Throughput | Memory Usage |
|---|---|---|---|
| Initialize | 50-100ms | N/A | 10-20MB |
| List Resources | 5-10ms | 1000 req/s | +0.5MB |
| Read Resource | 10-50ms | 500 req/s | +1-5MB |
| Tool Call | 20-200ms | 200 req/s | +2-10MB |
Optimization Tips#
Connection Pooling
```python
class OptimizedServer:
    def __init__(self):
        self.connection_pool = ConnectionPool(max_size=10)
        self.cache = LRUCache(maxsize=1000)
```
Resource Caching
```python
# Note: functools.lru_cache does not work with async functions
# (it would cache the coroutine object, not its result), so cache
# results in a plain dict instead
_resource_cache: dict[str, str] = {}

@server.read_resource()
async def read_resource(uri: str) -> str:
    # Cached responses for repeated reads
    if uri not in _resource_cache:
        _resource_cache[uri] = await fetch_resource(uri)
    return _resource_cache[uri]
```
Batch Operations
```python
@server.call_tool()
async def batch_process(name: str, arguments: dict):
    if name == "batch_query":
        tasks = [process_item(item) for item in arguments["items"]]
        results = await asyncio.gather(*tasks)
        return [TextContent(type="text", text=json.dumps(results))]
```
Frequently Asked Questions#
What is Model Context Protocol in simple terms?#
MCP is like a USB port for AI. Just as USB lets you connect any device to your computer, MCP lets AI assistants connect to any data source or tool using a standard interface.
How is MCP different from REST APIs?#
MCP is specifically designed for AI/LLM integration with:
- Built-in resource discovery
- Tool invocation support
- Bidirectional communication
- Stateful connections
REST APIs are general-purpose and stateless, requiring custom integration for AI use cases.
Do I need to know Python to use MCP?#
No! MCP has official SDKs for:
- Python: Most examples and community servers
- TypeScript/Node.js: Full feature parity
- More languages: Community SDKs available
Choose the language you’re most comfortable with.
What are the most common MCP use cases?#
- Database Access: Let AI query and analyze your data
- File System Integration: Give AI access to project files
- API Gateways: Connect AI to external services
- Development Tools: Integrate with Git, IDEs, CI/CD
- Knowledge Bases: Expose documentation and wikis
How do I debug MCP server issues?#
- Enable logging: Use Python's `logging` module
- Test manually: Run the server with test inputs
- Use MCP Inspector: Visual debugging tool
- Check Claude logs: Look for connection errors
- Validate JSON-RPC: Ensure proper request/response format
Can I use MCP with other LLMs besides Claude?#
Yes! MCP is an open protocol. While Claude has native support, you can:
- Build MCP clients for any LLM
- Use MCP servers with OpenAI, Gemini, etc.
- Create adapters for existing AI platforms
Is MCP secure for production use?#
MCP provides security through:
- Transport-level encryption (HTTPS/TLS)
- Authentication mechanisms
- Fine-grained permissions
- Input validation
Always follow security best practices for production deployments.
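Input validation in particular is cheap to add. Below is a minimal, hand-rolled check of tool arguments against a declared `inputSchema`; a production server should use a full JSON Schema validator, and `validate_arguments` here is purely illustrative:

```python
# Map JSON Schema type names to the Python types we accept
TYPE_MAP = {"string": str, "number": (int, float), "boolean": bool, "object": dict}

def validate_arguments(schema: dict, arguments: dict) -> None:
    """Enforce required keys and basic types from a tool's inputSchema."""
    for key in schema.get("required", []):
        if key not in arguments:
            raise ValueError(f"Missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in arguments:
            expected = TYPE_MAP.get(spec.get("type"))
            if expected and not isinstance(arguments[key], expected):
                raise ValueError(f"Argument {key!r} has wrong type")

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
}
validate_arguments(schema, {"name": "Ada"})  # passes silently
```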
How does MCP handle errors?#
MCP uses standard JSON-RPC error codes:
- `-32700`: Parse error
- `-32600`: Invalid request
- `-32601`: Method not found
- `-32602`: Invalid params
- `-32603`: Internal error
Custom errors can use codes from -32000 to -32099.
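An error reply built from these codes is just a JSON-RPC error object. A sketch, using a hypothetical `error_response` helper:

```python
import json

def error_response(req_id, code: int, message: str) -> str:
    # Shape follows the JSON-RPC 2.0 error object used by MCP
    return json.dumps({
        "jsonrpc": "2.0",
        "error": {"code": code, "message": message},
        "id": req_id,
    })

# Standard code for a method the server does not implement
print(error_response(1, -32601, "Method not found"))
```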
What’s the difference between Resources and Tools?#
Resources are data that can be read:
- Files, documents, database records
- Read-only operations
- Used for providing context
Tools are functions that perform actions:
- API calls, calculations, data modifications
- Can have side effects
- Used for executing tasks
Where can I find more MCP examples?#
The official SDKs, development tools, and community servers listed in the next section are the best starting points.
Ecosystem and Tools#
Official SDKs#
- Python: `pip install mcp`
- TypeScript: `npm install @modelcontextprotocol/sdk`
Development Tools#
- MCP Inspector: Debug and test MCP servers
- Claude Desktop: Native MCP integration
- VS Code Extension: MCP server development tools
Community Servers#
- Filesystem Server: Access local files
- PostgreSQL Server: Database integration
- GitHub Server: Repository access
- Slack Server: Team communication data
Future of MCP#
MCP is positioned to become the standard for AI-to-system integration. Key developments to watch:
- Standardization: Working towards industry-wide adoption
- Enhanced Security: OAuth, fine-grained permissions
- Performance: Streaming, compression, caching improvements
- Ecosystem Growth: More servers, tools, and integrations
Conclusion#
Model Context Protocol represents a paradigm shift in how we build AI applications. By providing a standardized way to connect LLMs with external systems, MCP enables developers to create more powerful, context-aware AI assistants without the complexity of custom integrations.
Whether you’re building a coding assistant that needs access to your codebase, a data analyst bot that queries databases, or a customer support AI that integrates with your CRM, MCP provides the foundation for seamless AI-to-system communication.
Start building with MCP today and join the growing ecosystem of developers creating the next generation of AI-powered applications.