As a knowledge-building exercise, I created a custom MCP server, tested it successfully, then promptly forgot about it.
Weeks later, while working on a WordPress theme, I mentioned to Claude: “I need placeholder images for the product categories – electronics, clothing, furniture, lifestyle products.”
Instead of directing me to stock photo sites, Claude responded: “I’ll generate placeholder images that match your theme requirements.”
Within minutes, I had professional product mockups: wireless headphones on white backgrounds, elegant wooden furniture with studio lighting, clothing displayed with perfect composition. Each image was sized correctly for my theme and styled consistently.
Then I realised what had happened. Claude had proactively used my forgotten MCP server to connect with Segmind’s AI generation models. It understood the context, recognised the appropriate tool, and solved my problem without any explicit instruction from me.
That moment reminded me why MCP servers are so powerful — they transform Claude from a conversational assistant into an active problem-solver that can work with real tools on your behalf.
Here’s how I built that server, and how you can use the same concepts to create MCP servers for your own APIs and workflows.
What you’ll build
By the end of this guide, you’ll understand how to build MCP servers through a practical example:
- A working weather server that accepts city names and returns current conditions
- The architectural concepts behind more complex servers like my Segmind integration
What you’ll learn
- The three core concepts that make MCP servers work
- How to structure APIs for natural language interaction
- Why schema-driven design enables AI to understand your tools
- How to handle multiple AI models in a single server
Why Segmind works perfectly for MCP integration
Before diving into the code, let’s understand why I chose Segmind for this integration.
Most AI generation requires juggling multiple services: DALL-E for images, RunwayML for videos, ElevenLabs for speech. Each has different APIs, pricing models, and authentication methods.
Segmind consolidates this chaos. One API provides access to:
- Image generation (FLUX, SDXL, Stable Diffusion)
- Video creation (Veo 3, Kling AI, Seedance)
- Music composition (Lyria 2, MiniMax Music)
- Text-to-speech (Dia TTS, Orpheus TTS)
For MCP servers, this means one integration gives Claude access to dozens of AI models through consistent interfaces. Perfect for demonstrating advanced server concepts.
Building your first MCP server: weather data
Let’s start with something simple to understand the core concepts. We’ll build a weather server that accepts city names and returns current conditions.
The three core MCP concepts
Every MCP server revolves around three concepts:
Tools perform actions. They’re like API endpoints that accept parameters and return results. In our weather server, `get_weather` is a tool.
Resources provide data. They’re read-only information sources that AI models can query. Think database records or file contents.
Schemas describe what each tool needs and does. They enable AI models to understand when and how to use your tools without hardcoded instructions.
MCP servers provide schema-driven bridges between Claude AI and external APIs, enabling context-aware tool selection and automatic parameter mapping
Step 1: Set up the project
We’ll use Open-Meteo’s free APIs for both weather data and geocoding. No API keys required.
```bash
mkdir weather-mcp-server
cd weather-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk axios zod
npm install -D typescript @types/node
```
Update `package.json` to enable ES modules so it looks like this:
```json
{
  "name": "weather-mcp-server",
  "version": "1.0.0",
  "description": "",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node build/index.js"
  },
  "main": "index.js",
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.17.1",
    "axios": "^1.11.0",
    "zod": "^3.25.76"
  },
  "devDependencies": {
    "@types/node": "^24.2.0",
    "typescript": "^5.9.2"
  }
}
```
Create `tsconfig.json` for TypeScript configuration:
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  }
}
```
Step 2: Create the server foundation
Create `src/index.ts` and start with the basic server structure:
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import axios from 'axios';
import { z } from 'zod';

const server = new McpServer({
  name: 'weather-server',
  version: '1.0.0'
});
```
Step 3: Add geocoding functionality
This helper function converts city names to coordinates using Open-Meteo’s geocoding API. Add it to `index.ts`:
```typescript
async function getCoordinates(location: string) {
  const response = await axios.get(
    'https://geocoding-api.open-meteo.com/v1/search',
    { params: { name: location, count: 1 } }
  );

  if (response.data.results?.length > 0) {
    const result = response.data.results[0];
    return {
      latitude: result.latitude,
      longitude: result.longitude,
      displayName: `${result.name}, ${result.country}`
    };
  }

  throw new Error(`Location "${location}" not found`);
}
```
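For reference, a successful geocoding response looks roughly like this, trimmed to the fields the helper actually reads (the real payload includes additional metadata):

```json
{
  "results": [
    {
      "name": "London",
      "latitude": 51.50853,
      "longitude": -0.12574,
      "country": "United Kingdom"
    }
  ]
}
```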
Step 4: Define the weather tool
Now we create the tool that Claude will use. Notice how the schema describes exactly what the tool expects. Add this to `index.ts`:
```typescript
server.tool(
  'get_weather',
  {
    location: z.string().describe('City name, optionally with country (e.g., "London", "Berlin, Germany")')
  },
  async ({ location }) => {
    try {
      const coords = await getCoordinates(location);
      const response = await axios.get(
        'https://api.open-meteo.com/v1/forecast',
        {
          params: {
            latitude: coords.latitude,
            longitude: coords.longitude,
            current: 'temperature_2m,relative_humidity_2m,windspeed_10m',
            timezone: 'auto'
          }
        }
      );

      const current = response.data.current;
      return {
        content: [{
          type: 'text',
          text: `Weather in ${coords.displayName}: ${current.temperature_2m}°C, ${current.relative_humidity_2m}% humidity, Wind: ${current.windspeed_10m} km/h`
        }]
      };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: [{
          type: 'text',
          text: `Error: ${message}`
        }]
      };
    }
  }
);
```
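The forecast call returns the requested variables under a `current` key, roughly like this (abbreviated; the real response also includes units and location metadata):

```json
{
  "current": {
    "time": "2025-01-15T12:00",
    "temperature_2m": 7.3,
    "relative_humidity_2m": 81,
    "windspeed_10m": 14.2
  }
}
```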
Step 5: Connect the transport
Finally, connect the server to its transport mechanism. Add this to the bottom of `index.ts`:
```typescript
const transport = new StdioServerTransport();
await server.connect(transport);
```
Step 6: Connect to Claude Desktop
Compile your TypeScript with `npm run build` in the project directory, then add this configuration to `claude_desktop_config.json`:
Configuration file locations:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "weather-server": {
      "command": "node",
      "args": ["./build/index.js"],
      "cwd": "/path/to/your/weather-mcp-server"
    }
  }
}
```
Restart Claude Desktop. You’ll see an MCP server indicator in the chat interface.
Test it: “What’s the weather in London?” Claude will use your server to get coordinates, fetch weather data, and present the results conversationally.
Why this approach works
This simple example demonstrates the key principles that make MCP servers powerful:
Schema-driven design: The `z.string().describe()` pattern tells Claude exactly what the tool expects. AI models can read these descriptions and use tools appropriately without hardcoded instructions.
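Under the hood, the SDK converts that zod shape into a JSON Schema that Claude reads when deciding whether and how to call the tool. For the weather tool, it comes out roughly like this:

```json
{
  "type": "object",
  "properties": {
    "location": {
      "type": "string",
      "description": "City name, optionally with country (e.g., \"London\", \"Berlin, Germany\")"
    }
  },
  "required": ["location"]
}
```

This is why the description string matters so much: it is the only documentation the model ever sees.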
Transport abstraction: The same server logic works with different transport methods — stdio for local development, HTTP for remote deployment. You write the logic once.
Error isolation: Each tool handles its own errors. If geocoding fails, weather fetching still works. If the weather API is down, other tools remain functional.
Scaling up: the Segmind MCP server
The weather server teaches the fundamentals. The Segmind NPM library shows how these concepts scale to handle multiple AI models and complex operations.
Multiple tools, single server
Instead of one `get_weather` tool, the Segmind server exposes several:

- `generate_image` – Creates images from text descriptions
- `create_video` – Generates short video clips
- `compose_music` – Creates background music tracks
- `text_to_speech` – Converts text to natural-sounding voice
Each tool maintains the same structure as our weather example: schema definition, parameter validation, API calls, and error handling.
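In sketch form, that shared structure can funnel every tool through one request builder. The endpoint slugs and the `x-api-key` header below are illustrative assumptions about Segmind’s API shape, not code from the real server:

```typescript
// Hypothetical mapping from MCP tool names to Segmind model endpoints.
// The slugs are placeholders, not verified endpoint names.
const MODEL_ENDPOINTS: Record<string, string> = {
  generate_image: 'flux-schnell',
  create_video: 'kling-ai',
  compose_music: 'lyria-2',
  text_to_speech: 'orpheus-tts',
};

// Build one consistent request shape for any tool. Keeping this in a
// single helper means authentication lives at the server level only.
function buildSegmindRequest(
  tool: string,
  params: Record<string, unknown>,
  apiKey: string
) {
  const slug = MODEL_ENDPOINTS[tool];
  if (!slug) throw new Error(`Unknown tool: ${tool}`);
  return {
    url: `https://api.segmind.com/v1/${slug}`,
    headers: { 'x-api-key': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify(params),
  };
}
```

The design point is that adding a fifth model is a one-line change to the map, not a new integration.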
Handling API complexity
The Segmind server adds several production concerns:
Authentication management: API keys are handled at the server level, never exposed to Claude or users.
Credit tracking: The server monitors Segmind’s credit-based pricing and provides clear error messages when credits run low.
Timeout handling: AI generation can take minutes. The server includes appropriate timeouts and progress feedback.
File processing: Generated content is returned as base64 data or URLs, depending on the content type and size.
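The timeout concern can be sketched generically with `Promise.race`; this is a minimal illustration of the pattern, not the actual library code:

```typescript
// Race a long-running generation call against a timer so the MCP tool
// returns a clear error instead of hanging indefinitely.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms} ms`)),
      ms
    );
  });
  // Clear the timer either way so the process can exit cleanly.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Wrapping each upstream call this way means a stalled generation surfaces as a normal tool error that Claude can relay to the user.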
Back to that WordPress moment
Remember the product images Claude generated automatically? Here’s what actually happened:
When I mentioned needing placeholder images, Claude analysed the context: WooCommerce theme, product categories, professional appearance. It understood that the Segmind server’s `generate_image` tool could solve this problem.
Claude crafted specific prompts: “modern wireless headphones on white background, product photography style” for electronics. “elegant wooden dining table, studio lighting, minimalist composition” for furniture.
Each image came back properly sized and consistently styled because the server’s schema included parameters for dimensions and style preferences.
I didn’t write prompts. I didn’t visit external sites. I didn’t manage file uploads. Claude handled the entire workflow through natural conversation.
This is what MCP servers enable: AI that doesn’t just chat, but acts on your behalf using real tools.
Practical applications beyond content generation
The same patterns apply to any API integration:
Database operations: Natural language queries that generate SQL, execute safely, and return formatted results.
CRM automation: “Create a follow-up task for leads from yesterday’s webinar” becomes automated contact management.
Development workflows: “Deploy the staging branch and run integration tests” executes complex CI/CD pipelines.
Business intelligence: “Show me conversion rates by traffic source this month” queries analytics APIs and generates reports.
Security considerations for production
Production MCP servers need careful security design:
Credential isolation: API keys and secrets stay at the server level. Claude never sees sensitive authentication data.
Permission boundaries: Tools should implement least-privilege access. A weather tool doesn’t need database write permissions.
Rate limiting: Prevent abuse with both API-level and tool-level rate limits. Monitor usage patterns for anomalies.
Audit trails: Log all tool usage for compliance and debugging. Include user context, parameters, and outcomes.
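A per-tool limiter can be as small as a token bucket. This sketch is generic and not tied to any particular server; the clock is injectable so the behaviour is easy to test:

```typescript
// Simple token bucket: `capacity` calls allowed per `refillMs` window.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillMs: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the call is allowed; `now` is injectable for testing.
  tryRemove(now: number = Date.now()): boolean {
    if (now - this.lastRefill >= this.refillMs) {
      this.tokens = this.capacity; // full refill each window
      this.lastRefill = now;
    }
    if (this.tokens > 0) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A production server would typically layer this per tool and per caller, and log rejections for the audit trail.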
Three key lessons learned
Building these MCP servers taught me three important lessons:
1. Schema design is everything
The quality of your tool descriptions directly impacts how effectively Claude can use them. Specific, clear schemas enable sophisticated automation. Vague descriptions lead to confused interactions.
2. Natural language APIs change user expectations
When Claude can generate images through conversation, users stop thinking about “API calls” and start thinking about “getting things done.” This shift requires rethinking how we design integrations.
3. Context awareness amplifies value
The WooCommerce example worked because Claude understood the context – theme development, product categories, professional requirements. MCP servers that leverage context create better user experiences than those that don’t.
Next steps: build your own
If you want to try building MCP servers:
Start simple: Build the weather server first. Understand the core concepts before tackling complex integrations.
Pick relevant APIs: Choose services you actually use. Internal tools, frequently accessed APIs, or workflow-automation candidates work best.
Focus on schemas: Spend time crafting clear tool descriptions. This investment pays dividends in usability.
Test conversationally: Don’t just test API calls – test natural language interactions. Ask Claude to use your tools the way real users would.
The complete Segmind MCP server code is available on NPM with full documentation and examples.
MCP servers represent a fundamental shift in how we interact with APIs. Instead of learning interfaces, we describe what we want. Instead of manual integration, we get intelligent automation.
The question isn’t whether this approach will become standard – it’s how quickly you’ll adopt it.