# MCP Server Project

A collection of Model Context Protocol (MCP) server implementations and utilities for mathematical operations and LLM interactions.

## 📦 Modules

### 1. Math MCP Server (`mcp_math_server.py`)

A Model Context Protocol server that provides mathematical operations through a standardized interface.

### 2. LLM Service Module (`mcp_llm_server.py`)

A utility module for interacting with local LLM servers via OpenAI-compatible APIs.
## 🚀 Features

### Math MCP Server

- **Addition Tool**: Sum any list of numbers
- **Multiplication Tool**: Multiply any list of numbers
- **Comprehensive Error Handling**: Validates inputs for type safety and edge cases
- **FastMCP Framework**: Built on the efficient FastMCP server framework
- **Stdio Transport**: Uses standard input/output for client-server communication

### LLM Service Module

- **Chat Completion**: Full-featured LLM API interactions with complete control
- **Simple Chat**: Quick interface for single prompts with optional system messages
- **Conversational Chat**: Multi-turn dialogue with conversation history management
- **OpenAI-Compatible**: Works with any OpenAI-compatible local LLM server
- **Flexible Configuration**: Customizable model, temperature, max tokens, and streaming
## 📋 Requirements

- Python 3.8+
- Required packages:
  - `mcp` - Model Context Protocol library
  - `fastmcp` - Fast MCP server implementation
  - `requests` - HTTP library for LLM API communication
## 🔧 Installation

1. Clone the repository:

```bash
git clone https://github.com/y-nihat/mcp.git
cd mcp
```

2. Install dependencies:

```bash
pip install -r requirements.txt
```
## 🏃 Usage

### Using the Math MCP Server

The server runs as a subprocess and communicates via stdio:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["mcp_math_server.py"],
    env=None
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Call the add tool
            result = await session.call_tool("add", {"numbers": [10, 20, 30]})
            print(result.content[0].text)  # Output: 60.0

            # Call the multiply tool
            result = await session.call_tool("multiply", {"numbers": [7, 8]})
            print(result.content[0].text)  # Output: 56.0

asyncio.run(main())
```
### Using the LLM Service Module

The LLM service provides utility functions to interact with a local LLM server.

#### Prerequisites

Ensure you have a local LLM server running at `http://localhost:1234` with an OpenAI-compatible API (e.g., LM Studio, LocalAI, or Ollama with OpenAI compatibility).
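To verify the server is reachable before running the examples, you can query the models endpoint. A minimal sketch, assuming your server exposes the standard OpenAI-compatible `GET /v1/models` route (LM Studio and most compatible servers do):

```python
import requests

# Quick reachability check against the local LLM server.
# Assumes the standard OpenAI-compatible GET /v1/models endpoint.
resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])  # List available model IDs
```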
#### Basic Usage

```python
from mcp_llm_server import simple_chat, chat_completion, conversational_chat

# Simple single-prompt chat
response = simple_chat(
    prompt="What is Python?",
    system_message="You are a helpful programming tutor."
)
print(response)

# Full control with chat_completion
response = chat_completion(
    messages=[
        {"role": "system", "content": "Always answer in rhymes. Today is Thursday"},
        {"role": "user", "content": "What day is it today?"}
    ],
    model="qwen/qwen3-4b-2507",
    temperature=0.7,
    max_tokens=-1
)
print(response)

# Multi-turn conversation
history = []
result = conversational_chat(
    conversation_history=history,
    new_message="Hello! How are you?"
)
print(result["response"])
history = result["updated_history"]

# Continue the conversation
result = conversational_chat(
    conversation_history=history,
    new_message="Tell me a joke"
)
print(result["response"])
```
#### Testing the LLM Module

Run the module directly to test with the built-in example:

```bash
python mcp_llm_server.py
```
## Available Tools

### Math MCP Server Tools

#### `add(numbers: List[float]) -> float`

Sums all numbers in the provided list.

**Parameters:**

- `numbers`: List of numeric values (int or float)

**Returns:**

- Sum of all numbers as float

**Example:**

```python
result = await session.call_tool("add", {"numbers": [1, 2, 3, 4, 5]})
# Returns: 15.0
```

#### `multiply(numbers: List[float]) -> float`

Multiplies all numbers in the provided list.

**Parameters:**

- `numbers`: List of numeric values (int or float)

**Returns:**

- Product of all numbers as float

**Example:**

```python
result = await session.call_tool("multiply", {"numbers": [2, 3, 4]})
# Returns: 24.0
```
### LLM Service Module Functions

#### `chat_completion(messages, model=None, temperature=0.7, max_tokens=-1, stream=False) -> str`

Send a chat completion request to the local LLM server with full control over all parameters.

**Parameters:**

- `messages`: List of message dictionaries with 'role' and 'content' keys
- `model`: The model to use (default: `"qwen/qwen3-4b-2507"`)
- `temperature`: Controls randomness, 0.0 to 1.0 (default: 0.7)
- `max_tokens`: Maximum tokens to generate, -1 for unlimited (default: -1)
- `stream`: Whether to stream the response (default: False)

**Returns:**

- The assistant's response as a string

**Example:**

```python
response = chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Python in one sentence."}
    ],
    temperature=0.5
)
```

#### `simple_chat(prompt, system_message=None) -> str`

Simple chat interface for quick single-prompt interactions.

**Parameters:**

- `prompt`: The user's message/question
- `system_message`: Optional system message to set context or behavior

**Returns:**

- The assistant's response as a string

**Example:**

```python
response = simple_chat(
    prompt="What is machine learning?",
    system_message="You are an expert data scientist."
)
```

#### `conversational_chat(conversation_history, new_message) -> Dict`

Continue a conversation with context from previous messages.

**Parameters:**

- `conversation_history`: List of previous message dictionaries
- `new_message`: New user message to add to the conversation

**Returns:**

- Dictionary with 'response' (str) and 'updated_history' (list)

**Example:**

```python
history = []
result = conversational_chat(history, "Hello!")
print(result["response"])

history = result["updated_history"]
result = conversational_chat(history, "Tell me more")
print(result["response"])
```
## 🛡️ Error Handling

### Math MCP Server Validation

The Math MCP server includes comprehensive validation for all inputs:

#### Validation Checks

- ✅ **Type Validation**: Ensures input is a list
- ✅ **Non-Empty List**: Rejects empty lists
- ✅ **Numeric Values**: Validates all elements are int or float
- ✅ **NaN Detection**: Catches Not-a-Number values
- ✅ **Infinity Detection**: Catches infinite values (±inf)
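A minimal sketch of what these checks can look like (a hypothetical `validate_numbers` helper, reconstructed from the error messages shown below; the actual implementation in `mcp_math_server.py` may differ in detail):

```python
import math
from typing import List

def validate_numbers(numbers: List[float]) -> None:
    """Hypothetical helper illustrating the validation checks above."""
    if not isinstance(numbers, list):
        raise TypeError(f"Input must be a list, got {type(numbers).__name__}")
    if not numbers:
        raise ValueError("Cannot operate on an empty list of numbers")
    for i, value in enumerate(numbers):
        if not isinstance(value, (int, float)):
            raise TypeError(
                f"Element at index {i} must be a number, "
                f"got {type(value).__name__}: {value}"
            )
        if math.isnan(value):
            raise ValueError(f"Element at index {i} is NaN (Not a Number)")
        if math.isinf(value):
            raise ValueError(f"Element at index {i} is infinite")
```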
#### Math Server Error Types

**ValueError** - Raised for:

- Empty lists
- NaN values
- Infinite values

**TypeError** - Raised for:

- Non-list inputs
- Non-numeric elements in the list

#### Math Server Error Examples

```python
# Empty list
await session.call_tool("add", {"numbers": []})
# Returns error: "Cannot add an empty list of numbers"

# Non-numeric value
await session.call_tool("add", {"numbers": [1, "two", 3]})
# Returns error: "Element at index 1 must be a number, got str: two"

# NaN value
await session.call_tool("add", {"numbers": [1, float('nan'), 3]})
# Returns error: "Element at index 1 is NaN (Not a Number)"

# Infinite value
await session.call_tool("multiply", {"numbers": [5, float('inf')]})
# Returns error: "Element at index 1 is infinite"
```
### LLM Service Module Validation

The LLM service includes validation for chat messages and API communication:

#### LLM Validation Checks

- ✅ **Message List Validation**: Ensures messages is a non-empty list
- ✅ **Message Format**: Validates each message has 'role' and 'content' keys
- ✅ **Role Validation**: Ensures role is one of: system, user, assistant
- ✅ **API Communication**: Handles HTTP errors and timeouts (60s timeout)
- ✅ **Response Format**: Validates LLM API response structure
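A minimal sketch of the message checks (a hypothetical `validate_messages` helper, reconstructed from the error messages shown below; the actual module may differ):

```python
VALID_ROLES = {"system", "user", "assistant"}

def validate_messages(messages) -> None:
    """Hypothetical helper illustrating the message-format checks above."""
    if not isinstance(messages, list) or not messages:
        raise ValueError("messages must be a non-empty list")
    for i, message in enumerate(messages):
        if not isinstance(message, dict) or not {"role", "content"} <= message.keys():
            raise ValueError(f"Message at index {i} must have 'role' and 'content' keys")
        if message["role"] not in VALID_ROLES:
            raise ValueError(f"Invalid role '{message['role']}' at index {i}")
```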
#### LLM Service Error Types

**ValueError** - Raised for:

- Empty message lists
- Malformed message dictionaries
- Invalid role types
- Unexpected API response format

**requests.RequestException** - Raised for:

- Network communication errors
- HTTP errors (4xx, 5xx status codes)
- Request timeouts

#### LLM Service Error Examples

```python
# Empty messages list
chat_completion([])
# Raises: "messages must be a non-empty list"

# Invalid role
chat_completion([{"role": "invalid", "content": "Hello"}])
# Raises: "Invalid role 'invalid' at index 0"

# Missing content key
chat_completion([{"role": "user"}])
# Raises: "Message at index 0 must have 'role' and 'content' keys"

# LLM server not running
simple_chat("Hello")
# Raises: requests.RequestException with connection error details
```
## 🧪 Testing

### Math MCP Server Tests

The project includes comprehensive test coverage with 20 tests:

- **12 Functional Tests**: Verify correct mathematical operations
- **8 Error Handling Tests**: Validate error detection and reporting

#### Running Tests

```bash
python test_math_mcp_server.py
```

#### Test Coverage

**Functional Tests:**

- Basic addition and multiplication
- Decimal number operations
- Negative number handling
- Single number operations
- Zero operations
- Large number calculations
- Sequential operations (chaining)

**Error Handling Tests:**

- Empty list validation
- Non-numeric element detection
- NaN value detection
- Infinity detection (positive and negative)
- Mixed invalid values
- Type error reporting with index information
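For orientation, a functional test in this style might look like the following sketch (hypothetical; the actual suite in `test_math_mcp_server.py` may be structured differently):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def test_add_decimals():
    """Hypothetical functional test: verify add sums decimal numbers."""
    params = StdioServerParameters(command="python", args=["mcp_math_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("add", {"numbers": [1.5, 2.5]})
            assert result.content[0].text == "4.0"

asyncio.run(test_add_decimals())
```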
## 📁 Project Structure

```
mcp/
├── mcp_math_server.py        # Math MCP server implementation
├── mcp_llm_server.py         # LLM service utility module
├── test_math_mcp_server.py   # Test suite for math server
├── requirements.txt          # Python dependencies
├── README.md                 # This file
├── CHANGELOG.md              # Version history and changes
└── .gitignore                # Git ignore rules
```
## 🔄 Architecture

### Math MCP Server Architecture

The Math MCP server uses a client-server architecture with stdio transport:

```
┌─────────────┐                      ┌──────────────────┐
│   Client    │                      │  Math MCP Server │
│             │                      │                  │
│ - Session   │  ──── stdio ──────►  │ - add()          │
│ - call_tool │  ◄─── stdio ───────  │ - multiply()     │
│             │                      │                  │
└─────────────┘                      └──────────────────┘
```

1. Client spawns server as subprocess
2. Communication via standard input/output
3. Client sends tool call requests
4. Server validates inputs and executes operations
5. Server returns results or errors
6. Client processes responses
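For reference, the server side of this diagram can be expressed compactly with FastMCP. A minimal sketch, assuming the `@mcp.tool()` decorator API (depending on the installed package, the import may instead be `from mcp.server.fastmcp import FastMCP`) and omitting the input validation described under Error Handling:

```python
from fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(numbers: list[float]) -> float:
    """Sum all numbers in the provided list."""
    return float(sum(numbers))

@mcp.tool()
def multiply(numbers: list[float]) -> float:
    """Multiply all numbers in the provided list."""
    product = 1.0
    for n in numbers:
        product *= n
    return product

if __name__ == "__main__":
    mcp.run(transport="stdio")  # Serve over standard input/output
```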
### LLM Service Module Architecture

The LLM service module provides utility functions for HTTP-based communication with local LLM servers:

```
┌─────────────────────┐                     ┌──────────────────────┐
│   Python Script     │                     │   Local LLM Server   │
│                     │                     │   (localhost:1234)   │
│ - simple_chat()     │  ──── HTTP POST ──► │                      │
│ - chat_completion() │  ◄─────── JSON ──── │ - Chat Completions   │
│ - conversational()  │                     │ - OpenAI-compatible  │
│                     │                     │                      │
└─────────────────────┘                     └──────────────────────┘
```

1. Import functions from `mcp_llm_server`
2. Call functions with messages and parameters
3. Module sends HTTP POST to LLM API endpoint
4. LLM server processes request and generates response
5. Module validates and extracts response content
6. Returns assistant's message to caller
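Under the hood, a call like `simple_chat("Hello")` reduces to a single HTTP POST. A minimal sketch of the wire format, assuming the standard OpenAI-compatible `/v1/chat/completions` route and the defaults documented above:

```python
import requests

payload = {
    "model": "qwen/qwen3-4b-2507",       # Default model noted above
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "max_tokens": -1,                     # -1 = unlimited
    "stream": False,
}
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json=payload,
    timeout=60,  # Matches the 60s timeout noted under Error Handling
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```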
## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

### Development Guidelines

- Follow PEP 8 style guidelines
- Add tests for new features
- Update documentation for API changes
- Ensure all tests pass before submitting a PR
## 📝 License

This project is open source and available under the MIT License.

## 👤 Author

- **Nihat**
- Email: nihat@yinovasyon.com
- GitHub: [@y-nihat](https://github.com/y-nihat)

## 🙏 Acknowledgments

- Built with the FastMCP framework
- Uses the Model Context Protocol specification
- LLM integration via OpenAI-compatible API endpoints
- HTTP communication powered by the Requests library

## 📚 Additional Resources

- MCP Documentation
- FastMCP Documentation
- Python asyncio Documentation
- OpenAI API Reference
- Requests Documentation

---

**Version:** 1.1.0
**Last Updated:** November 7, 2025