Speelka Agent

Universal LLM agent based on the Model Context Protocol (MCP), with the ability to utilize tools from other MCP servers.

flowchart TB
    User["Any MCP Client"] --> |"1. Request"| Agent["Speelka Agent"]
    Agent --> |"2. Format prompt"| LLM["LLM Service"]
    LLM --> |"3. Tool calls"| Agent
    Agent --> |"4. Execute tools"| Tools["External MCP Tools"]
    Tools --> |"5. Return results"| Agent
    Agent --> |"6. Process repeat"| LLM
    Agent --> |"7. Final answer"| User

Key Features

  • Precise Agent Definition: Define detailed agent behavior through prompt engineering
  • Client-Side Context Optimization: Reduce context size on the client side for more efficient token usage
  • LLM Flexibility: Use different LLM providers between client and agent sides
  • Centralized Tool Management: Single point of control for all available tools
  • Multiple Integration Options: Support for MCP stdio, MCP HTTP, and Simple HTTP API
  • Built-in Reliability: Retry mechanisms for handling transient failures
  • Extensibility: System behavior extensions without client-side changes
  • MCP-Aware Logging: Structured logging with MCP notifications
  • Token Management: Automatic token counting and history compaction
  • Flexible Configuration: Support for environment variables, YAML, and JSON configuration files

Getting Started

Prerequisites

  • Go 1.19 or higher
  • LLM API credentials (OpenAI or Anthropic)
  • External MCP tools (optional)

Installation

git clone https://github.com/korchasa/speelka-agent-go.git
cd speelka-agent-go
go build ./cmd/server
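
Note that go build names the binary after the package directory, so the command above produces ./server. The run commands later in this README assume a binary named speelka-agent; to match them, set the output name explicitly:

# Build with an explicit output name so the run commands below work as written
go build -o speelka-agent ./cmd/server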

Configuration

Configuration can be provided using YAML, JSON, or environment variables.

Note: The ./examples directory is deprecated and will be removed in a future version. Please use the examples in the ./site/examples directory instead.

Example configuration files are available in the site/examples directory:

  • site/examples/simple.yaml: Basic agent configuration in YAML format (preferred)
  • site/examples/ai-news.yaml: AI news agent configuration in YAML format (preferred)
  • site/examples/simple.json: Basic agent configuration in JSON format
  • site/examples/simple.env: Basic agent configuration as environment variables

Here's a simple YAML configuration example:

agent:
  name: "simple-speelka-agent"
  version: "1.0.0"

  # Tool configuration
  tool:
    name: "process"
    description: "Process tool for handling user queries with LLM"
    argument_name: "input"
    argument_description: "The user query to process"

  # LLM configuration
  llm:
    provider: "openai"
    api_key: ""  # Set via environment variable instead for security
    model: "gpt-4o"
    temperature: 0.7
    prompt_template: "You are a helpful AI assistant. Respond to the following request: {{input}}. Provide a detailed and helpful response. Available tools: {{tools}}"

  # MCP Server connections
  connections:
    mcpServers:
      time:
        command: "docker"
        args: ["run", "-i", "--rm", "mcp/time"]

      filesystem:
        command: "mcp-filesystem-server"
        args: ["/path/to/directory"]

# Runtime configuration
runtime:
  log:
    level: "info"

  transports:
    stdio:
      enabled: true
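
A JSON configuration (as in site/examples/simple.json) follows the same structure. A minimal sketch, assuming the JSON schema mirrors the YAML keys one-to-one:

{
  "agent": {
    "name": "simple-speelka-agent",
    "version": "1.0.0",
    "tool": {
      "name": "process",
      "description": "Process tool for handling user queries with LLM",
      "argument_name": "input",
      "argument_description": "The user query to process"
    },
    "llm": {
      "provider": "openai",
      "model": "gpt-4o",
      "temperature": 0.7
    }
  },
  "runtime": {
    "log": { "level": "info" },
    "transports": { "stdio": { "enabled": true } }
  }
}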

Using Environment Variables

All environment variables are prefixed with SPL_:

| Environment Variable | Default Value | Description |
|----------------------|---------------|-------------|
| **Agent Configuration** | | |
| SPL_AGENT_NAME | Required | Name of the agent |
| SPL_AGENT_VERSION | "1.0.0" | Version of the agent |
| **Tool Configuration** | | |
| SPL_TOOL_NAME | Required | Name of the tool provided by the agent |
| SPL_TOOL_DESCRIPTION | Required | Description of the tool functionality |
| SPL_TOOL_ARGUMENT_NAME | Required | Name of the argument for the tool |
| SPL_TOOL_ARGUMENT_DESCRIPTION | Required | Description of the argument for the tool |
| **LLM Configuration** | | |
| SPL_LLM_PROVIDER | Required | Provider of LLM service (e.g., "openai", "anthropic") |
| SPL_LLM_API_KEY | Required | API key for the LLM provider |
| SPL_LLM_MODEL | Required | Model name (e.g., "gpt-4o", "claude-3-opus-20240229") |
| SPL_LLM_MAX_TOKENS | 0 | Maximum tokens to generate (0 means no limit) |
| SPL_LLM_TEMPERATURE | 0.7 | Temperature parameter for randomness in generation |
| SPL_LLM_PROMPT_TEMPLATE | Required | Template for system prompts (must include a placeholder matching the SPL_TOOL_ARGUMENT_NAME value and {{tools}}) |
| **Chat Configuration** | | |
| SPL_CHAT_MAX_ITERATIONS | 25 | Maximum number of LLM iterations |
| SPL_CHAT_MAX_TOKENS | 0 | Maximum tokens in chat history (0 means derived from the model) |
| SPL_CHAT_COMPACTION_STRATEGY | "delete-old" | Strategy for compacting chat history ("delete-old", "delete-middle") |
| **LLM Retry Configuration** | | |
| SPL_LLM_RETRY_MAX_RETRIES | 3 | Maximum number of retry attempts for LLM API calls |
| SPL_LLM_RETRY_INITIAL_BACKOFF | 1.0 | Initial backoff time in seconds |
| SPL_LLM_RETRY_MAX_BACKOFF | 30.0 | Maximum backoff time in seconds |
| SPL_LLM_RETRY_BACKOFF_MULTIPLIER | 2.0 | Multiplier for increasing backoff time |
| **MCP Servers Configuration** | | |
| SPL_MCPS_0_ID | "" | Identifier for the first MCP server |
| SPL_MCPS_0_COMMAND | "" | Command to execute for the first server |
| SPL_MCPS_0_ARGS | "" | Command arguments as a space-separated string |
| SPL_MCPS_0_ENV_* | "" | Environment variables for the server (prefix with SPL_MCPS_0_ENV_) |
| SPL_MCPS_1_ID, etc. | "" | Configuration for additional servers (increment the index) |
| **MCP Retry Configuration** | | |
| SPL_MSPS_RETRY_MAX_RETRIES | 3 | Maximum number of retry attempts for MCP server connections |
| SPL_MSPS_RETRY_INITIAL_BACKOFF | 1.0 | Initial backoff time in seconds |
| SPL_MSPS_RETRY_MAX_BACKOFF | 30.0 | Maximum backoff time in seconds |
| SPL_MSPS_RETRY_BACKOFF_MULTIPLIER | 2.0 | Multiplier for increasing backoff time |
| **Runtime Configuration** | | |
| SPL_LOG_LEVEL | "info" | Log level (debug, info, warn, error) |
| SPL_LOG_OUTPUT | "stderr" | Log output destination (stdout, stderr, or a file path) |
| SPL_RUNTIME_STDIO_ENABLED | true | Enable stdin/stdout transport |
| SPL_RUNTIME_STDIO_BUFFER_SIZE | 8192 | Buffer size for stdio transport |
| SPL_RUNTIME_HTTP_ENABLED | false | Enable HTTP transport |
| SPL_RUNTIME_HTTP_HOST | "localhost" | Host for HTTP server |
| SPL_RUNTIME_HTTP_PORT | 3000 | Port for HTTP server |
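
As a starting point, here is a minimal environment covering only the variables marked Required above (values are illustrative):

export SPL_AGENT_NAME="simple-speelka-agent"
export SPL_TOOL_NAME="process"
export SPL_TOOL_DESCRIPTION="Process tool for handling user queries with LLM"
export SPL_TOOL_ARGUMENT_NAME="input"
export SPL_TOOL_ARGUMENT_DESCRIPTION="The user query to process"
export SPL_LLM_PROVIDER="openai"
export SPL_LLM_API_KEY="sk-..."  # set from your secret store, not a config file
export SPL_LLM_MODEL="gpt-4o"
# Must reference {{input}} (the SPL_TOOL_ARGUMENT_NAME value) and {{tools}}
export SPL_LLM_PROMPT_TEMPLATE="You are a helpful AI assistant. Respond to the following request: {{input}}. Available tools: {{tools}}"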

For more detailed information about configuration options, see Environment Variables Reference.

Running the Agent

Daemon Mode (HTTP Server)

./speelka-agent --daemon [--config config.yaml]

CLI Mode (Standard Input/Output)

./speelka-agent [--config config.yaml]
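
In CLI mode the agent communicates over stdin/stdout using MCP's newline-delimited JSON-RPC framing. As a rough illustration only (a conforming MCP client first performs the initialize handshake, so a bare tools/call like this may be rejected):

echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "process", "arguments": {"input": "Your query here"}}}' | ./speelka-agent --config config.yaml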

Usage Examples

HTTP API

When running in daemon mode, the agent exposes HTTP endpoints:

# Send a request to the agent
curl -X POST http://localhost:3000/message -H "Content-Type: application/json" -d '{
  "method": "tools/call",
  "params": {
    "name": "process",
    "arguments": {
      "input": "Your query here"
    }
  }
}'
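
If the endpoint returns the standard MCP tools/call result shape, a successful response looks roughly like this (illustrative, not captured from a live server):

{
  "result": {
    "content": [
      { "type": "text", "text": "...the agent's final answer..." }
    ]
  }
}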

External Tool Integration

Connect to external tools using the MCP protocol in your YAML configuration:

agent:
  # ... other agent configuration ...
  connections:
    mcpServers:
      # MCP server for Playwright browser automation
      playwright:
        command: "mcp-playwright"
        args: []

      # MCP server for filesystem operations
      filesystem:
        command: "mcp-filesystem-server"
        args: ["."]

Or using environment variables:

# MCP server for Playwright browser automation
export SPL_MCPS_0_ID="playwright"
export SPL_MCPS_0_COMMAND="mcp-playwright"
export SPL_MCPS_0_ARGS=""

# MCP server for filesystem operations
export SPL_MCPS_1_ID="filesystem"
export SPL_MCPS_1_COMMAND="mcp-filesystem-server"
export SPL_MCPS_1_ARGS="."
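
Per-server environment variables use the SPL_MCPS_<index>_ENV_ prefix from the table above. For example (API_TOKEN is an illustrative name, not one these servers require):

# Sets API_TOKEN in the playwright server's environment
export SPL_MCPS_0_ENV_API_TOKEN="your-token-here"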

Supported LLM Providers

  • OpenAI: GPT-3.5, GPT-4, GPT-4o
  • Anthropic: Claude models

Documentation

For more detailed information, see:

Development

Running Tests

go test ./...

Helper Commands

The run script provides commands for common operations:

# Development
./run build        # Build the project
./run test         # Run tests with coverage
./run check        # Run all checks
./run lint         # Run linter

# Interaction
./run call         # Test with simple query
./run call-multistep # Test with multi-step query
./run call-news    # Test news agent
./run fetch_url    # Fetch a URL using MCP

# Inspection
./run inspect      # Run with MCP inspector

See Command Reference for more options.

License

MIT License
