
<!-- mcp-name: io.github.jcc-ne/mcp-skill-server -->

Most coding assistants now support skills natively, so an MCP server just for skill discovery isn't necessary. Where this package adds value is in making skill execution deterministic and deployable — with a fixed entry point and controlled execution, skills developed in your editor can run in non-sandboxed production environments. It also supports incremental loading, so agents discover skills on demand instead of loading everything upfront.


MCP Skill Server


Build agent skills where you work. Write a Python script, add a SKILL.md, and your agent can use it immediately. Iterate in real-time as part of your daily workflow. When it's ready, deploy the same skill to production — no rewrite needed.

Why?

Most skill development looks like this: write code → deploy → test in a staging agent → realize it's wrong → redeploy → repeat. It's slow, and you never get to actually use the skill while building it.

MCP Skill Server flips this. It runs on your machine, inside your editor — Claude Code, Cursor, or Claude Desktop. You develop a skill and use it in your real work at the same time. That tight feedback loop (edit → save → use) means you discover what's missing naturally, not through artificial test scenarios. The premise is that if the skill doesn't work well with Claude Code, it's unlikely to work with a less sophisticated agent.

How skills mature to survive in the outside world

Claude skills can already have companion scripts, but there's no formalized entry point — the agent decides how to invoke them. That works for local use, but it's not deployable: a production MCP server can't reliably call a skill if the execution path isn't fixed.

MCP Skill Server enforces a declared entry field in your SKILL.md frontmatter (e.g. entry: uv run python my_script.py). This gives you a single, fixed entry point that the server controls. Commands and parameters are discovered from the script's --help output — that's the source of truth, not the LLM's interpretation of your code.

1. Claude/coding agent skill   → SKILL.md + scripts, but no fixed entry — agent decides how to run them
2. Local MCP skill (+ entry)   → Fixed entry point, schema from --help, usable daily via this server
3. Production                  → Same skill, same entry — deployed to your enterprise MCP server

Sharpen locally, then harden for production

Every agent that connects to the MCP server gets the same interface — list_skills, get_skill, run_skill — so the skill's description, parameter names, and help text are identical regardless of which agent calls them. That said, different agents have different strengths — a skill that works locally still needs testing with your production agent.

  1. Use it yourself — build the skill, use it daily via Claude Code or Cursor. Fix descriptions and param names when the agent misuses the skill.
  2. Test with a weaker model — try a smaller model to surface interface ambiguity.
  3. Add a deterministic entry point — declare entry in SKILL.md for reliable, secure execution. Use skill init to scaffold it, skill validate to check readiness.
  4. Test with your production agent — verify end-to-end in your target environment, then deploy.

Install

Claude Desktop (one-click)

Install with Claude Desktop

After installing, edit the skills path in your Claude Desktop config to point to your skills directory.

Claude Code

claude mcp add skills -- uvx mcp-skill-server serve /path/to/my/skills

Cursor

Add to .cursor/mcp.json in your project (or Settings → MCP → Add Server):

{
  "mcpServers": {
    "skills": {
      "command": "uvx",
      "args": ["mcp-skill-server", "serve", "/path/to/my/skills"]
    }
  }
}

Manual install

# From PyPI (recommended)
uv pip install mcp-skill-server

# Or from source
git clone https://github.com/jcc-ne/mcp-skill-server
cd mcp-skill-server && uv sync

# Run the server
uvx mcp-skill-server serve /path/to/my/skills

Then add to your editor's MCP config:

{
  "mcpServers": {
    "skills": {
      "command": "uvx",
      "args": ["mcp-skill-server", "serve", "/path/to/my/skills"]
    }
  }
}

Creating a Skill

Option A: Use skill init (recommended)

# Create a new skill
uv run mcp-skill-server init ./my_skills/hello -n "hello" -d "A friendly greeting"

# Or use the standalone command
uv run mcp-skill-init ./my_skills/hello -n "hello" -d "A friendly greeting"

# Promote an existing prompt-only Claude skill to a runnable MCP skill
uv run mcp-skill-init ./existing_claude_skill

Option B: Manual setup

1. Create a folder with your script

my_skills/
└── hello/
    ├── SKILL.md
    └── hello.py

2. Add SKILL.md with frontmatter

---
name: hello
description: A friendly greeting skill
entry: uv run python hello.py
---

# Hello Skill

Greets the user by name.

3. Write your script with argparse

# hello.py
import argparse

parser = argparse.ArgumentParser(description="Greeting skill")
parser.add_argument("--name", default="World", help="Name to greet")
args = parser.parse_args()

print(f"Hello, {args.name}!")

That's it. The server auto-discovers commands and parameters from your --help output — no config needed.

Validating for Deployment

When a skill is ready to graduate to production:

uv run mcp-skill-server validate ./my_skills/hello
# or
uv run mcp-skill-validate ./my_skills/hello

Checks:

  • Required frontmatter fields (name, description, entry)
  • Entry command uses allowed runtime
  • Script file exists
  • Commands discoverable via --help

How It Works

MCP Tools

The server exposes four tools to your agent:

Tool             Description
list_skills      List all available skills
get_skill        Get details about a skill (commands, parameters)
run_skill        Execute a skill with parameters
refresh_skills   Reload skills after you make changes

Schema Discovery

The server automatically discovers your skill's interface by parsing --help output:

import argparse

parser = argparse.ArgumentParser(description="Analysis skill")

# Subcommands become separate commands
subparsers = parser.add_subparsers(dest='command')
analyze = subparsers.add_parser('analyze', help='Run analysis')

# Arguments become parameters with inferred types
analyze.add_argument('--year', type=int, required=True)  # int, required
analyze.add_argument('--file', type=str)                 # string, optional
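The core idea can be sketched in a few lines: run the script with --help and read the flags out of the output. This is illustrative only — the real server's parsing is more thorough than a regex over the help text:

```python
# Sketch of schema discovery: run a script's --help and extract its
# long flags. Illustrative only; the real server parses more carefully.
import re
import subprocess
import sys
import tempfile
import textwrap

SCRIPT = textwrap.dedent("""\
    import argparse
    p = argparse.ArgumentParser(description="Greeting skill")
    p.add_argument("--name", default="World", help="Name to greet")
    p.parse_args()
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(SCRIPT)
    path = f.name

help_text = subprocess.run(
    [sys.executable, path, "--help"],
    capture_output=True, text=True,
).stdout

# Every long flag except the built-in --help becomes a parameter
flags = sorted(set(re.findall(r"--(\w+)", help_text)) - {"help"})
print(flags)
```

Because --help is the source of truth, improving a flag's help string immediately improves the schema the agent sees.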

Output Files

Files saved to output/ are automatically detected. Alternatively, print OUTPUT_FILE:/path/to/file to stdout.
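Both mechanisms in one toy script (the filename report.txt is our own; only the output/ directory and the OUTPUT_FILE: prefix are conventions the server recognizes):

```python
# A skill script that makes its output detectable two ways:
# saving under output/ (auto-detected) and printing the
# OUTPUT_FILE: marker to stdout (explicit).
from pathlib import Path

out_dir = Path("output")
out_dir.mkdir(exist_ok=True)

report = out_dir / "report.txt"
report.write_text("analysis complete\n")

# Explicit announcement; either mechanism alone is enough
print(f"OUTPUT_FILE:{report}")
```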

Plugins

Output Handlers

Process files generated by skills (upload, copy, transform, etc.):

from mcp_skill_server.plugins import OutputHandler, LocalOutputHandler

# Default: tracks local file paths
handler = LocalOutputHandler()

# Optional GCS handler (requires `uv sync --extra gcs`)
from mcp_skill_server.plugins import GCSOutputHandler
handler = GCSOutputHandler(
    bucket_name="my-bucket",
    folder_prefix="skills/outputs/",
)

Response Formatters

Customize how execution results are formatted in MCP tool responses:

from mcp_skill_server.plugins import ResponseFormatter

class CustomFormatter(ResponseFormatter):
    def format_execution_result(self, result, skill, command):
        return f"Result: {result.stdout}"

# Use with create_server()
from mcp_skill_server import create_server
server = create_server(
    "/path/to/skills",
    response_formatter=CustomFormatter()
)

Development

git clone https://github.com/jcc-ne/mcp-skill-server
cd mcp-skill-server
uv sync --dev
uv run pytest
uv run mcp-skill-server serve examples/

Further Reading

  • Tool Design for LLMs — Why skills use a list/get/run pattern instead of exposing raw tools, and how it affects LLM accuracy

License

MIT
