Semantic Context MCP

Enables AI assistants to save, load, and search conversation context with AI-powered summarization and auto-tagging. Demonstrates semantic intent patterns and hexagonal architecture for maintainable AI-assisted development.

License: MIT · CI · Tests · TypeScript · Node.js

Semantic Intent Reference Implementation · Hexagonal Architecture · PRs Welcome · Code of Conduct

Reference implementation of Semantic Intent as Single Source of Truth patterns

A Model Context Protocol (MCP) server demonstrating semantic anchoring, intent preservation, and observable property patterns for AI-assisted development.

🎯 What Makes This Different

This isn't just another MCP server—it's a reference implementation of proven semantic intent patterns:

  • Semantic Anchoring: Decisions based on meaning, not technical characteristics
  • Intent Preservation: Semantic contracts maintained through all transformations
  • Observable Properties: Behavior anchored to directly observable semantic markers
  • Domain Boundaries: Clear semantic ownership across layers

Built on research from Semantic Intent as Single Source of Truth, this implementation demonstrates how to build maintainable, AI-friendly codebases that preserve intent.
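
As a small, hypothetical illustration of the difference (not code from this repository), semantic anchoring bases behavior on an observable marker of meaning rather than on an incidental technical characteristic:

// Hypothetical TypeScript illustration of semantic vs. structural anchoring.

// Structural anchoring: decision keyed to a technical characteristic (length).
function isImportantStructural(note: string): boolean {
  return note.length > 200; // brittle: size is not meaning
}

// Semantic anchoring: decision keyed to an observable semantic property.
interface Note {
  content: string;
  recordsDecision: boolean; // directly observable marker of intent
}

function isImportantSemantic(note: Note): boolean {
  return note.recordsDecision; // behavior follows meaning and survives refactoring
}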


🚀 Quick Start

Prerequisites

  • Node.js 20.x or higher
  • Cloudflare account (free tier works)
  • Wrangler CLI: npm install -g wrangler

Installation

  1. Clone the repository

    git clone https://github.com/semanticintent/semantic-context-mcp.git
    cd semantic-context-mcp
    
  2. Install dependencies

    npm install
    
  3. Configure Wrangler

    Copy the example configuration:

    cp wrangler.jsonc.example wrangler.jsonc
    

    Create a D1 database:

    wrangler d1 create mcp-context
    

    Update wrangler.jsonc with your database ID:

    {
      "d1_databases": [{
        "database_id": "your-database-id-from-above-command"
      }]
    }
    
  4. Run database migrations

    # Local development
    wrangler d1 execute mcp-context --local --file=./migrations/0001_initial_schema.sql
    
    # Production
    wrangler d1 execute mcp-context --file=./migrations/0001_initial_schema.sql
    
  5. Start development server

    npm run dev
    

Deploy to Production

npm run deploy

Your MCP server will be available at: semantic-context-mcp.<your-account>.workers.dev

📚 Learning from This Implementation

This codebase demonstrates semantic intent patterns throughout:

Architecture Files:

Documentation & Patterns:

Each file includes comprehensive comments explaining WHY decisions preserve semantic intent, not just WHAT the code does.

Connect to Cloudflare AI Playground

You can connect to your MCP server from the Cloudflare AI Playground, which is a remote MCP client:

  1. Go to https://playground.ai.cloudflare.com/
  2. Enter your deployed MCP server URL (semantic-context-mcp.<your-account>.workers.dev/sse)
  3. You can now use your MCP tools directly from the playground!

Connect Claude Desktop to your MCP server

You can also connect to your remote MCP server from local MCP clients by using the mcp-remote proxy.

To connect from Claude Desktop, follow Anthropic's Quickstart, then go to Settings > Developer > Edit Config within Claude Desktop.

Update with this configuration:

{
  "mcpServers": {
    "semantic-context": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:8787/sse"  // or semantic-context-mcp.your-account.workers.dev/sse
      ]
    }
  }
}

Restart Claude and you should see the tools become available.

🏗️ Architecture

This project demonstrates Domain-Driven Hexagonal Architecture with clean separation of concerns:

┌─────────────────────────────────────────────────────────┐
│                   Presentation Layer                     │
│              (MCPRouter - HTTP routing)                  │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                  Application Layer                       │
│     (ToolExecutionHandler, MCPProtocolHandler)          │
│              MCP Protocol & Orchestration                │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                    Domain Layer                          │
│         (ContextService, ContextSnapshot)                │
│                 Business Logic                           │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                Infrastructure Layer                      │
│    (D1ContextRepository, CloudflareAIProvider)          │
│           Technical Adapters (Ports & Adapters)         │
└─────────────────────────────────────────────────────────┘

Layer Responsibilities:

Domain Layer (src/domain/):

  • Pure business logic independent of infrastructure
  • ContextSnapshot: Entity with validation rules
  • ContextService: Core business operations

Application Layer (src/application/):

  • Orchestrates domain operations
  • ToolExecutionHandler: Translates MCP tools to domain operations
  • MCPProtocolHandler: Manages JSON-RPC protocol

Infrastructure Layer (src/infrastructure/):

  • Technical adapters implementing ports (interfaces)
  • D1ContextRepository: Cloudflare D1 persistence
  • CloudflareAIProvider: Workers AI integration
  • CORSMiddleware: Cross-cutting concerns

Presentation Layer (src/presentation/):

  • HTTP routing and request handling
  • MCPRouter: Routes requests to appropriate handlers

Composition Root (src/index.ts):

  • Dependency injection
  • Wires all layers together
  • 74 lines (down from 483, an ~85% reduction)

Benefits:

  • Testability: Each layer independently testable
  • Maintainability: Clear responsibilities per layer
  • Flexibility: Swap infrastructure (D1 → Postgres) without touching domain
  • Semantic Intent: Comprehensive documentation of WHY
  • Type Safety: Strong TypeScript contracts throughout
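
A minimal sketch of how these layers can fit together, assuming a simplified port and adapter (the interface, constructor, method, table, and column names here are illustrative and may differ from the actual contracts in src/):

// Illustrative hexagonal wiring; names and signatures are assumptions.

// Domain port: the domain layer depends only on this interface.
interface ContextRepository {
  save(snapshot: { project: string; content: string }): Promise<void>;
  findByProject(project: string): Promise<Array<{ project: string; content: string }>>;
}

// Infrastructure adapter: implements the port on top of Cloudflare D1.
class ExampleD1ContextRepository implements ContextRepository {
  constructor(private readonly db: D1Database) {}

  async save(snapshot: { project: string; content: string }): Promise<void> {
    await this.db
      .prepare('INSERT INTO context_snapshots (project, content) VALUES (?, ?)')
      .bind(snapshot.project, snapshot.content)
      .run();
  }

  async findByProject(project: string): Promise<Array<{ project: string; content: string }>> {
    const { results } = await this.db
      .prepare('SELECT project, content FROM context_snapshots WHERE project = ?')
      .bind(project)
      .all<{ project: string; content: string }>();
    return results;
  }
}

// Composition root: the only place that knows about concrete adapters.
interface Env {
  DB: D1Database; // binding name declared in wrangler.jsonc
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const repository: ContextRepository = new ExampleD1ContextRepository(env.DB);
    // ...construct the domain service, handlers, and router here, then dispatch the request.
    return new Response('ok');
  },
};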

Features

  • save_context: Save conversation context with AI-powered summarization and auto-tagging
  • load_context: Retrieve relevant context for a project
  • search_context: Search contexts using keyword matching
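
For illustration, a tools/call request for save_context over the MCP JSON-RPC protocol could look like the following (the argument names are assumptions; consult the schema the server advertises via tools/list):

// Hypothetical MCP tools/call payload; argument names are illustrative.
const saveContextRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'save_context',
    arguments: {
      project: 'my-app',
      content: 'Chose D1 for persistence; revisit indexes after load testing.',
    },
  },
};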

🧪 Testing

This project includes a comprehensive suite of 70 unit and integration tests covering all architectural layers.

Run Tests

# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with UI
npm run test:ui

# Run tests with coverage report
npm run test:coverage

Test Coverage

  • Domain Layer: 15 tests (ContextSnapshot validation, ContextService orchestration)
  • Application Layer: 10 tests (ToolExecutionHandler, MCP tool dispatch)
  • Infrastructure Layer: 20 tests (D1Repository, CloudflareAIProvider with fallbacks)
  • Presentation Layer: 12 tests (MCPRouter, CORS, error handling)
  • Integration: 13 tests (End-to-end service flows)

Test Structure

Tests are co-located with source files using the .test.ts suffix:

src/
├── domain/
│   ├── models/
│   │   ├── ContextSnapshot.ts
│   │   └── ContextSnapshot.test.ts
│   └── services/
│       ├── ContextService.ts
│       └── ContextService.test.ts
├── application/
│   └── handlers/
│       ├── ToolExecutionHandler.ts
│       └── ToolExecutionHandler.test.ts
└── ...

All tests use Vitest with mocking for external dependencies (D1, AI services).
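
As a sketch of this testing style, a Vitest test can inject a mocked repository into a service and assert on the interaction (the service and method names below are illustrative, not the exact ContextService API):

// Sketch in the style of the co-located *.test.ts files; real APIs may differ.
import { describe, expect, it, vi } from 'vitest';

describe('context saving (sketch)', () => {
  it('persists a snapshot through the injected repository', async () => {
    const repository = { save: vi.fn().mockResolvedValue(undefined) };

    // Hypothetical service that depends only on the repository port.
    const service = {
      async saveContext(project: string, content: string) {
        await repository.save({ project, content });
      },
    };

    await service.saveContext('my-app', 'Adopted hexagonal architecture.');

    expect(repository.save).toHaveBeenCalledWith({
      project: 'my-app',
      content: 'Adopted hexagonal architecture.',
    });
  });
});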

Continuous Integration

This project uses GitHub Actions for automated testing and quality checks.

Automated Checks on Every Push/PR:

  • ✅ TypeScript compilation (npm run type-check)
  • ✅ Unit tests (npm test)
  • ✅ Test coverage reports
  • ✅ Code formatting (Biome)
  • ✅ Linting (Biome)

Status Badges:

  • CI status displayed at top of README
  • Automatically updates on each commit
  • Shows passing/failing state

Workflow Configuration: .github/workflows/ci.yml

The CI pipeline runs on Node.js 20.x and ensures code quality before merging.

Database Setup

This project uses Cloudflare D1 for persistent context storage.

Initial Setup

  1. Create D1 Database:

    wrangler d1 create mcp-context
    
  2. Update wrangler.jsonc with your database ID:

    {
      "d1_databases": [
        {
          "binding": "DB",
          "database_name": "mcp-context",
          "database_id": "your-database-id-here"
        }
      ]
    }
    
  3. Run Initial Migration:

    wrangler d1 execute mcp-context --file=./migrations/0001_initial_schema.sql
    

Local Development

For local testing, initialize the local D1 database:

wrangler d1 execute mcp-context --local --file=./migrations/0001_initial_schema.sql

Verify Schema

Check that tables were created successfully:

# Production
wrangler d1 execute mcp-context --command="SELECT name FROM sqlite_master WHERE type='table'"

# Local
wrangler d1 execute mcp-context --local --command="SELECT name FROM sqlite_master WHERE type='table'"

Database Migrations

All database schema changes are managed through versioned migration files in migrations/:

  • 0001_initial_schema.sql - Initial context snapshots table with semantic indexes

See migrations/README.md for detailed migration management guide.

License

This project is licensed under the MIT License - see the LICENSE file for details.

🔬 Research Foundation

This implementation is based on the research paper "Semantic Intent as Single Source of Truth: Immutable Governance for AI-Assisted Development".

Core Principles Applied:

  1. Semantic Over Structural - Use meaning, not technical characteristics
  2. Intent Preservation - Maintain semantic contracts through transformations
  3. Observable Anchoring - Base behavior on directly observable properties
  4. Immutable Governance - Protect semantic integrity at runtime

Related Resources:

🤝 Contributing

We welcome contributions! This is a reference implementation, so contributions should maintain semantic intent principles.

How to Contribute

  1. Read the guidelines: CONTRIBUTING.md
  2. Check existing issues: Avoid duplicates
  3. Follow the architecture: Maintain layer boundaries
  4. Add tests: All changes need test coverage
  5. Document intent: Explain WHY, not just WHAT

Contribution Standards

  • ✅ Follow semantic intent patterns
  • ✅ Maintain hexagonal architecture
  • ✅ Add comprehensive tests
  • ✅ Include semantic documentation
  • ✅ Pass all CI checks

Quick Links:

Community

🔒 Security

Security is a top priority. Please review our Security Policy for:

  • Secrets management best practices
  • What to commit / what to exclude
  • Reporting security vulnerabilities
  • Security checklist for deployment

Found a vulnerability? Email: security@semanticintent.dev
