Sigma MCP Server

Enables users to interact with the Sigma Computing API to analyze documents and manage analytics data via a serverless AWS architecture. It features document search and analytics capabilities with DynamoDB-backed caching for optimized performance.

Sigma MCP Server Deployment Guide

Overview

This guide walks you through deploying the Sigma MCP Server to AWS Lambda with API Gateway.

Sample Questions to test the MCP Server

"What's the status of the Sigma MCP server?" "Check the Sigma MCP server connection" "Test the Sigma MCP server connectivity" "Is the Sigma MCP server working?" "Get the current status of the Sigma MCP server" "Verify the Sigma MCP server is operational"

Architecture

Claude/MCP Client → API Gateway → Lambda → DynamoDB (cache)
                                    ↓
                              Secrets Manager (credentials)
                                    ↓
                               Sigma API

Prerequisites

  1. AWS CLI configured with appropriate permissions
  2. Terraform installed (v1.0+)
  3. Node.js 18+ and npm
  4. Sigma API credentials (client ID and secret)

Step 1: Prepare the Lambda Package

# Install dependencies
npm install

# Build the TypeScript code
npm run build

# Create deployment package
npm run package

This creates sigma-mcp-server.zip with your compiled code and dependencies.

Step 2: Configure Terraform Variables

Create a terraform.tfvars file:

aws_region = "us-east-1"
environment = "dev"
sigma_base_url = "https://api.sigmacomputing.com"

Step 3: Deploy Infrastructure

# Initialize Terraform
terraform init

# Plan the deployment
terraform plan

# Apply the changes
terraform apply

Important: After deployment, update the Secrets Manager secret with your actual Sigma credentials:

aws secretsmanager update-secret \
  --secret-id "sigma-api-credentials-dev" \
  --secret-string '{"clientId":"YOUR_ACTUAL_CLIENT_ID","clientSecret":"YOUR_ACTUAL_SECRET"}'

Step 4: Initial Cache Population

The document cache needs to be populated before the MCP server can search documents. You can do this in one of two ways:

Option A: One-time Script

Create a simple script to populate the cache:

// populate-cache.ts
import { SigmaApiClient } from './src/sigma-client.js';
import { DocumentCache } from './src/document-cache.js';

async function populateCache() {
  const client = new SigmaApiClient({
    baseUrl: process.env.SIGMA_BASE_URL!,
    clientId: process.env.SIGMA_CLIENT_ID!,
    clientSecret: process.env.SIGMA_CLIENT_SECRET!,
  });
  
  const cache = new DocumentCache(process.env.CACHE_TABLE_NAME!);
  
  await client.initialize();
  await cache.initialize();
  await cache.refreshCache(client);
  
  console.log('Cache populated successfully');
}

populateCache().catch(console.error);

Option B: Lambda Function Invocation

You can invoke the Lambda directly to trigger a cache refresh (you would need to add a handler for this), as sketched below.
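
For example, a minimal sketch using the AWS SDK for JavaScript v3. The action: "refreshCache" payload is hypothetical; you would need to add matching handling in the Lambda yourself.

// invoke-refresh.ts - sketch of a direct invocation; assumes the Lambda
// recognizes a hypothetical { action: "refreshCache" } payload that you add yourself.
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

async function triggerRefresh() {
  const lambda = new LambdaClient({ region: process.env.AWS_REGION ?? "us-east-1" });

  const response = await lambda.send(
    new InvokeCommand({
      FunctionName: `sigma-mcp-server-${process.env.NODE_ENV ?? "dev"}`,
      Payload: new TextEncoder().encode(JSON.stringify({ action: "refreshCache" })),
    })
  );

  // The response payload comes back as bytes; decode it for logging
  const body = response.Payload ? new TextDecoder().decode(response.Payload) : "";
  console.log(`Status: ${response.StatusCode}, body: ${body}`);
}

triggerRefresh().catch(console.error);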

Step 5: Configure Claude Desktop

Add your MCP server to Claude Desktop's configuration:

{
  "mcpServers": {
    "sigma-analytics": {
      "command": "node",
      "args": [
        "path/to/mcp-client-script.js"
      ],
      "env": {
        "API_GATEWAY_URL": "https://your-api-id.execute-api.region.amazonaws.com/dev"
      }
    }
  }
}

You'll need to create a client script that forwards MCP messages from Claude Desktop's stdio transport to your API Gateway endpoint over HTTP, as sketched below.
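
One possible shape for that bridge (a sketch, not the project's actual client; it assumes each newline-delimited JSON-RPC message can be POSTed as-is to the stage URL):

// mcp-client-script.js - sketch of a stdio-to-HTTP bridge for Claude Desktop.
// Assumes MCP JSON-RPC messages are accepted as the POST body at the stage root;
// adjust the path if your Lambda routes MCP traffic elsewhere.
import * as readline from "node:readline";

const endpoint = process.env.API_GATEWAY_URL;
if (!endpoint) {
  console.error("API_GATEWAY_URL is not set");
  process.exit(1);
}

const rl = readline.createInterface({ input: process.stdin });

rl.on("line", async (line) => {
  if (!line.trim()) return;
  try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: line,
    });
    // Relay the MCP response back to Claude Desktop over stdout
    process.stdout.write((await response.text()) + "\n");
  } catch (err) {
    console.error("Bridge request failed:", err);
  }
});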

API Gateway Setup Details

The Terraform creates:

  1. REST API - Main API Gateway resource
  2. Proxy Resource - {proxy+} to catch all paths
  3. ANY Method - Accepts all HTTP methods
  4. Lambda Integration - Routes requests to your Lambda function
  5. Deployment - Creates a stage (dev/prod) with invoke URL

API Gateway Flow:

  1. Client sends HTTP POST with MCP request in body
  2. API Gateway forwards to Lambda via AWS_PROXY integration
  3. Lambda processes MCP request and returns response
  4. API Gateway returns response to client
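
For orientation, the Lambda side of that flow is shaped roughly like the following sketch (illustrative only; the repository's real handler dispatches into the MCP server instead of returning a placeholder):

// Sketch of the AWS_PROXY handler shape for MCP-over-HTTP requests.
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // API Gateway delivers the raw HTTP body; for MCP this is a JSON-RPC message
  const request = event.body ? JSON.parse(event.body) : null;

  // ...dispatch `request` to the MCP server logic and await its JSON-RPC reply...
  const reply = { jsonrpc: "2.0", id: request?.id ?? null, result: {} };

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reply),
  };
};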

Environment Variables

The Lambda function uses these environment variables (set by Terraform):

  • SIGMA_BASE_URL - Sigma API endpoint
  • CACHE_TABLE_NAME - DynamoDB table name
  • NODE_ENV - Environment (dev/prod)

Credentials are loaded from AWS Secrets Manager automatically.
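
For reference, loading those credentials with the AWS SDK for JavaScript v3 looks roughly like this sketch (the secret name follows the sigma-api-credentials-{environment} pattern used in Step 3; the actual loader in this repo may differ):

// Sketch: fetch Sigma credentials from Secrets Manager, e.g. at cold start.
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

interface SigmaCredentials {
  clientId: string;
  clientSecret: string;
}

export async function loadSigmaCredentials(environment: string): Promise<SigmaCredentials> {
  const client = new SecretsManagerClient({});
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: `sigma-api-credentials-${environment}` })
  );
  // The secret string is the JSON written with `aws secretsmanager update-secret`
  return JSON.parse(result.SecretString ?? "{}") as SigmaCredentials;
}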

Monitoring and Logs

  • CloudWatch Logs: /aws/lambda/sigma-mcp-server-{environment}
  • API Gateway Logs: Can be enabled in the API Gateway console
  • DynamoDB Metrics: Available in CloudWatch

Outputs

After deployment, Terraform provides:

# Get the API Gateway URL
terraform output api_gateway_url

# Get other resource names
terraform output lambda_function_name
terraform output dynamodb_table_name
terraform output secrets_manager_secret_name

Local Testing

Before deploying to AWS, you can test the MCP server locally:

1. Set up Environment Variables

Create a .env file in the project root (this file is already in .gitignore):

# Sigma API Configuration
SIGMA_CLIENT_ID=your_actual_sigma_client_id
SIGMA_CLIENT_SECRET=your_actual_sigma_client_secret
SIGMA_BASE_URL=https://api.sigmacomputing.com

# Cache Configuration
# Set to 'true' to skip caching entirely (for local testing)
# Set to 'false' or omit to use DynamoDB cache (for production)
SKIP_CACHE=true

# AWS Configuration (for local testing, these can be empty or use localstack)
AWS_REGION=us-east-1
CACHE_TABLE_NAME=sigma-documents-cache

# Environment
NODE_ENV=development

2. Install Dependencies

npm install

3. Test the Heartbeat

Run the local test script to verify connectivity:

npm run test:local

This will:

  • Check your environment variables
  • Build the TypeScript code
  • Start the MCP server
  • Send a heartbeat request
  • Display the response with server status

4. Manual Testing

You can also run the server manually and interact with it:

# Build the project
npm run build

# Start the server
npm start

The server will run on stdio and wait for MCP requests.

Note: When using SKIP_CACHE=true, the server will fetch data directly from the Sigma API for each request, which is useful for testing but may be slower than using cached data.

Testing

Test the deployment by sending an MCP request to the deployed API Gateway endpoint.
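
A minimal smoke test in TypeScript (a sketch, assuming the stage URL accepts a JSON-RPC tools/list request as the POST body):

// smoke-test.ts - posts a JSON-RPC tools/list request to the deployed endpoint.
const endpoint = process.env.API_GATEWAY_URL!; // e.g. the value of `terraform output api_gateway_url`

async function smokeTest() {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list", params: {} }),
  });

  console.log("HTTP status:", response.status);
  console.log("Body:", await response.text());
}

smokeTest().catch(console.error);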

Debugging

The MCP server includes comprehensive debugging capabilities to help troubleshoot issues with REST API calls to Sigma.

Enabling Debug Mode

To enable debug mode, set the DEBUG_MODE environment variable:

export DEBUG_MODE=true

Or add it to your .env file:

DEBUG_MODE=true

Debug Output

When debug mode is enabled, you'll see detailed logging for:

  • MCP Server Operations: Tool calls, resource requests, error handling
  • Sigma API Calls: HTTP requests, responses, authentication
  • Document Analytics: Cache operations, data fetching, parsing
  • Token Management: Token refresh, expiry, authentication status

Testing with Debug Mode

Use the test script to run the analyze_documents tool with debugging:

# Build the project first
npm run build

# Run the test with debugging
node test-analyze-documents.js

Debug Log Format

Debug messages follow this format:

  • 🔍 [DEBUG] - Information and progress
  • ✅ [DEBUG] - Success messages
  • ❌ [DEBUG] - Error messages
  • ⚠️ [DEBUG] - Warnings

Common Debug Scenarios

  1. Authentication Issues: Check token refresh logs
  2. API Call Failures: Look for HTTP status codes and error responses
  3. Data Parsing Issues: Check JSONL parsing logs
  4. Cache Problems: Verify cache hit/miss patterns

Troubleshooting Steps

  1. Check Environment Variables: Ensure all required variables are set
  2. Verify Sigma API Credentials: Test with the heartbeat tool first
  3. Review Network Connectivity: Check if Sigma API endpoints are reachable
  4. Examine Cache Status: Verify document cache is working properly

Security Considerations

  1. API Gateway has no authentication in this prototype - consider adding API keys or IAM auth for production
  2. Secrets Manager stores credentials securely with automatic rotation capability
  3. IAM roles follow least-privilege principle
  4. VPC - Consider deploying Lambda in VPC for additional network security

Scaling and Performance

  • Lambda: Auto-scales, cold starts ~1-2 seconds
  • DynamoDB: On-demand billing scales automatically
  • API Gateway: Handles up to 10,000 requests per second by default
  • Cache Strategy: In-memory cache in Lambda for fast lookups, DynamoDB for persistence
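
As an illustration of that cache strategy, the lookup path might look like the following sketch (the key attribute name documentId is an assumption, not taken from the actual DocumentCache implementation):

// Sketch of a two-tier lookup: a module-scope Map survives warm invocations,
// DynamoDB provides persistence across cold starts.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const memoryCache = new Map<string, unknown>();
const dynamo = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const tableName = process.env.CACHE_TABLE_NAME ?? "sigma-documents-cache";

export async function getDocument(documentId: string): Promise<unknown> {
  // Fast path: in-memory cache populated during this container's lifetime
  if (memoryCache.has(documentId)) return memoryCache.get(documentId);

  // Slow path: fall back to the persistent DynamoDB cache
  const result = await dynamo.send(
    new GetCommand({ TableName: tableName, Key: { documentId } })
  );
  if (result.Item) memoryCache.set(documentId, result.Item);
  return result.Item;
}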

Troubleshooting

Common issues:

  1. "Secret not found" - Update Secrets Manager with real credentials
  2. "Table not found" - Ensure DynamoDB table exists and Lambda has permissions
  3. Cold starts - First request after idle time takes longer
  4. CORS errors - the API Gateway configuration includes CORS headers; verify they are returned if browser-based clients report CORS failures

Check CloudWatch Logs for detailed error information.
