FetchSERP MCP Server

A Model Context Protocol (MCP) server that exposes the FetchSERP API for SEO, SERP analysis, web scraping, and keyword research.

Features

This MCP server provides access to all FetchSERP API endpoints:

SEO & Analysis

  • Domain Analysis: Get backlinks, domain info (DNS, WHOIS, SSL, tech stack)
  • Keyword Research: Search volume, suggestions, long-tail keyword generation
  • SEO Analysis: Comprehensive webpage SEO analysis
  • AI Analysis: AI-powered webpage analysis with custom prompts
  • Moz Integration: Domain authority and Moz metrics

SERP & Search

  • Search Results: Get SERP results from Google, Bing, Yahoo, DuckDuckGo
  • AI Overview: Google's AI overview with JavaScript rendering
  • Enhanced Results: SERP with HTML or text content
  • Ranking Check: Domain ranking for specific keywords
  • Indexation Check: Verify if pages are indexed

Web Scraping

  • Basic Scraping: Scrape webpages without JavaScript
  • JS Scraping: Execute custom JavaScript on pages
  • Proxy Scraping: Scrape with country-specific proxies
  • Domain Scraping: Scrape multiple pages from a domain

User Management

  • Account Info: Check API credits and user information

Installation

No installation required! This MCP server runs directly from GitHub using npx.

Get your FetchSERP API token: sign up at https://www.fetchserp.com. New users get 250 free credits to get started!

Usage

Transport Modes

This MCP server supports two transport modes:

npx mode (Option 1):

  • ✅ Zero installation required
  • ✅ Always gets latest version from GitHub
  • ✅ Perfect for individual users
  • ✅ Runs locally with Claude Desktop

HTTP mode (Option 2):

  • ✅ Remote deployment capability
  • ✅ Multiple clients can connect
  • ✅ Better for enterprise/team environments
  • ✅ Centralized server management
  • ✅ Single API key authentication (FetchSERP token)
  • ✅ Scalable architecture

Configuration

Option 1: Using npx (Local/Remote GitHub). Add this server to your MCP client configuration. For example, in Claude Desktop using the GitHub registry:

{
  "mcpServers": {
    "fetchserp": {
      "command": "npx",
      "args": [
        "github:fetchSERP/fetchserp-mcp-server-node"
      ],
      "env": {
        "FETCHSERP_API_TOKEN": "your_fetchserp_api_token_here"
      }
    }
  }
}

or using the npm registry:

{
  "mcpServers": {
    "fetchserp": {
      "command": "npx",
      "args": ["fetchserp-mcp-server"],
      "env": {
        "FETCHSERP_API_TOKEN": "your_fetchserp_api_token_here"
      }
    }
  }
}

Option 2: Claude API with MCP Server. For programmatic usage with Claude's API and your deployed MCP server:

const claudeRequest = {
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    {
      role: "user", 
      content: question
    }
  ],
  // MCP Server Configuration
  mcp_servers: [
    {
      type: "url",
      url: "https://mcp.fetchserp.com/sse",
      name: "fetchserp",
      authorization_token: FETCHSERP_API_TOKEN,
      tool_configuration: {
        enabled: true
      }
    }
  ]
};

const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'x-api-key': CLAUDE_API_KEY,
    'anthropic-version': '2023-06-01',
    'anthropic-beta': 'mcp-client-2025-04-04',
    'content-type': 'application/json'
  },
  body: JSON.stringify(claudeRequest)
});
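
A minimal sketch of reading the reply, assuming the fetch-based call above (the exact response shape is documented in Anthropic's Messages API reference):

// Parse the Messages API response and print the assistant's text blocks.
const data = await response.json();
for (const block of data.content ?? []) {
  if (block.type === "text") {
    console.log(block.text);
  }
}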

Option 3: OpenAI API with MCP Server. For programmatic usage with OpenAI's API and your deployed MCP server:

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const res = await openai.responses.create({
  model: "gpt-4.1",
  tools: [
    {
      type: "mcp",
      server_label: "fetchserp",
      server_url: "https://mcp.fetchserp.com/sse",
      headers: {
        Authorization: `Bearer ${FETCHSERP_API_TOKEN}`
      }
    }
  ],
  input: question
});

console.log(res.output_text);

Option 4: Docker. Use the pre-built Docker image from the GitHub Container Registry for containerized deployment:

{
  "mcpServers": {
    "fetchserp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "FETCHSERP_API_TOKEN",
        "ghcr.io/fetchserp/fetchserp-mcp-server-node:latest"
      ],
      "env": {
        "FETCHSERP_API_TOKEN": "your_fetchserp_api_token_here"
      }
    }
  }
}

Docker Features:

  • ✅ Containerized deployment
  • ✅ Cross-platform compatibility (ARM64 & AMD64)
  • ✅ Isolated environment
  • ✅ Easy scaling and deployment
  • ✅ Automated builds from GitHub

Manual Docker Usage:

# Pull the latest image
docker pull ghcr.io/fetchserp/fetchserp-mcp-server-node:latest

# Run with environment variable
docker run -i --rm \
  -e FETCHSERP_API_TOKEN="your_token_here" \
  ghcr.io/fetchserp/fetchserp-mcp-server-node:latest

# Or run in HTTP mode on port 8000
docker run -p 8000:8000 \
  -e FETCHSERP_API_TOKEN="your_token_here" \
  -e MCP_HTTP_MODE=true \
  ghcr.io/fetchserp/fetchserp-mcp-server-node:latest

Available Tools

Domain & SEO Analysis

get_backlinks

Get backlinks for a domain

  • domain (required): Target domain
  • search_engine: google, bing, yahoo, duckduckgo (default: google)
  • country: Country code (default: us)
  • pages_number: Pages to search 1-30 (default: 15)
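
For illustration, the arguments an MCP client would pass to get_backlinks might look like the following; example.com is a placeholder domain, and omitted fields fall back to the defaults above:

// Hypothetical get_backlinks arguments; example.com is a placeholder.
const getBacklinksArgs = {
  domain: "example.com",   // required
  search_engine: "google", // google, bing, yahoo, duckduckgo
  country: "us",
  pages_number: 5          // 1-30
};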

get_domain_info

Get comprehensive domain information

  • domain (required): Target domain

get_domain_emails

Extract emails from a domain

  • domain (required): Target domain
  • search_engine: Search engine (default: google)
  • country: Country code (default: us)
  • pages_number: Pages to search 1-30 (default: 1)

get_webpage_seo_analysis

Comprehensive SEO analysis of a webpage

  • url (required): URL to analyze

get_webpage_ai_analysis

AI-powered webpage analysis

  • url (required): URL to analyze
  • prompt (required): Analysis prompt

get_moz_analysis

Get Moz domain authority and metrics

  • domain (required): Target domain

Keyword Research

get_keywords_search_volume

Get search volume for keywords

  • keywords (required): Array of keywords
  • country: Country code

get_keywords_suggestions

Get keyword suggestions

  • url: URL to analyze (optional if keywords provided)
  • keywords: Array of seed keywords (optional if url provided)
  • country: Country code

get_long_tail_keywords

Generate long-tail keywords

  • keyword (required): Seed keyword
  • search_intent: informational, commercial, transactional, navigational (default: informational)
  • count: Number to generate 1-500 (default: 10)
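
As a sketch, a commercial-intent long-tail request could be shaped like this (the seed keyword is a placeholder):

// Hypothetical get_long_tail_keywords arguments.
const longTailArgs = {
  keyword: "standing desk",    // seed keyword (placeholder)
  search_intent: "commercial", // informational, commercial, transactional, navigational
  count: 25                    // 1-500
};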

SERP & Search

get_serp_results

Get search engine results

  • query (required): Search query
  • search_engine: google, bing, yahoo, duckduckgo (default: google)
  • country: Country code (default: us)
  • pages_number: Pages to search 1-30 (default: 1)
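
For example, a basic SERP request might be shaped as follows (the query is a placeholder); get_serp_html and get_serp_text below take the same parameters:

// Hypothetical get_serp_results arguments; the query is a placeholder.
const serpArgs = {
  query: "best crm software", // required
  search_engine: "google",
  country: "us",
  pages_number: 1             // 1-30
};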

get_serp_html

Get SERP results with HTML content

  • Same parameters as get_serp_results

get_serp_text

Get SERP results with text content

  • Same parameters as get_serp_results

get_serp_js_start

Start AI Overview SERP job (returns UUID)

  • query (required): Search query
  • country: Country code (default: us)
  • pages_number: Pages to search 1-10 (default: 1)

get_serp_js_result

Get AI Overview SERP results

  • uuid (required): UUID from start job
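
The AI Overview flow is two-step: get_serp_js_start returns a UUID, which is then passed to get_serp_js_result once the job has finished. A rough sketch, where callTool stands in for whatever tool-invocation mechanism your MCP client exposes and the query is a placeholder:

// Step 1: start the AI Overview SERP job.
const startResult = await callTool("get_serp_js_start", {
  query: "what is model context protocol",
  country: "us",
  pages_number: 1 // 1-10 for this tool
});

// Step 2: fetch the result with the UUID returned by the start call.
// The exact field holding the UUID depends on the response payload.
const finalResult = await callTool("get_serp_js_result", {
  uuid: startResult.uuid
});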

check_page_indexation

Check if domain is indexed for keyword

  • domain (required): Target domain
  • keyword (required): Search keyword

get_domain_ranking

Get domain ranking for keyword

  • keyword (required): Search keyword
  • domain (required): Target domain
  • search_engine: Search engine (default: google)
  • country: Country code (default: us)
  • pages_number: Pages to search 1-30 (default: 10)

Web Scraping

scrape_webpage

Scrape webpage without JavaScript

  • url (required): URL to scrape

scrape_domain

Scrape multiple pages from domain

  • domain (required): Target domain
  • max_pages: Maximum pages to scrape, up to 200 (default: 10)

scrape_webpage_js

Scrape webpage with custom JavaScript

  • url (required): URL to scrape
  • js_script (required): JavaScript code to execute
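
Because js_script is passed as a string, a short example helps show the shape; this sketch pulls the page title and heading texts (the URL and script are placeholders, and how the script's value is returned depends on the API):

// Hypothetical scrape_webpage_js arguments; url and js_script are placeholders.
const scrapeJsArgs = {
  url: "https://example.com",
  js_script: `JSON.stringify({
    title: document.title,
    headings: Array.from(document.querySelectorAll("h1, h2")).map(h => h.textContent.trim())
  })`
};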

scrape_webpage_js_proxy

Scrape webpage with JavaScript and proxy

  • url (required): URL to scrape
  • country (required): Proxy country
  • js_script (required): JavaScript code to execute

User Management

get_user_info

Get user information and API credits

  • No parameters required

API Token

You need a FetchSERP API token to use this server.

Getting your API token:

  1. Sign up at https://www.fetchserp.com
  2. New users automatically receive 250 free credits to get started
  3. Your API token will be available in your dashboard

Set the token as an environment variable:

export FETCHSERP_API_TOKEN="your_token_here"

Error Handling

The server includes comprehensive error handling:

  • Missing API token validation
  • API response error handling
  • Input validation
  • Proper MCP error responses

Docker deploy

# Build the image for linux/amd64 and push it to the registry
docker build --platform=linux/amd64 -t olivier86/fetchserp-mcp-server-node:latest --push .

# Run the container, exposing port 8000
docker run -p 8000:8000 olivier86/fetchserp-mcp-server-node:latest

To start tunneling with ngrok:

nohup ngrok http 8000 --domain guinea-dominant-jolly.ngrok-free.app > /var/log/ngrok.log 2>&1 &
