Open Targets Platform MCP
⚠️ DISCLAIMER: This project is currently experimental and under active development. Features, APIs, and documentation may change without notice ⚠️
Model Context Protocol (MCP) server for the Open Targets Platform API
This package is the official Open Targets Platform MCP server implementation that enables AI assistants to interact with the Open Targets Platform GraphQL API, a comprehensive resource for target-disease associations and drug discovery data.
Quick Navigation
- Features
- Official MCP Server
- Local Deployment
- Advanced Deployment
- Available Commands
- Server Settings
- Available Tools
- Strategy
- Claude Desktop Setup
- Project Structure
- Testing
- Contributing
- License
Features
- 🔍 GraphQL Schema Access: Fetch and explore the complete Open Targets Platform GraphQL schema with detailed documentation
- 📊 Query Execution: Execute custom GraphQL queries against the Open Targets Platform API
- ⚡ Batch Query Processing: Execute the same query multiple times with different parameters efficiently
- 🔎 Entity Search: Search for entities across multiple types (targets, diseases, drugs, variants, studies)
- 🛠️ CLI Tools: Easy-to-use command-line interface for starting the server
- 🎯 jq Filtering (Optional): Server-side JSON processing using jq to reduce token consumption and improve performance. See jq Filtering for details.
Official MCP Server
The easiest way to use Open Targets Platform MCP is through the hosted service provided by Open Targets infrastructure at https://mcp.platform.opentargets.org/mcp
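Most MCP clients that support remote servers can point directly at this endpoint. As a rough sketch (the exact configuration format varies by client, and the "opentargets" entry name is only an illustrative label; see CLAUDE_DESKTOP.md for client-specific instructions):
{
  "mcpServers": {
    "opentargets": {
      "url": "https://mcp.platform.opentargets.org/mcp"
    }
  }
}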
Local Deployment
Via uvx (Quick Start)
The fastest way to get started is using uvx, which will automatically download and run the package directly from GitHub.
Examples:
# Start HTTP server bound to localhost:8000 (default)
uvx --from git+https://github.com/opentargets/open-targets-platform-mcp otp-mcp
# Get help
uvx --from git+https://github.com/opentargets/open-targets-platform-mcp otp-mcp --help
# With jq filtering enabled
uvx --from git+https://github.com/opentargets/open-targets-platform-mcp otp-mcp --jq
Docker Deployment
You can run the MCP server using the official Docker image:
# Pull the latest image
docker pull ghcr.io/opentargets/open-targets-platform-mcp
# Run as a daemon with HTTP transport
docker run -d \
-p 8000:8000 \
-e OTP_MCP_HTTP_HOST=0.0.0.0 \
ghcr.io/opentargets/open-targets-platform-mcp
# Run as a daemon with jq filtering enabled
docker run -d \
-p 8000:8000 \
-e OTP_MCP_HTTP_HOST=0.0.0.0 \
-e OTP_MCP_JQ_ENABLED=true \
ghcr.io/opentargets/open-targets-platform-mcp
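If you prefer Docker Compose, the same settings can be declared in a compose file. A minimal sketch using only values documented in the Server Settings table below (the service name otp-mcp is arbitrary):
# docker-compose.yml
services:
  otp-mcp:
    image: ghcr.io/opentargets/open-targets-platform-mcp
    ports:
      - "8000:8000"
    environment:
      OTP_MCP_HTTP_HOST: "0.0.0.0"
      OTP_MCP_JQ_ENABLED: "true"  # optional: enable jq filtering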
Server Settings
For available CLI arguments and environment variables, see the Server Settings table.
Advanced Deployment
Both advanced deployment options require cloning the repository and setting up the virtual environment first:
# Clone the repository
git clone https://github.com/opentargets/open-targets-platform-mcp.git
cd open-targets-platform-mcp
# Install dependencies
uv sync --python 3.10
FastMCP CLI
For advanced usage and to utilize all FastMCP options, you can use the FastMCP CLI directly with the server module:
# Run using FastMCP CLI
uv run fastmcp run ./src/open_targets_platform_mcp/server.py
Note: For all FastMCP CLI options, see the FastMCP documentation.
Note: Use environment variables (see Server Settings table) to configure the server when using FastMCP CLI.
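For example, a sketch that enables jq filtering and changes the port via environment variables before invoking the FastMCP CLI:
# Configure via environment variables, then run with the FastMCP CLI
OTP_MCP_JQ_ENABLED=true OTP_MCP_HTTP_PORT=8080 \
  uv run fastmcp run ./src/open_targets_platform_mcp/server.py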
Development Installation (Editable)
For development or to modify the codebase:
# Run the server
uv run otp-mcp
# Get help
uv run otp-mcp --help
Available Commands
The package provides two command variants:
- otp-mcp: Shorter alias (recommended)
- open-targets-platform-mcp: Full command name
Both commands are functionally identical.
Server Settings
Configure the server using environment variables (all prefixed with OTP_MCP_). The following table shows all available configuration options:
| Environment Variable | CLI Option | Description | Default |
|---|---|---|---|
| OTP_MCP_API_ENDPOINT | --api | Open Targets Platform API endpoint URL | https://api.platform.opentargets.org/api/v4/graphql |
| OTP_MCP_SERVER_NAME | --name | Server name displayed in MCP | "Model Context Protocol server for Open Targets Platform" |
| OTP_MCP_TRANSPORT | --transport | Transport type: stdio or http | http |
| OTP_MCP_HTTP_HOST | --host | HTTP server host (only used with http transport) | localhost |
| OTP_MCP_HTTP_PORT | --port | HTTP server port (only used with http transport) | 8000 |
| OTP_MCP_API_CALL_TIMEOUT | --timeout | Request timeout in seconds for API calls | 30 |
| OTP_MCP_JQ_ENABLED | --jq | Enable jq filtering support | false |
| OTP_MCP_RATE_LIMITING_ENABLED | --rate-limiting | Enable rate limiting | false |
Examples:
Using environment variables:
export OTP_MCP_TRANSPORT=stdio
export OTP_MCP_JQ_ENABLED=true
otp-mcp
Using CLI options:
otp-mcp --transport stdio --jq
Note: CLI options take precedence over environment variables when both are provided.
Available Tools
The MCP server provides the following tools:
- get_open_targets_graphql_schema: Fetch the complete GraphQL schema for the Open Targets Platform API, including detailed documentation for all types and fields
- query_open_targets_graphql: Execute GraphQL queries to retrieve data about targets, diseases, drugs, and their associations
- batch_query_open_targets_graphql: Execute the same GraphQL query multiple times with different variable sets for efficient batch processing
- search_entities: Search for entities across multiple types (targets, diseases, drugs, variants, studies) and retrieve their standardized IDs
Strategy
The MCP server implements a 3-step workflow that guides the LLM to efficiently retrieve data from the Open Targets Platform:
Step 1: Learn Query Structure from Schema
The LLM calls get_open_targets_graphql_schema to understand the GraphQL API structure. The schema includes detailed documentation for all types and fields, enabling the LLM to construct valid queries.
Key entity types include:
- Targets/Genes: Use Ensembl IDs (e.g., ENSG00000139618 for BRCA2)
- Diseases: Use EFO/MONDO IDs (e.g., MONDO_0007254 for breast cancer)
- Drugs: Use ChEMBL IDs (e.g., CHEMBL25 for aspirin)
- Variants: Use "chr_pos_ref_alt" format or rsIDs
Step 2: Resolve Identifiers (if needed)
When a user query contains common names (gene symbols, disease names, drug names), the LLM uses search_entities to convert them to standardized IDs required by the API.
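For example, to resolve the gene symbol BRCA2 and the disease name "breast cancer", the LLM might make a call along the lines of the sketch below (the parameter names are illustrative, not the tool's exact signature, which is provided in its MCP description):
# Hypothetical search_entities call (parameter names are illustrative)
search_entities(query_strings=["BRCA2", "breast cancer"], entity_names=["target", "disease"])
# Expected resolution: BRCA2 -> ENSG00000139618, breast cancer -> MONDO_0007254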
Step 3: Execute Query
The LLM constructs and executes GraphQL queries using:
- Standardized IDs from Step 2
- Query structure from the schema
- jq filters (optional, when enabled) to extract only requested fields, minimizing token consumption
Tool selection:
- query_open_targets_graphql for single queries
- batch_query_open_targets_graphql for multiple identical queries with different parameters (reduces latency and tokens)
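Putting the steps together, a request to query_open_targets_graphql for diseases associated with BRCA2 might look like the following sketch (field names should be confirmed against the schema fetched in Step 1):
# GraphQL query passed to query_open_targets_graphql
query associatedDiseases($ensemblId: String!) {
  target(ensemblId: $ensemblId) {
    id
    approvedSymbol
    associatedDiseases {
      rows {
        disease { id name }
        score
      }
    }
  }
}
# Variables
{ "ensemblId": "ENSG00000139618" }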
jq Filtering (Optional)
The MCP server supports optional server-side JSON processing using jq expressions. This feature is disabled by default but can be enabled if you want to reduce token consumption.
Enable jq Filtering When:
- You want to reduce token consumption by extracting only specific fields from API responses
- Working with large API responses where only a subset of data is needed
- The calling LLM is proficient at tool calling and can reliably construct jq filters
Disable jq Filtering When:
- Simplicity is preferred over optimization
- Working with straightforward queries that don't benefit from filtering
- The LLM should receive complete API responses
How jq Filtering Works
When jq filtering is enabled, the query tools expose a jq_filter parameter. The jq filter is applied server-side before the response is returned, extracting only the relevant data and discarding unnecessary fields.
Example: To extract only the gene symbol and ID from a target query:
jq_filter: ".data.target | {id, symbol: .approvedSymbol}"
This significantly reduces token consumption by returning only the requested fields instead of the full API response.
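For instance, applied to a hypothetical response for ENSG00000139618, the filter above reduces the payload as follows:
# Full API response (abridged, hypothetical)
{ "data": { "target": { "id": "ENSG00000139618", "approvedSymbol": "BRCA2", ... } } }
# After applying the jq filter
{ "id": "ENSG00000139618", "symbol": "BRCA2" }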
Claude Desktop Setup
For detailed instructions on configuring the Open Targets Platform MCP server with Claude Desktop, including both remote hosted service and local installation configurations, see CLAUDE_DESKTOP.md.
Project Structure
open-targets-platform-mcp/
├── src/open_targets_platform_mcp/
│ ├── __init__.py # Package initialization
│ ├── cli.py # Command-line interface
│ ├── create_server.py # MCP server creation and setup
│ ├── server.py # FastMCP server instance
│ ├── settings.py # Configuration management (pydantic-settings)
│ ├── types.py # Type definitions (TransportType, etc.)
│ ├── client/ # GraphQL client utilities
│ │ ├── __init__.py
│ │ └── graphql.py # GraphQL client implementation
│ ├── model/ # Data models
│ │ └── result.py # Query result models
│ ├── middleware/ # Middleware components
│ │ └── AdaptiveRateLimitingMiddleware.py # Rate limiting middleware
│ ├── tools/ # MCP tools (organized by feature)
│ │ ├── __init__.py # Tool exports
│ │ ├── schema/ # Schema fetching tool
│ │ │ └── schema.py
│ │ ├── query/ # Query execution tool
│ │ │ ├── query.py
│ │ │ ├── with_jq_description.txt
│ │ │ └── without_jq_description.txt
│ │ ├── batch_query/ # Batch query tool
│ │ │ ├── batch_query.py
│ │ │ ├── with_jq_description.txt
│ │ │ └── without_jq_description.txt
│ │ └── search_entities/ # Entity search tool
│ │ ├── search_entities.py
│ │ └── description.txt
│ └── static/ # Static assets
│ └── favicon.png
├── test/ # Test suite
│ ├── conftest.py
│ ├── test_client/
│ │ └── test_graphql.py
│ ├── test_tools/
│ │ ├── test_schema.py
│ │ ├── test_query.py
│ │ └── test_batch_query.py
│ ├── test_config.py
│ └── test_server.py
└── pyproject.toml # Project configuration and dependencies
Testing
Note: The test suite is currently AI-generated and will be reviewed and refined in the near future.
Contributing
Contributions are welcome! Please open an issue or submit a pull request on the GitHub repository.
License
This project is licensed under the terms of the license specified in LICENSE.