OpenAlex Research MCP Server
Provides access to OpenAlex's catalog of 240+ million scholarly works through 18 specialized tools for conducting literature reviews, analyzing citations, tracking research trends, and mapping the scholarly landscape including authors, institutions, and collaboration networks.
OpenAlex MCP Server
A Model Context Protocol (MCP) server that provides access to OpenAlex, a comprehensive open catalog of scholarly papers, authors, institutions, and more. This server is specifically designed to empower AI assistants to conduct literature reviews, analyze research trends, and map the scholarly landscape.
Features
Access 240+ million scholarly works through 18 specialized tools:
Literature Search & Discovery
- search_works: Advanced search with Boolean operators, filters, and sorting
- get_work: Get detailed metadata for a specific work
- get_related_works: Find similar papers based on citations and topics
- search_by_topic: Explore literature in specific research domains
- autocomplete_search: Fast typeahead search for all entity types
Citation Analysis
- get_work_citations: Forward citation analysis (who cites this work)
- get_work_references: Backward citation analysis (what this work cites)
- get_citation_network: Build complete citation networks for visualization
- get_top_cited_works: Find the most influential papers in a field
Author & Institution Analysis
- search_authors: Find researchers with publication and citation metrics
- get_author_works: Analyze an author's publication history
- get_author_collaborators: Map co-authorship networks
- search_institutions: Find leading academic institutions
Research Landscape & Trends
- analyze_topic_trends: Track research evolution over time
- compare_research_areas: Compare activity across different fields
- get_trending_topics: Discover emerging research areas
- analyze_geographic_distribution: Map global research activity
Entity Lookup
- get_entity: Get detailed information for any OpenAlex entity
- search_sources: Find journals, conferences, and publication venues
Installation
Option 1: Install from npm (Recommended)
# Install globally
npm install -g openalex-research-mcp
# Or use directly with npx (no installation needed)
npx openalex-research-mcp
Option 2: Install from source
# Clone the repository
git clone https://github.com/oksure/openalex-research-mcp.git
cd openalex-research-mcp
# Install dependencies
npm install
# Build the TypeScript code
npm run build
Configuration
Environment Variables (Optional but Recommended)
Set your email to join the "polite pool" for better rate limits:
export OPENALEX_EMAIL="your.email@example.com"
For premium users with an API key:
export OPENALEX_API_KEY="your-api-key"
Claude Desktop Configuration
Add to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
If you installed via npm/npx:
{
  "mcpServers": {
    "openalex": {
      "command": "npx",
      "args": ["-y", "openalex-research-mcp"],
      "env": {
        "OPENALEX_EMAIL": "your.email@example.com"
      }
    }
  }
}
If you installed from source:
{
  "mcpServers": {
    "openalex": {
      "command": "node",
      "args": ["/absolute/path/to/openalex-research-mcp/build/index.js"],
      "env": {
        "OPENALEX_EMAIL": "your.email@example.com"
      }
    }
  }
}
TypingMind and Other MCP Clients
The same configuration format works for TypingMind and other MCP-compatible clients.
⚠️ TypingMind Users: If you encounter "tool_use_id" errors, see TYPINGMIND.md for troubleshooting steps and best practices. TL;DR: Start a new chat, request fewer results (5-10), and use specific queries with filters.
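Outside of desktop chat clients, the server can also be driven programmatically. The snippet below is a minimal sketch using the MCP TypeScript SDK over stdio; the search_works argument names (query, from_year, per_page) are assumptions and should be checked against the tool's actual schema (for example via listTools()).

// minimal sketch: connecting to the server over stdio and calling one tool
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "openalex-research-mcp"],
  env: { OPENALEX_EMAIL: "your.email@example.com" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// argument names below are assumptions; inspect the tool schema before relying on them
const result = await client.callTool({
  name: "search_works",
  arguments: { query: '"machine learning" AND (ethics OR fairness)', from_year: 2020, per_page: 10 },
});
console.log(result.content);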
Usage Examples
Example 1: Literature Review for AI Safety
Find the most influential papers on AI safety published since 2020
The assistant will use get_top_cited_works with appropriate filters to find highly cited papers in AI safety research. The tool filters for papers with at least 50 citations by default, so results focus on influential work. For the most impactful papers, you can specify a higher threshold such as min_citations: 200.
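Translated into tool arguments, that request might look like the sketch below; min_citations is documented above, while the other field names are assumptions about the tool's schema.

// hypothetical arguments for get_top_cited_works (only min_citations is documented above)
const args = {
  topic: "AI safety",   // assumed name of the subject filter
  from_year: 2020,
  min_citations: 200,   // raise the default threshold of 50 to surface only the most influential papers
};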
Example 2: Citation Network Analysis
Get the citation network for the paper "Attention Is All You Need" (DOI: 10.48550/arXiv.1706.03762)
The assistant will use get_citation_network to build a network of citing and referenced papers, enabling visualization of research impact.
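A hedged sketch of the corresponding arguments, assuming the tool accepts a DOI as the seed identifier and a depth option controlling how many citation hops to traverse:

// hypothetical arguments for get_citation_network
const args = {
  work_id: "10.48550/arXiv.1706.03762", // assumed parameter name; an OpenAlex work ID should also be accepted
  depth: 1,                             // assumed option: include direct citations and references only
};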
Example 3: Research Trend Analysis
Show me how quantum computing research has evolved over the past 10 years
The assistant will use analyze_topic_trends to group publications by year and show growth patterns.
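Assuming the tool takes a topic plus the year-range filters documented under Tool Reference, the call might look like:

// hypothetical arguments for analyze_topic_trends over a 10-year window
const args = { topic: "quantum computing", from_year: 2015, to_year: 2024 };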
Example 4: Finding Collaborators
Who are the main collaborators of Geoffrey Hinton?
The assistant will use get_author_collaborators to analyze co-authorship patterns.
Example 5: Comparative Research Analysis
Compare research activity in "deep learning", "reinforcement learning", and "federated learning" from 2018-2024
The assistant will use compare_research_areas to show relative publication volumes.
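A sketch of plausible arguments, assuming the areas are passed as an array alongside the usual year-range filters:

// hypothetical arguments for compare_research_areas
const args = {
  topics: ["deep learning", "reinforcement learning", "federated learning"], // assumed parameter name
  from_year: 2018,
  to_year: 2024,
};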
Example 6: Geographic Research Mapping
Which countries are leading research in climate change mitigation?
The assistant will use analyze_geographic_distribution to map research activity by country.
Response Format
The MCP server uses a two-tier response system to balance performance and completeness:
Summarized Responses (Search Results)
For list operations (search_works, get_work_citations, get_author_works, etc.), responses include only essential information:
Included:
- Core identifiers (ID, DOI, title)
- Publication metadata (year, date, type)
- Citation metrics (cited_by_count)
- First 5 authors (with authors_truncated flag if more exist)
- Primary topic classification
- Open access status and URLs
- Source/journal name
- Abstract preview (first 500 chars)
Excluded to reduce size:
- Full author lists beyond 5 authors
- All secondary topics/concepts
- Complete affiliation details
- Full reference lists
- Detailed bibliographic data
This optimization reduces response sizes by ~80-90% (from ~10 KB to ~1.7 KB per work), making the server compatible with all MCP clients including TypingMind and Claude Desktop.
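As a rough guide, a summarized work entry assembled from the fields listed above might look like the following interface; the exact property names in the server's output may differ.

// illustrative shape of a summarized work record (property names approximate)
interface SummarizedWork {
  id: string;                  // OpenAlex ID, e.g. "W2741809807"
  doi?: string;
  title: string;
  publication_year: number;
  publication_date?: string;
  type?: string;
  cited_by_count: number;
  authors: string[];           // first 5 authors only
  authors_truncated?: boolean; // set when more than 5 authors exist
  primary_topic?: string;
  open_access?: { is_oa: boolean; oa_url?: string };
  source?: string;             // journal or venue name
  abstract_preview?: string;   // first 500 characters of the abstract
}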
Full Details (get_work tool)
When you need complete information about a specific paper, use the get_work tool with a work ID or DOI. This returns:
Complete Author Information:
- ALL authors (not just first 5)
- Position indicators (first, middle, last author)
- Institutions and affiliations
- ORCID IDs
- Corresponding author flags
- Country information
Complete Content:
- Full abstract (reconstructed from OpenAlex index)
- All topics (not just primary)
- Complete bibliographic data
- Funding and grant information
- Keywords
- Complete reference and citation lists
Use Cases:
- Identifying PIs (often last author in biomedical fields)
- Finding corresponding authors
- Getting complete author affiliations
- Accessing full abstracts
- Comprehensive paper analysis
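In practice this is a single tool call with an identifier; the parameter name below is an assumption, and either a DOI or an OpenAlex work ID should work.

// hypothetical get_work call for full details on one paper
const result = await client.callTool({
  name: "get_work",
  arguments: { id: "10.48550/arXiv.1706.03762" }, // "id" is an assumed parameter name
});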
Tool Reference
Search Parameters
Most search tools support these common parameters:
- from_year / to_year: Filter by publication year range
- cited_by_count: Filter by citation count (e.g., ">100")
- is_oa: Filter for open access works only
- sort: Sort results (relevance_score, cited_by_count, publication_year)
- page / per_page: Pagination (max 200 per page)
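Combined, these parameters might be passed as in the sketch below; the query field name is an assumption, the rest follow the list above.

// sketch: combining the common parameters in a search_works call
const args = {
  query: "large language models",  // assumed name of the free-text search parameter
  from_year: 2022,
  to_year: 2024,
  cited_by_count: ">100",
  is_oa: true,
  sort: "cited_by_count",
  page: 1,
  per_page: 25,
};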
Boolean Search
The search_works and related tools support Boolean operators:
"machine learning" AND (ethics OR fairness)
"climate change" NOT "climate denial"
(AI OR "artificial intelligence") AND safety
Identifiers
OpenAlex accepts multiple identifier formats:
- OpenAlex IDs: W2741809807, A5023888391
- DOIs: 10.1371/journal.pone.0000000
- ORCIDs: 0000-0001-2345-6789
- URLs: Full OpenAlex URLs
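Each format can be passed wherever an identifier is expected; the parameter name in this sketch is an assumption.

// identifier formats accepted by the lookup tools (examples reuse the IDs listed above)
const byOpenAlexId = { id: "W2741809807" };
const byDoi = { id: "10.1371/journal.pone.0000000" };
const byUrl = { id: "https://openalex.org/W2741809807" };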
API Rate Limits
- Default: 100,000 requests/day, 10 requests/second
- Polite Pool (with email): Better performance and reliability
- Premium (with API key): Higher limits and exclusive filters
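Under the hood, joining the polite pool simply means attaching the email to every request; the OpenAlex API accepts it as a mailto query parameter, and premium keys are sent as api_key. A rough sketch of how the environment variables map onto requests (the server's actual request code may differ):

// rough sketch of how OPENALEX_EMAIL and OPENALEX_API_KEY reach the OpenAlex API
const params = new URLSearchParams({ search: "open science" });
if (process.env.OPENALEX_EMAIL) params.set("mailto", process.env.OPENALEX_EMAIL);      // polite pool
if (process.env.OPENALEX_API_KEY) params.set("api_key", process.env.OPENALEX_API_KEY); // premium access
const res = await fetch(`https://api.openalex.org/works?${params}`);
const data = await res.json();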
Development
# Watch mode for development
npm run watch
# Build
npm run build
# Run
npm start
Data Source
All data comes from OpenAlex, an open and comprehensive catalog of scholarly papers, authors, institutions, and more. OpenAlex indexes:
- 240+ million works (papers, books, datasets)
- 50,000+ new works added daily
- Full citation network and metadata
- Author affiliations and collaboration data
- Publication venues and impact metrics
Use Cases
This MCP server is ideal for:
- Literature Reviews: Systematically search and analyze research papers
- Citation Analysis: Understand research impact and influence
- Trend Analysis: Track how research topics evolve over time
- Collaboration Mapping: Identify research networks and partnerships
- Gap Analysis: Find understudied areas in research
- Comparative Studies: Compare research activity across fields
- Institution Benchmarking: Analyze research output by institution
- Author Profiling: Study researcher publication patterns
License
MIT
Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
Resources