DevLens MCP

An open-source MCP server that provides AI assistants with structured, token-efficient web context through specialized tools for searching, scraping, and deep research. It streamlines developer workflows by delivering optimized Markdown content directly into IDEs like VS Code, eliminating the need for manual browser switching.

The MCP Server I Built to Kill Alt-Tab. Clean, fast web context, right in your IDE.

Like most developers, I was sick of context-switching between VS Code and the browser for documentation. That was my core frustration. So I built DevLens, an open-source MCP server, out of curiosity and because I wanted a custom solution more lightweight than existing tools.

The goal is simple: give your workspace AI (Copilot, Claude, etc.) web access that is structured and token-efficient. DevLens delivers twelve specialized tools via a three-layered architecture built for power and easy deployment.

What is MCP and DevLens's Role?

The MCP (Model Context Protocol) is the standard that lets your AI assistant call external tools (web search, scraping) to act beyond its training data. It gives the AI real-world, real-time power.

DevLens's Role is to be the most efficient implementation for web research. DevLens handles the intelligence (Smart Orchestration) and formats the results into clean Markdown. This ensures your workspace AI receives the precise context it needs without the clutter or high token cost of raw HTML.

Why DevLens (Solving the Flow Problem)

DevLens is built on two principles to solve context loss: Technical Composability and Token Efficiency.

Built for the Developer Workflow

  • The Problem Solved: No more useless switching between browser and editor. Your coding flow stays intact.
  • The Technical Edge: Our layered architecture uses simple primitives that combine powerfully. This means more precise and less costly workflows than existing "monolithic" solutions.
  • LLM Context Optimal: Our clean, token-optimized Markdown output is about 70% smaller than raw HTML. This is the secret for fast, accurate AI results in your chat.
  • Seamless IDE Integration: Designed to pair perfectly with VS Code Copilot and GitHub Copilot. Web research is injected directly into your editor.
  • Deployment Ready: Use it locally for your own work, or deploy it on a server to share with others.
  • Smart Orchestration — The system chooses the best tool sequence, automatically.
  • Zero Configuration — Install, run. Done.
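To see why clean text beats raw HTML on token cost, here is a minimal, illustrative sketch that strips markup and page chrome with only the standard library. This is not DevLens's actual pipeline (which uses crawl4ai and produces Markdown); it just demonstrates the size reduction the bullet above is describing:

```python
from html.parser import HTMLParser

# Tags whose contents are chrome/noise rather than content.
SKIP_TAGS = {"script", "style", "nav", "footer"}

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts: list[str] = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def to_plain_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

raw = ("<html><head><style>body{color:red}</style></head><body>"
       "<nav><a href='/'>Home</a></nav><h1>Title</h1>"
       "<p>Useful content.</p><script>trackUser();</script></body></html>")
clean = to_plain_text(raw)  # keeps just the headline and paragraph text
```

Everything the model does not need (styles, scripts, navigation) is gone before it ever costs a token.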

Developer Personas & Use Cases

| Persona | Problem Solved (The Pain) | DevLens Solution (The Win) |
| --- | --- | --- |
| Nina, the Frontend Developer | Needs a quick fix (e.g., that one CORS config snippet) but hates opening 5 Stack Overflow tabs. | Uses suggest_workflow or search_web + summarize_page to get the validated code snippet instantly in chat. Flow maintained. |
| Kenji, the Staff Engineer | Must compare three serverless vendors for an architecture decision. Needs a single, definitive data dump. | Uses deep_dive to fetch, aggregate, and analyze complex data concurrently. The LLM receives the full, pre-processed report. |
| Sarah, the DevOps Specialist | Has to manually check third-party deployment guides every week for silent, breaking changes. | Uses monitor_changes to passively track content hashes on critical docs, sending an alert only when something actually changes. |

Tools

DevLens gives you 12 specialized tools—think of it like a camera bag of lenses. Pick one, or let the smart system auto-select:

| Layer | Metaphor | Focus | Tools |
| --- | --- | --- | --- |
| Primitives | Basic Lenses | Precision & Reliability | search_web, scrape_url, crawl_docs, summarize_page, extract_links |
| Composed | Multi-Lens Systems | Convenience & Aggregation | deep_dive, compare_sources, find_related, monitor_changes |
| Meta | Auto-Focus Intelligence | Guidance & Optimization | suggest_workflow, classify_research_intent, get_server_docs |

Quick Start (Seriously, It's Fast)

Prerequisites

  • Python 3.12 or newer
  • uv package manager

Installation

# Clone the repository
git clone https://github.com/Y4NN777/devlens-mcp.git
cd devlens-mcp

# Install dependencies
uv sync

# Run the server (STDIO mode)
uv run python -m devlens.server

MCP Client Configuration

Claude Desktop

Add this to claude_desktop_config.json:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Linux: ~/.config/claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Option 1: Using launch script (Recommended - Cross-Platform)

{
  "mcpServers": {
    "devlens": {
      "command": "/absolute/path/to/devlens-mcp/launch_mcp.sh",
      "args": []
    }
  }
}

Option 2: Direct uv command

{
  "mcpServers": {
    "devlens": {
      "command": "uv",
      "args": ["run", "python", "-m", "devlens.server"],
      "cwd": "/absolute/path/to/devlens-mcp"
    }
  }
}

VS Code Copilot (Recommended - Cross-Platform)

Create .vscode/mcp.json in your workspace:

{
  "servers": {
    "devlens": {
      "command": "/absolute/path/to/devlens-mcp/launch_mcp.sh",
      "args": []
    }
  }
}

Note: The launch_mcp.sh script is cross-platform and automatically:

  • Detects your OS (Linux/macOS/Windows)
  • Locates the uv installation (checks ~/.local/bin/uv, ~/.cargo/bin/uv, or the system PATH)
  • Uses the correct Python from .venv (.venv/bin/python on Unix, .venv/Scripts/python.exe on Windows)

No manual configuration needed.

Other MCP Clients

Use STDIO transport:

uv run python -m devlens.server

Verify Installation

Confirm that the server loads:

# Test basic functionality
uv run python -c "from devlens.server import mcp; print('DevLens server loaded successfully')"

Usage Examples

Manual Tool Usage

# Simple search
search_web("FastAPI tutorial", limit=5)

# Scrape with metadata
scrape_url("https://docs.python.org", include_metadata=True)

# Multi-source research
deep_dive("Python async best practices", depth=5, parallel=True)

# Compare perspectives
compare_sources("FastAPI vs Flask", ["url1", "url2"])

Smart Orchestration

# Let DevLens recommend the workflow
suggest_workflow("How to integrate payment API in Burkina Faso?")

# Returns:
# - Primary intent: quick_answer (50% confidence)
# - Workflow: [search_web(limit=3), scrape_url]
# - Suggested parameters optimized for intent
# - Fallback strategies if tools fail
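The orchestration idea can be sketched with a toy keyword classifier. The intent names and workflow steps mirror the examples in this README, but the heuristics and mappings below are purely illustrative, not DevLens's real logic:

```python
# Invented keyword heuristics; the real classifier is more sophisticated.
INTENT_KEYWORDS = {
    "comparison": ("compare", " vs ", "versus"),
    "deep_research": ("comprehensive", "in-depth", "guide"),
    "documentation": ("documentation", "api reference"),
}

WORKFLOWS = {
    "comparison": ["search_web", "scrape_url", "compare_sources"],
    "deep_research": ["search_web", "deep_dive"],
    "documentation": ["crawl_docs"],
    "quick_answer": ["search_web", "scrape_url"],  # default fallback
}

def classify_intent(query: str) -> str:
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "quick_answer"

def suggest_workflow(query: str) -> list[str]:
    return WORKFLOWS[classify_intent(query)]
```

The point is the shape of the output: the assistant receives a concrete tool sequence rather than having to improvise one.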

With Context

# Provide known URLs to skip search
context = ResearchContext(known_urls=["https://docs.stripe.com"])
suggest_workflow("Stripe payment integration guide", context)

# DevLens adapts:
# - Skips search (URLs already known)
# - Goes straight to crawl_docs or scrape_url
# - Optimizes parameters based on intent
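A rough sketch of that context-driven planning, using a hypothetical ResearchContext and plan_workflow (the names are borrowed from the usage example above; the fields and logic are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ResearchContext:
    # Hypothetical shape, inferred from the usage example above.
    known_urls: list[str] = field(default_factory=list)
    failed_tools: set[str] = field(default_factory=set)

def plan_workflow(ctx: ResearchContext) -> list[str]:
    steps: list[str] = []
    if not ctx.known_urls:
        steps.append("search_web")  # no sources yet, so discover some
    # If scraping already failed this session, fall back to crawling.
    steps.append("crawl_docs" if "scrape_url" in ctx.failed_tools else "scrape_url")
    return steps
```

With a known URL the search step disappears entirely, which is exactly the adaptation the comments above describe.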

Architecture

DevLens uses a simple, effective layered architecture—the smart bits guide the reliable bits.

  • Meta Layer (Intelligence) -> suggests workflows
  • Composed Layer (Convenience) -> combines primitives
  • Primitive Layer (Reliability) -> uses adapters
  • External Services (The Actual Internet)
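The composability principle in miniature: a composed tool is just primitives glued together, adding orchestration rather than new capability. The bodies below are stand-ins, not the real implementations:

```python
# Stand-in primitives; the real tools hit the network.
def search_web(query: str, limit: int = 5) -> list[str]:
    return [f"https://example.com/result-{i}" for i in range(limit)]

def scrape_url(url: str) -> str:
    return f"# Content of {url}"

# A composed tool adds no new capability, just orchestration.
def deep_dive(query: str, depth: int = 3) -> str:
    urls = search_web(query, limit=depth)
    return "\n\n".join(scrape_url(u) for u in urls)
```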

Key Design Principles:

  • Composability — Tiny tools that handle huge tasks.
  • Intelligence at the Edges — Smart brain decides, reliable primitives execute.
  • Token Optimization — Maximum context, minimum token cost.
  • Fail Explicitly — No silent failures. We tell you exactly what broke.
  • Developer Velocity First — If it doesn't make you faster, we don't build it.

See ARCHITECTURE.md for the deep dive.

Library Stack (The Ingredients)

| Layer | Library | Purpose |
| --- | --- | --- |
| MCP | fastmcp | MCP protocol implementation |
| Scraping | crawl4ai | JavaScript-enabled web scraping |
| Search | ddgs | DuckDuckGo search (no API key) |
| HTTP | httpx | Fallback HTTP client |
| Validation | pydantic | Input/output schemas |

Features

Intelligent Scraping

  • Exponential backoff retry (because the internet is flaky)
  • Metadata extraction (+41% information density)
  • Smart filtering (skips all the login/signup/spam garbage)
  • Markdown conversion (clean text for the AI)
  • Content change detection via hashing
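Two of those bullets are easy to sketch: exponential backoff and hash-based change detection. This is a generic illustration, not DevLens's code:

```python
import hashlib
import time

def fetch_with_retry(fetch, url, retries=3, base_delay=1.0):
    """Call fetch(url), retrying with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise  # fail explicitly: no silent failures
            time.sleep(base_delay * (2 ** attempt))

def content_hash(markdown: str) -> str:
    """Hash cleaned content; a changed hash means the page changed."""
    return hashlib.sha256(markdown.encode("utf-8")).hexdigest()
```

Comparing stored hashes against fresh ones is how a monitor_changes-style tool can stay silent until a doc actually changes.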

Multi-Source Research

  • Parallel content fetching (3x faster)
  • Domain diversity filtering
  • Comparative analysis across sources
  • Progress tracking with success/failure reporting
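Parallel fetching with per-URL success/failure reporting can be sketched with asyncio.gather; a toy fetch stands in for the real scraper:

```python
import asyncio

async def fetch(url: str) -> str:
    # Toy fetcher: fails for URLs containing "bad".
    if "bad" in url:
        raise ValueError(f"failed: {url}")
    return f"content of {url}"

async def fetch_all(urls: list[str]):
    # return_exceptions=True keeps one failure from sinking the whole batch.
    results = await asyncio.gather(*(fetch(u) for u in urls), return_exceptions=True)
    ok = [r for r in results if not isinstance(r, Exception)]
    failed = [u for u, r in zip(urls, results) if isinstance(r, Exception)]
    return ok, failed

ok, failed = asyncio.run(fetch_all(["https://a", "https://bad"]))
```

Every URL is fetched concurrently, and the caller gets both the successes and an explicit list of what broke.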

Smart Orchestration

  • 7 research intent patterns (e.g., quick_answer, deep_research, comparison)
  • Dynamic workflow generation based on context
  • Parameter optimization (limits/depths automatically set for intent)
  • Fallback strategies when tools fail
  • LRU cache for insane speed (200 entries)
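The 200-entry LRU cache maps directly onto Python's functools.lru_cache; the classifier body below is a placeholder, not the real one:

```python
from functools import lru_cache

@lru_cache(maxsize=200)  # the README mentions a 200-entry cache
def classify_research_intent(query: str) -> str:
    # Placeholder for the real (expensive) classifier.
    return "comparison" if "compare" in query.lower() else "quick_answer"

classify_research_intent("compare fastapi vs flask")  # miss: computed
classify_research_intent("compare fastapi vs flask")  # hit: served from cache
info = classify_research_intent.cache_info()          # hits=1, misses=1
```

Repeated queries skip the classifier entirely, which is where the sub-50ms suggest_workflow numbers come from.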

Context Awareness

  • Tracks known URLs (no redundant searches)
  • Records failed tools (so the AI doesn't try the same thing twice)
  • Adapts workflows based on research state

Performance (Proof We Aren't Lying)

| Tool | Duration | Cost | Notes |
| --- | --- | --- | --- |
| search_web | 1-2s | Low | DuckDuckGo API |
| scrape_url | 2-5s | Low | Single page fetch |
| crawl_docs | 10-60s | High | Multi-page crawling (big tasks take big time) |
| deep_dive | 5-15s | Medium | Parallel scraping |
| suggest_workflow | <50ms | Minimal | LRU cached |

Documentation

  • REQUIREMENTS.md — Project scope and technical requirements
  • ARCHITECTURE.md — Software architecture and design philosophy
  • TOOLS.md — Comprehensive tool reference with examples

Philosophy

The DevLens Philosophy: Make the hard stuff simple and fast.

  • Composability — Build with small, focused primitives that combine
  • Intelligence at the Edges — Smart brain, reliable hands
  • Developer Velocity — If setup takes more than 5 minutes, it's too much.
  • Token Economy — Efficiency is currency.
  • Fail Explicitly — We tell you when something breaks.
  • Context-Aware — It remembers what happened.

Read the full philosophy in ARCHITECTURE.md.

Examples (In Action)

Quick Answer

Query: "What is FastAPI?"
-> suggest_workflow thinks: quick_answer (50%)
-> Workflow: search_web(limit=3) -> scrape_url
-> Result: Fast answer from the top source. Done.

Deep Research

Query: "Comprehensive guide to mobile payments in Africa"
-> suggest_workflow thinks: deep_research (75%)
-> Workflow: search_web(limit=10) -> deep_dive(depth=10, parallel=true)
-> Result: Multi-source aggregated report, ready for planning.

Documentation Learning

Query: "FastAPI documentation" + known_url
-> suggest_workflow thinks: documentation (80%)
-> Workflow: crawl_docs(max_pages=25) (skips search, goes straight to the docs)
-> Result: Complete documentation with TOC.

Comparison Research

Query: "Compare FastAPI vs Flask"
-> suggest_workflow thinks: comparison (65%)
-> Workflow: search_web -> scrape_url (parallel) -> compare_sources
-> Result: Side-by-side analysis ready for your pull request.

Contributing

Contributions welcome! Keep it simple:

  • Add, don't modify — New tools over changing existing ones
  • Document why — Explain your design choices
  • Test everything — All tools must have validation tests
  • Keep it simple — Clarity over cleverness

License

MIT License - See LICENSE for details.

Name origin: DevLens = A developer's lens for viewing the web. Different tools are different lenses (wide-angle, macro, zoom), with smart auto-focus (orchestration) that picks the right lens automatically.
