Founder Intelligence Engine — MCP Server

A production-grade Model Context Protocol (MCP) server that transforms founder profiles from social media into actionable strategic intelligence through automated scraping, LLM analysis, and personalized news tracking, backed by vector search and caching.


Architecture

              ┌───────────────────────────┐
              │ MCP Client (Claude, etc.) │
              └─────────────▲─────────────┘
                            │ stdio
              ┌─────────────┴─────────────┐
              │     MCP Server (Node)     │
              │    3 registered tools     │
              └─────────────┬─────────────┘
          ┌─────────────────┼─────────────────┐
          ▼                 ▼                 ▼
   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐
   │    Apify    │   │    Groq     │   │ Embeddings  │
   │  Scraping   │   │     LLM     │   │     API     │
   └──────┬──────┘   └──────┬──────┘   └──────┬──────┘
          └─────────────────┼─────────────────┘
                            ▼
                ┌───────────────────────┐
                │       Supabase        │
                │ (Postgres + pgvector) │
                └───────────────────────┘

Data Flow

  1. collect_profile — Scrapes LinkedIn + Twitter via Apify → merges data → generates embedding → stores in Supabase
  2. analyze_profile — Fetches stored profile → calls Groq LLM for strategic analysis → caches result
  3. fetch_personalized_news — Checks cache freshness → if stale: generates search queries → scrapes Google News → embeds articles → ranks by cosine similarity → summarizes with Groq → stores; if fresh: returns cached articles
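
Step 3 ranks articles with plain cosine similarity between the profile embedding and each article embedding. A minimal sketch of what src/utils/similarity.js likely implements (the exact function name is an assumption):

// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; higher means the article is more relevant.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}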

Caching & Cost Optimization

Operation                  Cost    When It Runs
LinkedIn/Twitter scraping  High    Only on profile creation
Groq profile analysis      Medium  Once per profile (cached)
Google News + embeddings   High    Only when news > 24h stale
Read cached articles       Free    Every subsequent request

The fetch_history table tracks last_profile_scrape and last_news_fetch timestamps. The staleCheck.js module compares these against configurable thresholds.
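
A minimal sketch of that comparison, assuming a 24-hour default for news (the function signature and helper below are illustrative, not the module's actual API):

// Returns true when the last fetch is missing or older than maxAgeMs.
function isStale(lastFetchedAt, maxAgeMs = 24 * 60 * 60 * 1000) {
  if (!lastFetchedAt) return true; // never fetched yet
  return Date.now() - new Date(lastFetchedAt).getTime() > maxAgeMs;
}

// Usage against fetch_history.last_news_fetch:
//   const row = await getFetchHistory(profileId); // hypothetical helper
//   if (isStale(row?.last_news_fetch)) { /* run the full news pipeline */ }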


Setup

1. Prerequisites

  • Node.js 20+
  • Supabase project (with pgvector enabled)
  • API keys: Apify, Groq, OpenAI-compatible Embeddings

2. Install

cd /Users/praveenkumar/Desktop/mcp
cp .env.example .env
# Edit .env with your real keys
npm install

3. Database

Run the migration in your Supabase project's SQL Editor:

-- Paste contents of migrations/001_init.sql

Or via psql:

psql $DATABASE_URL < migrations/001_init.sql

4. Run MCP Server

node src/index.js

5. Configure MCP Client

Add to your MCP client config (e.g., Claude Desktop's claude_desktop_config.json):

{
  "mcpServers": {
    "founder-intelligence": {
      "command": "node",
      "args": ["/Users/praveenkumar/Desktop/mcp/src/index.js"],
      "env": {
        "SUPABASE_URL": "...",
        "SUPABASE_SERVICE_KEY": "...",
        "APIFY_API_TOKEN": "...",
        "GROQ_API_KEY": "...",
        "EMBEDDING_API_URL": "...",
        "EMBEDDING_API_KEY": "..."
      }
    }
  }
}
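
Once configured, the client invokes all three tools over stdio with standard MCP tools/call requests. For illustration, the wire format looks roughly like this (the argument names are hypothetical; the real input shape is defined by the Zod schema in src/tools/collectProfile.js):

// JSON-RPC 2.0 message an MCP client writes to the server's stdin.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "collect_profile",
    arguments: {
      linkedin_url: "https://www.linkedin.com/in/some-founder", // hypothetical field
      twitter_handle: "somefounder"                             // hypothetical field
    }
  }
};
process.stdout.write(JSON.stringify(request) + "\n");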

6. Background Worker (Optional)

# Single run (for cron)
node src/backgroundWorker.js

# Daemon mode
BACKGROUND_LOOP=true node src/backgroundWorker.js

Cron example (every 6 hours):

0 */6 * * * cd /app && node src/backgroundWorker.js >> /var/log/worker.log 2>&1
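
Daemon mode reduces to an env-gated loop around the single-run path. A sketch of the pattern (refreshAllProfiles and the 6-hour interval are assumptions, not the worker's actual code):

// Run once for cron, or keep re-running when BACKGROUND_LOOP=true.
async function refreshAllProfiles() {
  // hypothetical: find profiles with stale news and refresh them
}

async function main() {
  await refreshAllProfiles();
  if (process.env.BACKGROUND_LOOP === "true") {
    setTimeout(main, 6 * 60 * 60 * 1000); // re-run every 6 hours
  }
}

main().catch((err) => { console.error(err); process.exit(1); });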

Project Structure

/Users/praveenkumar/Desktop/mcp/
├── migrations/
│   └── 001_init.sql
├── src/
│   ├── db/
│   │   └── supabaseClient.js
│   ├── services/
│   │   ├── apifyService.js
│   │   ├── embeddingService.js
│   │   └── llmService.js
│   ├── tools/
│   │   ├── collectProfile.js
│   │   ├── analyzeProfile.js
│   │   └── fetchPersonalizedNews.js
│   ├── utils/
│   │   ├── similarity.js
│   │   └── staleCheck.js
│   ├── backgroundWorker.js
│   └── index.js
├── .env.example
├── .gitignore
├── .dockerignore
├── Dockerfile
├── package.json
└── README.md

Docker Deployment

Build & Run

docker build -t founder-intelligence-mcp .
docker run --env-file .env founder-intelligence-mcp

Background Worker Container

docker run --env-file .env founder-intelligence-mcp node src/backgroundWorker.js

Docker Compose (production)

version: '3.8'
services:
  mcp-server:
    build: .
    env_file: .env
    stdin_open: true
    restart: unless-stopped

  worker:
    build: .
    env_file: .env
    command: ["node", "src/backgroundWorker.js"]
    environment:
      - BACKGROUND_LOOP=true
    restart: unless-stopped

Scaling Strategy

Component          Strategy
MCP Server         One instance per client (stdio-based)
Background Worker  Single instance, or a Cloud Run Job on a schedule
Supabase           Connection pooling via Supavisor; read replicas for scale
Apify              Concurrent actor runs (up to account limit)
Embeddings         Batch requests (20 per call) to reduce round trips
Groq               Rate-limit aware, with Retry-After header handling
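
The Groq row, combined with the retry logic noted under Security, amounts to honoring Retry-After on 429 responses and falling back to exponential backoff. A minimal sketch, not the project's actual code:

// Retry on 429, preferring the server's Retry-After header and falling
// back to exponential backoff (1s, 2s, 4s, ...). Uses Node 20's global fetch.
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const retryAfter = Number(res.headers.get("retry-after"));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}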

For high-profile-count deployments:

  • Move background worker to a Cloud Run Job triggered by Cloud Scheduler
  • Use Supabase Edge Functions for scheduled refresh
  • Add a Redis cache layer for hot profile lookups
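
The 20-per-call embedding batching from the table above might look like the following (embedBatch is a hypothetical wrapper over the OpenAI-compatible endpoint configured via EMBEDDING_API_URL; the model name is a placeholder):

// One HTTP round trip per 20 texts instead of one per text.
async function embedAll(texts, batchSize = 20) {
  const vectors = [];
  for (let i = 0; i < texts.length; i += batchSize) {
    vectors.push(...(await embedBatch(texts.slice(i, i + batchSize))));
  }
  return vectors;
}

async function embedBatch(texts) {
  const res = await fetch(process.env.EMBEDDING_API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.EMBEDDING_API_KEY}`,
    },
    // "text-embedding-3-small" is a placeholder model name
    body: JSON.stringify({ input: texts, model: "text-embedding-3-small" }),
  });
  const { data } = await res.json();
  return data.map((d) => d.embedding);
}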

Security Best Practices

  1. Service-role key only on server side — never expose to clients
  2. All secrets via environment variables — no hardcoded keys
  3. Non-root Docker user — mcp user in container
  4. Input validation — Zod schemas on all tool inputs (see the sketch after this list)
  5. Row Level Security — enable RLS on Supabase tables for multi-tenant
  6. API token rotation — rotate Apify, Groq, and embedding keys periodically
  7. Rate limiting — built-in retry logic with exponential backoff
  8. No PII logging — profile data stays in Supabase, not console
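
A sketch of point 4's validation, with hypothetical field names (the real schemas live in src/tools/):

import { z } from "zod";

// Hypothetical input schema for collect_profile.
const CollectProfileInput = z.object({
  linkedin_url: z.string().url(),
  twitter_handle: z.string().min(1).optional(),
});

// parse() throws on malformed input before any paid scraping happens.
const input = CollectProfileInput.parse({
  linkedin_url: "https://www.linkedin.com/in/some-founder",
  twitter_handle: "somefounder",
});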

Cost Optimization

Service     Cost Driver          Mitigation
Apify       Actor compute units  Scrape only on creation; cache results
Groq        Token usage          Analyze once (cached); batch news summaries
Embeddings  API calls            Batch 20 at a time; embed once per article
Supabase    Row count + storage  Deduplicate articles by URL; prune old articles
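
The Supabase mitigation (dedupe by URL) maps naturally onto an upsert keyed on the article URL. A sketch using supabase-js, assuming an articles table with a unique constraint on its url column:

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_KEY
);

// Hypothetical row shape; insert new articles and skip existing URLs.
const articles = [
  { url: "https://news.example.com/a1", title: "Example article" },
];
await supabase
  .from("articles")
  .upsert(articles, { onConflict: "url", ignoreDuplicates: true });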

Expected cost per profile lifecycle:

  • Initial setup: ~$0.05–0.15 (scrape + embed + analyze)
  • Daily news refresh: ~$0.02–0.08 (scrape + embed + summarize top 10)
  • Cached reads: $0.00

Future Improvement Roadmap

  1. HTTP/SSE transport — support remote MCP clients over HTTP
  2. Multi-tenant profiles — user-scoped access with RLS
  3. Real-time alerts — push notifications when high-relevance news drops
  4. Competitor tracking — dedicated tool to monitor named competitors
  5. Founder network graph — map connections between analyzed founders
  6. Custom embedding models — fine-tuned models for startup/VC domain
  7. Article full-text extraction — deep content scraping for richer embeddings
  8. A/B prompt testing — experiment with different Groq prompts for analysis quality
  9. Dashboard UI — web interface for browsing intelligence feeds
  10. Webhook integrations — push intelligence to Slack, email, or CRM
