Farnsworth gives Claude persistent memory and autonomous agent capabilities. It runs locally and provides Hierarchical Memory (Working -> Episodic -> Archival), a Multi-Model Swarm (combining Ollama models for better reasoning), and specialized agents for Web Browsing, Vision (CLIP), and Voice (Whisper).


🧠 Farnsworth: Your Claude Companion AI

<div align="center">

Give Claude superpowers: persistent memory, model swarms, multimodal understanding, and self-evolution.


Documentation · Roadmap · Contributing · Docker

</div>


🎯 What is Farnsworth?

Farnsworth is a companion AI system that integrates with Claude Code to give Claude capabilities it doesn't have on its own:

| Without Farnsworth | With Farnsworth |
|---|---|
| 🚫 Claude forgets everything between sessions | ✅ Claude remembers your preferences forever |
| 🚫 Claude is a single model | ✅ Model Swarm: 12+ models collaborate via PSO |
| 🚫 Claude can't see images or hear audio | ✅ Multimodal: vision (CLIP/BLIP) + voice (Whisper) |
| 🚫 Claude never learns from feedback | ✅ Claude evolves and adapts to you |
| 🚫 Single user only | ✅ Team collaboration with shared memory |
| 🚫 High RAM/VRAM requirements | ✅ Runs on <2GB RAM with efficient models |

All processing happens locally on your machine. Your data never leaves your computer.


✨ What's New in v0.5.0

  • 🐝 Model Swarm - PSO-based collaborative inference with multiple small models
  • 🔮 Proactive Intelligence - Anticipatory suggestions based on context and habits
  • 🚀 12+ New Models - Phi-4-mini, SmolLM2, Qwen3-4B, TinyLlama, BitNet 2B
  • Ultra-Efficient - Run on <2GB RAM with TinyLlama, Qwen3-0.6B
  • 🎯 Smart Routing - Mixture-of-Experts automatically picks best model per task
  • 🔄 Speculative Decoding - 2.5x speedup with draft+verify pairs
  • 📊 Hardware Profiles - Auto-configure based on your available resources

Previously Added (v0.4.0)

  • 🖼️ Vision Module - CLIP/BLIP image understanding, VQA, OCR
  • 🎤 Voice Module - Whisper transcription, speaker diarization, TTS
  • 📦 Docker Support - One-command deployment with GPU support
  • 👥 Team Collaboration - Shared memory pools, multi-user sessions

🐝 Model Swarm: Collaborative Multi-Model Inference

The Model Swarm system enables multiple small models to work together, achieving better results than any single model:

Swarm Strategies

| Strategy | Description | Best For |
|---|---|---|
| PSO Collaborative | Particle Swarm Optimization guides model selection | Complex tasks |
| Parallel Vote | Run 3+ models, vote on best response | Quality-critical |
| Mixture of Experts | Route to specialist per task type | General use |
| Speculative Ensemble | Fast model drafts, strong model verifies | Speed + quality |
| Fastest First | Start fast, escalate if confidence low | Low latency |
| Confidence Fusion | Weighted combination of outputs | High reliability |
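
For intuition, the sketch below shows the PSO Collaborative idea in miniature: each particle's position is a weight vector over candidate models, and the swarm drifts toward whichever mix scores best. It is a hypothetical illustration, not Farnsworth's actual implementation; the model names and the `fitness` callback are placeholders.

```python
# Hypothetical sketch of PSO-guided model selection; not Farnsworth's actual code.
# Each particle's position is a weight vector over candidate models, and `fitness`
# is whatever quality score the swarm assigns to the resulting response.
import random

MODELS = ["phi-4-mini", "deepseek-r1-1.5b", "qwen3-4b", "smollm2-1.7b"]

def pso_select(fitness, n_particles=8, iters=20, w=0.7, c1=1.4, c2=1.4):
    dim = len(MODELS)
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]     # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = fitness(pos[i])
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit > gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit

    # The highest-weighted model in the best position wins the routing decision.
    return MODELS[max(range(dim), key=lambda d: gbest[d])]

best = pso_select(lambda weights: -abs(weights[2] - 1.0))  # dummy fitness for demonstration
```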

🏗️ Architecture & Privacy

Farnsworth runs 100% locally on your machine.

  • No Server Costs: You do not need to pay for hosting.
  • Your Data: All memories and files stay on your computer.
  • How it connects: The Claude Desktop App spawns Farnsworth as a background process using the Model Context Protocol (MCP).

Supported Models (Jan 2025)

| Model | Params | RAM | Strengths |
|---|---|---|---|
| Phi-4-mini-reasoning | 3.8B | 6GB | Rivals o1-mini in math/reasoning |
| Phi-4-mini | 3.8B | 6GB | GPT-3.5 class, 128K context |
| DeepSeek-R1-1.5B | 1.5B | 4GB | o1-style reasoning, MIT license |
| Qwen3-4B | 4B | 5GB | MMLU-Pro 74%, multilingual |
| SmolLM2-1.7B | 1.7B | 3GB | Best quality at size |
| Qwen3-0.6B | 0.6B | 2GB | Ultra-light, 100+ languages |
| TinyLlama-1.1B | 1.1B | 2GB | Fastest, edge devices |
| BitNet-2B | 2B | 1GB | Native 1-bit, 5-7x CPU speedup |
| Gemma-3n-E2B | 2B eff. | 4GB | Multimodal (text/image/audio) |
| Phi-4-multimodal | 5.6B | 8GB | Vision + speech + reasoning |

Hardware Profiles

Farnsworth auto-configures based on your hardware:

minimal:     # <4GB RAM: TinyLlama, Qwen3-0.6B
cpu_only:    # 8GB+ RAM, no GPU: BitNet, SmolLM2
low_vram:    # 2-4GB VRAM: DeepSeek-R1, Qwen3-0.6B
medium_vram: # 4-8GB VRAM: Phi-4-mini, Qwen3-4B
high_vram:   # 8GB+ VRAM: Full swarm with verification
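
For reference, a profile picker along these lines could be built with psutil plus an optional torch check. This is an assumed sketch of the selection logic, not the code Farnsworth ships.

```python
# Assumed sketch of hardware-profile selection; the thresholds mirror the listing above,
# but the detection logic itself is illustrative only.
import psutil

def detect_profile() -> str:
    ram_gb = psutil.virtual_memory().total / 1e9
    vram_gb = 0.0
    try:
        import torch
        if torch.cuda.is_available():
            vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    except ImportError:
        pass  # no torch installed: treat as CPU-only

    if vram_gb >= 8:
        return "high_vram"
    if vram_gb >= 4:
        return "medium_vram"
    if vram_gb >= 2:
        return "low_vram"
    return "cpu_only" if ram_gb >= 8 else "minimal"

print(detect_profile())
```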

⚡ Quick Start

📦 Option 1: One-Line Install (Recommended)

Farnsworth is available on PyPI. This is the easiest way to get started.

pip install farnsworth-ai

Running the Server:

# Start the MCP server
farnsworth-server

# Or customize configuration
farnsworth-server --debug --port 8000

🐳 Option 2: Docker

git clone https://github.com/timowhite88/Farnsworth.git
cd Farnsworth
docker-compose -f docker/docker-compose.yml up -d

🛠️ Option 3: Source (For Developers)

git clone https://github.com/timowhite88/Farnsworth.git
cd Farnsworth
pip install -r requirements.txt

🔌 Configure Claude Code

Add to your Claude Code MCP settings (usually found in claude_desktop_config.json):

For PyPI Install:

{
  "mcpServers": {
    "farnsworth": {
      "command": "farnsworth-server",
      "args": [],
      "env": {
        "FARNSWORTH_LOG_LEVEL": "INFO"
      }
    }
  }
}

📖 Full Installation Guide →


🌟 Key Features

🧠 Advanced Memory System

Claude finally remembers! Multi-tier hierarchical memory:

| Memory Type | Description |
|---|---|
| Working Memory | Current conversation context |
| Episodic Memory | Timeline of interactions, "on this day" recall |
| Semantic Layers | 5-level abstraction hierarchy |
| Knowledge Graph | Entities, relationships, temporal edges |
| Archival Memory | Permanent vector-indexed storage |
| Memory Dreaming | Background consolidation during idle time |
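
Conceptually, the tiers behave like the toy structure below: new items land in working and episodic memory immediately, and a background consolidation pass ("memory dreaming") promotes them to archival storage. The class and method names are hypothetical, shown only to make the hierarchy concrete.

```python
# Conceptual sketch of the memory tiers; class and method names are hypothetical.
from collections import deque
from datetime import datetime, timezone

class HierarchicalMemory:
    def __init__(self, working_capacity=20):
        self.working = deque(maxlen=working_capacity)  # current conversation context
        self.episodic = []                             # time-stamped interaction timeline
        self.archival = []                             # permanent store (vector-indexed in the real system)

    def remember(self, content, tags=None):
        item = {"content": content, "tags": tags or [], "time": datetime.now(timezone.utc)}
        self.working.append(item)   # visible to the current session
        self.episodic.append(item)  # kept on the timeline for "on this day" recall

    def consolidate(self):
        """Background 'memory dreaming': promote episodic items into archival storage."""
        self.archival.extend(self.episodic)
        self.episodic.clear()

mem = HierarchicalMemory()
mem.remember("User prefers pytest over unittest", tags=["testing"])
mem.consolidate()
```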

🤖 Agent Swarm (11 Specialists)

Claude can delegate tasks to AI agents:

| Core Agents | Description |
|---|---|
| Code Agent | Programming, debugging, code review |
| Reasoning Agent | Logic, math, step-by-step analysis |
| Research Agent | Information gathering, summarization |
| Creative Agent | Writing, brainstorming, ideation |

| Advanced Agents (v0.3+) | Description |
|---|---|
| Planner Agent | Task decomposition, dependency tracking |
| Critic Agent | Quality scoring, iterative refinement |
| Web Agent | Intelligent browsing, form filling |
| FileSystem Agent | Project understanding, smart search |

| Collaboration (v0.3+) | Description |
|---|---|
| Agent Debates | Multi-perspective synthesis |
| Specialization Learning | Skill development, task routing |
| Hierarchical Teams | Manager coordination, load balancing |

🖼️ Vision Understanding (v0.4+)

See and understand images:

  • CLIP Integration - Zero-shot classification, image embeddings
  • BLIP Integration - Captioning, visual question answering
  • OCR - Extract text from images (EasyOCR)
  • Scene Graphs - Extract objects and relationships
  • Image Similarity - Compare and search images
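
To see what the CLIP side of this looks like on its own, here is a minimal zero-shot classification example using the Hugging Face transformers library. It shows the kind of backend the Vision module wraps; the image path and labels are placeholders, and Farnsworth's own wrapper API may differ.

```python
# Minimal zero-shot image classification with CLIP via Hugging Face transformers.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("screenshot.png")  # placeholder path
labels = ["a stack trace", "a UI mockup", "a database schema diagram"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```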

🎤 Voice Interaction (v0.4+)

Hear and speak:

  • Whisper Transcription - Real-time and batch processing
  • Speaker Diarization - Identify different speakers
  • Text-to-Speech - Multiple voice options
  • Voice Commands - Natural language control
  • Continuous Listening - Hands-free mode
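
Likewise, the Whisper backend can be exercised directly with the openai-whisper package. Farnsworth's own wrapper may differ; the audio file name below is a placeholder.

```python
# Transcription with the openai-whisper package, illustrative of the Voice backend.
import whisper

model = whisper.load_model("base")        # tiny / base / small / medium / large
result = model.transcribe("meeting.wav")  # placeholder path
print(result["text"])

# Segment-level timestamps are also available:
for seg in result["segments"]:
    print(f'[{seg["start"]:6.1f}s - {seg["end"]:6.1f}s] {seg["text"]}')
```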

👥 Team Collaboration (v0.4+)

Work together with shared AI:

  • Shared Memory Pools - Team knowledge bases
  • Multi-User Support - Individual profiles and preferences
  • Permission System - Role-based access control
  • Collaborative Sessions - Real-time multi-user interaction
  • Audit Logging - Compliance-ready access trails

📈 Self-Evolution

Farnsworth learns from your feedback and improves automatically:

  • Fitness Tracking - Monitors task success, efficiency, satisfaction
  • Genetic Optimization - Evolves better configurations over time
  • User Avatar - Builds a model of your preferences
  • LoRA Evolution - Adapts model weights to your usage
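
To make "genetic optimization" concrete, the toy loop below mutates configuration dictionaries and keeps the fittest ones each generation. The search space and fitness function are invented for illustration; Farnsworth's evolution engine is more involved.

```python
# Toy genetic-optimization loop over configuration dicts; search space, parameters,
# and the fitness callback are invented for illustration.
import random

SEARCH_SPACE = {
    "temperature": [0.2, 0.5, 0.8, 1.0],
    "context_items": [3, 5, 8, 12],
    "swarm_strategy": ["parallel_vote", "mixture_of_experts", "speculative"],
}

def random_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(cfg):
    child = dict(cfg)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])  # point mutation on one gene
    return child

def evolve(fitness, generations=10, pop_size=12, elite=4):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[:elite]
        offspring = [mutate(random.choice(survivors)) for _ in range(pop_size - elite)]
        population = survivors + offspring
    return max(population, key=fitness)

# Dummy fitness favoring more context and lower temperature, purely for demonstration:
best = evolve(lambda c: c["context_items"] - 10 * c["temperature"])
print(best)
```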

🔍 Smart Retrieval (RAG 2.0)

Self-refining retrieval that gets better at finding relevant information:

  • Hybrid Search - Semantic + BM25 keyword search
  • Query Understanding - Intent classification, expansion
  • Multi-hop Retrieval - Complex question answering
  • Context Compression - Token-efficient memory injection
  • Source Attribution - Confidence scoring
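
A minimal sketch of hybrid retrieval, assuming the rank_bm25 and sentence-transformers packages: BM25 keyword scores and embedding similarities are blended with a weight alpha. The documents, model choice, and weighting are illustrative only; Farnsworth's RAG 2.0 pipeline layers query understanding, multi-hop retrieval, and compression on top of this basic idea.

```python
# Minimal hybrid retrieval sketch (semantic + BM25). Real implementations normalize
# the two score distributions before blending; this keeps it simple on purpose.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "User prefers pytest over unittest",
    "Project uses PostgreSQL 16 with asyncpg",
    "Deploy target is a Raspberry Pi 5 with 8GB RAM",
]

bm25 = BM25Okapi([d.lower().split() for d in docs])   # keyword index
encoder = SentenceTransformer("all-MiniLM-L6-v2")     # semantic index
doc_emb = encoder.encode(docs, convert_to_tensor=True)

def hybrid_search(query, alpha=0.5):
    keyword = bm25.get_scores(query.lower().split())
    semantic = util.cos_sim(encoder.encode(query, convert_to_tensor=True), doc_emb)[0]
    blended = [alpha * float(k) + (1 - alpha) * float(s) for k, s in zip(keyword, semantic)]
    return sorted(zip(docs, blended), key=lambda pair: pair[1], reverse=True)

print(hybrid_search("which test framework does the user like?"))
```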

🛠️ Architecture

┌─────────────────────────────────────────────────────────────┐
│                      Claude Code                             │
│              (Your AI Programming Partner)                   │
└─────────────────────────────────────────────────────────────┘
                              │ MCP Protocol
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                  Farnsworth MCP Server                       │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐       │
│  │ Memory   │ │ Agent    │ │Evolution │ │Multimodal│       │
│  │ Tools    │ │ Tools    │ │ Tools    │ │ Tools    │       │
│  └──────────┘ └──────────┘ └──────────┘ └──────────┘       │
└─────────────────────────────────────────────────────────────┘
          │                │                │
          ▼                ▼                ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│   Memory     │  │    Agent     │  │  Multimodal  │
│   System     │  │    Swarm     │  │   Engine     │
│              │  │              │  │              │
│ • Episodic   │  │ • Planner    │  │ • Vision     │
│ • Semantic   │  │ • Critic     │  │   (CLIP/BLIP)│
│ • Knowledge  │  │ • Web        │  │ • Voice      │
│   Graph v2   │  │ • FileSystem │  │   (Whisper)  │
│ • Archival   │  │ • Debates    │  │ • OCR        │
│ • Sharing    │  │ • Teams      │  │ • TTS        │
└──────────────┘  └──────────────┘  └──────────────┘
          │                │                │
          ▼                ▼                ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│  Evolution   │  │Collaboration │  │   Storage    │
│   Engine     │  │   System     │  │   Backends   │
│              │  │              │  │              │
│ • Genetic    │  │ • Multi-User │  │ • FAISS      │
│   Optimizer  │  │ • Shared     │  │ • ChromaDB   │
│ • Fitness    │  │   Memory     │  │ • Redis      │
│   Tracker    │  │ • Sessions   │  │ • SQLite     │
│ • LoRA       │  │ • Permissions│  │              │
└──────────────┘  └──────────────┘  └──────────────┘
          │                │                │
          └────────────────┴────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────┐
│                   Model Swarm (v0.5+)                        │
│  ┌─────────────────────────────────────────────────────┐   │
│  │              PSO Collaborative Engine                │   │
│  │   • Particle positions = model configs              │   │
│  │   • Velocity = adaptation direction                 │   │
│  │   • Global/personal best tracking                   │   │
│  └─────────────────────────────────────────────────────┘   │
│                           │                                 │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐       │
│  │ Phi-4    │ │DeepSeek  │ │ Qwen3    │ │ SmolLM2  │       │
│  │ mini     │ │ R1-1.5B  │ │ 0.6B/4B  │ │ 1.7B     │       │
│  └──────────┘ └──────────┘ └──────────┘ └──────────┘       │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐       │
│  │TinyLlama │ │ BitNet   │ │ Gemma    │ │ Cascade  │       │
│  │ 1.1B     │ │ 2B(1-bit)│ │ 3n-E2B   │ │ (hybrid) │       │
│  └──────────┘ └──────────┘ └──────────┘ └──────────┘       │
└─────────────────────────────────────────────────────────────┘

🔧 Tools Available to Claude

Once connected, Claude has access to these tools:

| Tool | Description |
|---|---|
| `farnsworth_remember(content, tags)` | Store information in long-term memory |
| `farnsworth_recall(query, limit)` | Search and retrieve relevant memories |
| `farnsworth_delegate(task, agent_type)` | Delegate to a specialist agent |
| `farnsworth_evolve(feedback)` | Provide feedback for system improvement |
| `farnsworth_status()` | Get system health and statistics |
| `farnsworth_vision(image, task)` | Analyze images (caption, VQA, OCR) |
| `farnsworth_voice(audio, task)` | Process audio (transcribe, diarize) |
| `farnsworth_collaborate(action, ...)` | Team collaboration operations |
| `farnsworth_swarm(prompt, strategy)` | NEW: Multi-model collaborative inference |
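
As a quick smoke test, these tools can also be called from any MCP client. The snippet below uses the official MCP Python SDK to spawn the server over stdio and invoke farnsworth_remember and farnsworth_recall; the argument payloads are assumptions, so check the schemas the server actually reports.

```python
# Hypothetical smoke test using the official MCP Python SDK (pip install mcp).
# Tool names come from the table above; argument payloads are assumptions, so
# verify them against the schemas returned by session.list_tools().
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(command="farnsworth-server", args=[])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            await session.call_tool(
                "farnsworth_remember",
                {"content": "User prefers tabs over spaces", "tags": ["style"]},
            )
            result = await session.call_tool("farnsworth_recall", {"query": "formatting preferences"})
            print(result)

asyncio.run(main())
```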

📦 Docker Deployment

Multiple deployment profiles available:

# Basic deployment
docker-compose -f docker/docker-compose.yml up -d

# With GPU support
docker-compose -f docker/docker-compose.yml --profile gpu up -d

# With Ollama + ChromaDB
docker-compose -f docker/docker-compose.yml --profile ollama --profile chromadb up -d

# Development mode (hot reload + debugger)
docker-compose -f docker/docker-compose.yml --profile dev up -d

See docker/docker-compose.yml for all options.


📊 Dashboard

Farnsworth includes a Streamlit dashboard for visualization:

python main.py --ui
# Or with Docker:
docker-compose -f docker/docker-compose.yml --profile ui-only up -d

<details> <summary>📸 Dashboard Features</summary>

  • Memory Browser - Search and explore all stored memories
  • Episodic Timeline - Visual history of interactions
  • Knowledge Graph - 3D entity relationships
  • Agent Monitor - Active agents and task history
  • Evolution Dashboard - Fitness metrics and improvement trends
  • Team Collaboration - Shared pools and active sessions
  • Model Swarm Monitor - PSO state, model performance, strategy stats

</details>


🚀 Roadmap

See ROADMAP.md for detailed plans.

Completed ✅

  • v0.1.0 - Core memory, agents, evolution
  • v0.2.0 - Enhanced memory (episodic, semantic, sharing)
  • v0.3.0 - Advanced agents (planner, critic, web, filesystem, debates, teams)
  • v0.4.0 - Multimodal (vision, voice) + collaboration + Docker
  • v0.5.0 - Model Swarm + 12 new models + hardware profiles

Coming Next

  • 🎬 Video understanding and summarization
  • 🔐 Encryption at rest (AES-256)
  • ☁️ Cloud deployment templates (AWS, Azure, GCP)
  • 📊 Performance optimization (<100ms recall)

💡 Why "Farnsworth"?

Named after Professor Hubert J. Farnsworth from Futurama - a brilliant inventor who created countless gadgets and whose catchphrase "Good news, everyone!" perfectly captures what we hope you'll feel when using this tool with Claude.


📋 Requirements

| Minimum | Recommended | With Full Swarm |
|---|---|---|
| Python 3.10+ | Python 3.11+ | Python 3.11+ |
| 4GB RAM | 8GB RAM | 16GB RAM |
| 2-core CPU | 4-core CPU | 8-core CPU |
| 5GB storage | 20GB storage | 50GB storage |
| - | 4GB VRAM | 8GB+ VRAM |

Supported Platforms: Windows 10+, macOS 11+, Linux

Optional Dependencies:

  • ollama - Local LLM inference (recommended)
  • llama-cpp-python - Direct GGUF inference
  • torch - GPU acceleration
  • transformers - Vision/Voice models
  • playwright - Web browsing agent
  • whisper - Voice transcription

📄 License

Farnsworth is dual-licensed:

| Use Case | License |
|---|---|
| Personal / Educational / Non-commercial | FREE |
| Commercial (revenue > $1M or enterprise) | Commercial License Required |

See LICENSE for details. For commercial licensing, contact via GitHub.


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Priority Areas:

  • Video understanding module
  • Cloud deployment templates
  • Performance benchmarks
  • Additional model integrations
  • Documentation improvements

📚 Documentation


🔗 Research References

Model Swarm implementation inspired by:


⭐ Star History

If Farnsworth helps you, consider giving it a star! ⭐


<div align="center">

Built with ❤️ for the Claude community

"Good news, everyone!" - Professor Farnsworth

Report Bug · Request Feature · Get Commercial License

</div>
