Hebbian Mind Enterprise
<!-- mcp-name: io.github.For-Sunny/hebbian-mind-enterprise -->
<a href="https://glama.ai/mcp/servers/For-Sunny/hebbian-mind-enterprise"> <img width="380" height="200" src="https://glama.ai/mcp/servers/For-Sunny/hebbian-mind-enterprise/badge" /> </a>
Memory that learns. Connections that fade.
An MCP server that builds knowledge graphs through use. Concepts connect when they activate together. Unused connections decay. The more you use it, the smarter it gets.
What It Does
- Associative Memory - Save content. Query content. Related concepts surface automatically.
- Hebbian Learning - Edges strengthen through co-activation. No manual linking required.
- Concept Nodes - 100+ pre-defined enterprise concepts across Systems, Security, Data, Operations, and more.
- MCP Native - Works with Claude Desktop, Claude Code, any MCP-compatible client.
Installation
Three paths. Pick what fits.
Windows (Native)
# Clone the repo
git clone https://github.com/For-Sunny/hebbian-mind-enterprise.git
cd hebbian-mind-enterprise
# Install with pip
pip install -e .
# Verify
python -m hebbian_mind.server
The server runs on stdio. Press Ctrl+C to stop.
Linux / macOS (Native)
# Clone the repo
git clone https://github.com/For-Sunny/hebbian-mind-enterprise.git
cd hebbian-mind-enterprise
# Install with pip (use a virtual environment if you prefer)
pip install -e .
# Verify
python -m hebbian_mind.server
On Linux, enabling HEBBIAN_MIND_RAM_DISK automatically uses /dev/shm as the RAM disk.
Docker (Teams / Enterprise)
# Clone the repo
git clone https://github.com/For-Sunny/hebbian-mind-enterprise.git
cd hebbian-mind-enterprise
# Copy environment template
cp .env.example .env
# Build and start
docker-compose up -d
# View logs
docker-compose logs -f hebbian-mind
For RAM disk optimization:
docker-compose --profile ramdisk up -d
Claude Desktop Integration
Add to your claude_desktop_config.json:
Native Install:
{
  "mcpServers": {
    "hebbian-mind": {
      "command": "python",
      "args": ["-m", "hebbian_mind.server"]
    }
  }
}
Docker Install:
{
  "mcpServers": {
    "hebbian-mind": {
      "command": "docker",
      "args": ["exec", "-i", "hebbian-mind", "python", "-m", "hebbian_mind.server"]
    }
  }
}
Restart Claude Desktop. The tools appear automatically.
Configuration
Environment variables control behavior. Set them before running, or use .env with Docker.
Core Settings
| Variable | Default | Description |
|---|---|---|
| HEBBIAN_MIND_BASE_DIR | ./hebbian_mind_data | Data storage location |
| HEBBIAN_MIND_RAM_DISK | false | Enable RAM disk for faster reads |
| HEBBIAN_MIND_RAM_DIR | /dev/shm/hebbian_mind (Linux) | RAM disk path |
Hebbian Learning
| Variable | Default | Description |
|---|---|---|
| HEBBIAN_MIND_THRESHOLD | 0.3 | Activation threshold (0.0-1.0) |
| HEBBIAN_MIND_MAX_WEIGHT | 10.0 | Maximum edge weight cap |
Deprecated:
HEBBIAN_MIND_EDGE_FACTOR is no longer used. The asymptotic learning formula (LEARNING_RATE = 0.1) replaced the old harmonic strengthening factor. The env var still loads without error but has no effect on edge weights.
Optional Integrations
| Variable | Default | Description |
|---|---|---|
| HEBBIAN_MIND_FAISS_ENABLED | false | Enable FAISS semantic search |
| HEBBIAN_MIND_FAISS_HOST | localhost | FAISS tether host |
| HEBBIAN_MIND_FAISS_PORT | 9998 | FAISS tether port |
| HEBBIAN_MIND_PRECOG_ENABLED | false | Enable PRECOG concept extraction |
MCP Tools
Eight tools. All available through any MCP client.
save_to_mind
Store content with automatic concept activation and edge strengthening.
{
  "content": "Microservices architecture enables independent deployment",
  "summary": "Optional summary",
  "source": "ARCHITECTURE_DOCS",
  "importance": 0.8
}
Activates matching concept nodes. Strengthens edges between co-activated concepts.
query_mind
Query memories by concept nodes.
{
  "nodes": ["architecture", "deployment"],
  "limit": 20
}
Returns memories that activated those concepts.
analyze_content
Preview which concepts would activate without saving.
{
  "content": "API authentication using JWT tokens",
  "threshold": 0.3
}
get_related_nodes
Get concepts connected via Hebbian edges.
{
  "node": "security",
  "min_weight": 0.1
}
Returns the neighborhood graph - concepts that have fired together with "security".
list_nodes
List all concept nodes, optionally filtered.
{
  "category": "Security"
}
mind_status
Server health and statistics.
{}
Returns node count, edge count, memory count, strongest connections, dual-write status.
faiss_search
Semantic search via external FAISS tether (if enabled).
{
  "query": "authentication patterns",
  "top_k": 10
}
faiss_status
Check FAISS tether connection status.
Temporal Decay
Memories and edges both decay over time unless reinforced.
Memory decay: Same formula as CASCADE and PyTorch Memory. Memories lose effective importance over time. Accessed memories reset their clock. Immortal memories (importance >= 0.9) never decay.
Edge decay: Connections between concepts weaken if not co-activated. This is the inverse of Hebbian learning -- "neurons that stop firing together, stop wiring together." Edges decay toward a minimum weight (0.1), never to zero, preserving the structure of learned associations.
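As a rough illustration of this decay model (the exact formula lives in the server code and is shared with CASCADE and PyTorch Memory; the function names and the idle-time unit here are illustrative, assuming simple exponential decay since last access):

import math

DECAY_BASE_RATE = 0.01        # HEBBIAN_MIND_DECAY_BASE_RATE
IMMORTAL_THRESHOLD = 0.9      # HEBBIAN_MIND_DECAY_IMMORTAL_THRESHOLD
EDGE_DECAY_RATE = 0.005       # HEBBIAN_MIND_EDGE_DECAY_RATE
EDGE_MIN_WEIGHT = 0.1         # HEBBIAN_MIND_EDGE_DECAY_MIN_WEIGHT

def effective_importance(importance, time_idle):
    # Immortal memories (importance >= 0.9) never decay
    if importance >= IMMORTAL_THRESHOLD:
        return importance
    # Exponential decay since last access; accessing the memory resets time_idle
    return importance * math.exp(-DECAY_BASE_RATE * time_idle)

def decayed_edge_weight(weight, time_idle):
    # Edges decay toward a floor (0.1), never to zero
    return max(weight * math.exp(-EDGE_DECAY_RATE * time_idle), EDGE_MIN_WEIGHT)

Memories whose effective importance falls below HEBBIAN_MIND_DECAY_THRESHOLD are hidden rather than deleted, as described below.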
Decay Configuration
| Variable | Default | Description |
|---|---|---|
| HEBBIAN_MIND_DECAY_ENABLED | true | Enable memory decay |
| HEBBIAN_MIND_DECAY_BASE_RATE | 0.01 | Base exponential decay rate |
| HEBBIAN_MIND_DECAY_THRESHOLD | 0.1 | Memories below this are hidden |
| HEBBIAN_MIND_DECAY_IMMORTAL_THRESHOLD | 0.9 | Memories at or above this never decay |
| HEBBIAN_MIND_DECAY_SWEEP_INTERVAL | 60 | Minutes between sweep cycles |
| HEBBIAN_MIND_EDGE_DECAY_ENABLED | true | Enable edge weight decay |
| HEBBIAN_MIND_EDGE_DECAY_RATE | 0.005 | Edge decay rate (slower than memory decay) |
| HEBBIAN_MIND_EDGE_DECAY_MIN_WEIGHT | 0.1 | Minimum edge weight floor |
Decayed memories are hidden from query_mind by default. Pass include_decayed: true to retrieve them.
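For example, a query_mind call that also returns decayed memories might look like this (include_decayed is the flag named above; the other fields follow the earlier query_mind example):
{
  "nodes": ["architecture", "deployment"],
  "limit": 20,
  "include_decayed": true
}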
Architecture
Dual-Write Pattern
- Write: Disk first (crash-safe) -> RAM second (speed)
- Read: RAM (instant) with disk fallback
- Startup: Copies disk to RAM if RAM is empty
Disk commits before RAM updates. If the RAM write fails, the data is already on disk -- the failure gets logged but nothing is lost. This order guarantees durability. A power loss mid-write never leaves you with RAM-only data that never reached disk.
RAM disk is optional. Without it, reads and writes go directly to SQLite on disk.
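A minimal sketch of this write path, assuming SQLite connections for both stores (function, table, and column names are illustrative, not the server's actual API):

import sqlite3

def dual_write(record, disk_db: sqlite3.Connection, ram_db: sqlite3.Connection | None = None):
    # 1. Disk first: once this commit returns, the record is crash-safe
    disk_db.execute("INSERT INTO memories (content, summary, importance) VALUES (?, ?, ?)", record)
    disk_db.commit()
    # 2. RAM second: a failure here is logged, never fatal -- disk already has the data
    if ram_db is not None:
        try:
            ram_db.execute("INSERT INTO memories (content, summary, importance) VALUES (?, ?, ?)", record)
            ram_db.commit()
        except sqlite3.Error as exc:
            print(f"RAM mirror failed (data safe on disk): {exc}")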
Concept Nodes
100+ pre-defined nodes across categories:
- Systems & Architecture - service, api, component, integration
- Security - authentication, authorization, encryption, access
- Data & Memory - database, cache, persistence, schema
- Logic & Reasoning - pattern, rule, validation, analysis
- Operations - workflow, pipeline, monitoring, health
- Quality - performance, reliability, scalability, test
Nodes have keywords and prototype phrases. Content activates nodes when keywords match.
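A simplified sketch of keyword-based activation, assuming a node is a set of keywords and its score is the fraction of keywords found in the content (the server's real scoring, including prototype phrases, may differ):

def activation_score(content: str, keywords: list[str]) -> float:
    # Fraction of the node's keywords that appear in the content
    text = content.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords) if keywords else 0.0

def activated_nodes(content: str, nodes: dict[str, list[str]], threshold: float = 0.3) -> dict[str, float]:
    # Keep nodes whose score clears HEBBIAN_MIND_THRESHOLD
    return {name: score for name, kws in nodes.items()
            if (score := activation_score(content, kws)) >= threshold}

activated_nodes("API authentication using JWT tokens",
                {"security": ["authentication", "encryption", "access", "token"],
                 "api": ["api", "endpoint", "rest"]})
# -> {"security": 0.5, "api": 0.33...}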
Hebbian Learning
When concepts co-activate (appear in the same saved content):
- Edge created if none exists (initial weight: 0.15)
- Existing edges strengthen via asymptotic formula:
delta = (MAX_WEIGHT - current_weight) * LEARNING_RATE
new_weight = current_weight + delta
Each co-activation closes 10% of the gap between current weight and MAX_WEIGHT (10.0). An edge at 2.0 gains 0.8. An edge at 9.0 gains 0.1. Edges approach the ceiling but never hit it -- no saturation, no runaway weights.
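The same update written as a runnable sketch (constants taken from the defaults above; the function name is illustrative):

MAX_WEIGHT = 10.0        # HEBBIAN_MIND_MAX_WEIGHT default
LEARNING_RATE = 0.1
INITIAL_WEIGHT = 0.15    # weight of a newly created edge

def strengthen(weight: float) -> float:
    # Close 10% of the remaining gap to the ceiling
    return weight + (MAX_WEIGHT - weight) * LEARNING_RATE

strengthen(2.0)   # 2.8 -- gains 0.8
strengthen(9.0)   # 9.1 -- gains 0.1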
Combined with time-based decay (idle edges lose 2% per tick) and homeostatic scaling (total edge weight per node stays near 50.0), the graph self-regulates. Active paths strengthen. Neglected paths fade. The topology stays meaningful.
"Neurons that fire together, wire together."
Troubleshooting
Server won't start
Check Python version (requires 3.10+):
python --version
Verify MCP SDK installed:
pip install mcp
No activations on save
Content must match node keywords above threshold. Lower the threshold:
export HEBBIAN_MIND_THRESHOLD=0.2
Or check what would activate:
{"tool": "analyze_content", "content": "your text here"}
Docker container won't connect
Ensure container is running:
docker ps | grep hebbian-mind
Check logs:
docker-compose logs hebbian-mind
High memory with RAM disk
Check node/edge counts via mind_status. Consider increasing HEBBIAN_MIND_THRESHOLD to activate fewer nodes, or lowering HEBBIAN_MIND_MAX_WEIGHT to limit edge growth.
Performance
| Metric | Value | Notes |
|---|---|---|
| Save latency | <10ms | Includes activation, Hebbian strengthening, and commit |
| Query latency | <5ms | Node lookup + JOIN + sort |
| RAM disk reads | <1ms | When HEBBIAN_MIND_RAM_DISK=true |
| Analyze latency | <1ms | Content analysis without save |
| Memory per node | ~1KB | SQLite row with keywords and phrases |
| Memory per edge | ~100 bytes | SQLite row with weight and timestamps |
| Startup (100 nodes) | <1 second | Schema creation + node loading + edge initialization |
Reproducing Benchmarks
A benchmark script is included to verify these claims on your hardware:
python benchmarks/benchmark_performance.py
The script creates an isolated temp database, runs 200 iterations of each operation, and reports mean/median/P95/P99 latencies. Results are saved to benchmarks/latest_results.json with full system info for reproducibility.
Test conditions: Disk-only mode (no RAM disk), WAL journal mode, 20 enterprise nodes, single-threaded. RAM disk mode will produce faster read latencies.
Testing
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run with coverage
pytest --cov=hebbian_mind
Support
- Documentation: cipscorps.io/docs/hebbian-mind
- Email: support@cipscorps.io
- Issues: GitHub Issues
License
MIT License. See LICENSE for terms.
Memory that learns. Concepts that connect. The more you use it, the smarter it gets.
Made by CIPS Corp
Website | Store | GitHub | glass@cipscorps.io
Enterprise cognitive infrastructure for AI systems: PyTorch Memory, Soul Matrix, CMM, and the full CIPS Stack.
Copyright (c) 2025-2026 C.I.P.S. LLC