OpenEnded Philosophy MCP Server with NARS Integration
A sophisticated philosophical reasoning system that combines OpenEnded Philosophy with the Non-Axiomatic Reasoning System (NARS) for enhanced epistemic analysis, truth maintenance, and multi-perspective synthesis.
Core Integration: Philosophy + NARS
This server uniquely integrates:
- NARS/ONA: Non-axiomatic reasoning with truth maintenance and belief revision
- Philosophical Pluralism: Multi-perspective analysis without privileging any single view
- Epistemic Humility: Built-in uncertainty quantification and revision conditions
- Coherence Dynamics: Emergent conceptual landscapes with stability analysis
Theoretical Foundation
Core Philosophical Architecture:
- Epistemic Humility: Every insight carries inherent uncertainty metrics
- Contextual Semantics: Meaning emerges through language games and forms of life
- Dynamic Pluralism: Multiple interpretive schemas coexist without hierarchical privileging
- Pragmatic Orientation: Efficacy measured through problem-solving capability
Computational Framework
1. Emergent Coherence Dynamics
C(t) = Σ_{regions} (R_i(t) × Stability_i) + Perturbation_Response(t)
Where:
- C(t): Coherence landscape at time t
- R_i(t): Regional coherence patterns
- Stability_i: Local stability coefficients
- Perturbation_Response(t): Adaptive response to new experiences
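Read as pseudocode, the landscape is a stability-weighted sum over regions plus an adaptive term. A minimal Python sketch (the region values and coefficients are hypothetical examples, not the server's internals):

```python
# Sketch of C(t) = sum_i R_i(t) * Stability_i + Perturbation_Response(t).
# Region coherence values and stability coefficients here are hypothetical.

def coherence_landscape(regions, perturbation_response=0.0):
    """regions: iterable of (regional_coherence, stability_coefficient) pairs."""
    return sum(r * s for r, s in regions) + perturbation_response

# Two fairly stable regions and one unstable region, plus a small
# adaptive response to new experience:
c_t = coherence_landscape([(0.8, 0.9), (0.6, 0.7), (0.4, 0.2)],
                          perturbation_response=0.05)
print(round(c_t, 2))  # -> 1.27
```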
2. Fallibilistic Inference Engine
P(insight|evidence) = Confidence × (1 - Uncertainty_Propagation)
Key Components:
- Evidence limitation assessment
- Context dependence calculation
- Unknown unknown estimation
- Revision trigger identification
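A toy rendering of the formula above, assuming (purely for illustration) that the listed components sum into the uncertainty-propagation term:

```python
# Toy rendering of P(insight|evidence) = Confidence * (1 - Uncertainty_Propagation).
# The equal-weight sum over the listed components is an illustrative
# assumption, not the server's actual propagation scheme.

def insight_probability(confidence, evidence_limitation,
                        context_dependence, unknown_unknowns):
    uncertainty = min(1.0, evidence_limitation + context_dependence + unknown_unknowns)
    return confidence * (1.0 - uncertainty)

print(round(insight_probability(0.9, 0.1, 0.15, 0.05), 3))  # -> 0.63
```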
Quick Start
Add the server to your MCP client configuration:
```json
{
  "mcpServers": {
    "openended-philosophy": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/openended-philosophy-mcp",
        "run",
        "openended-philosophy-server"
      ],
      "env": {
        "PYTHONPATH": "/path/to/openended-philosophy-mcp",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}
```
System Architecture
┌─────────────────────────────────────────┐
│ OpenEnded Philosophy Server │
├─────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────────┐ │
│ │ Coherence │ │ Language │ │
│ │ Landscape │ │ Games │ │
│ └──────┬──────┘ └────────┬────────┘ │
│ │ │ │
│ ┌──────▼──────────────────▼────────┐ │
│ │ Dynamic Pluralism Framework │ │
│ └──────────────┬───────────────────┘ │
│ │ │
│ ┌──────────────▼───────────────────┐ │
│ │ Fallibilistic Inference Core │ │
│ └──────────────────────────────────┘ │
└─────────────────────────────────────────┘
NARS Integration Features
Non-Axiomatic Logic (NAL)
- Truth Values: (frequency, confidence) pairs for nuanced belief representation
- Evidence-Based Reasoning: Beliefs strengthen with converging evidence
- Temporal Reasoning: Handle time-dependent truths and belief projection
- Inference Rules: Deduction, induction, abduction, analogy, and revision
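For concreteness, two of these rules can be written with the standard NAL truth functions (evidential horizon k = 1). These follow the published NAL definitions rather than code from this repository:

```python
# Standard NAL truth functions (evidential horizon k = 1).

def revision(f1, c1, f2, c2):
    """Merge two beliefs about the same statement by pooling evidence."""
    w1, w2 = c1 / (1 - c1), c2 / (1 - c2)   # confidence -> evidence weight
    w = w1 + w2
    return (w1 * f1 + w2 * f2) / w, w / (w + 1)

def deduction(f1, c1, f2, c2):
    """<A --> B>, <B --> C> |- <A --> C>."""
    f = f1 * f2
    return f, f * c1 * c2

# Converging evidence strengthens the belief: two observations at
# confidence 0.5 revise into one belief with higher confidence.
f, c = revision(0.8, 0.5, 0.9, 0.5)
print(round(f, 2), round(c, 2))  # -> 0.85 0.67
```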
Enhanced Capabilities
- Truth Maintenance: Automatic belief revision when contradictions arise
- Memory System: Semantic embeddings + NARS attention buffer
- Reasoning Patterns: Multiple inference types for comprehensive analysis
- Uncertainty Tracking: Epistemic uncertainty propagation through inference chains
Installation
1. Clone the repository:
git clone https://github.com/angrysky56/openended-philosophy-mcp
cd openended-philosophy-mcp
2. Install dependencies with uv:
uv sync
This will install all required Python packages, including ona (OpenNARS for Applications).
### For Direct Usage (without MCP client)
If you want to run the philosophy server directly using uv:
#### Prerequisites
1. Install uv if you haven't already:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. Restart your shell or run:
```bash
source $HOME/.cargo/env
```
Installation
1. Clone this repository:
git clone https://github.com/angrysky56/openended-philosophy-mcp
cd openended-philosophy-mcp
2. Install dependencies with uv:
uv sync
Running the Server
1. Activate the virtual environment:
source .venv/bin/activate
2. Run the MCP server:
python -m openended_philosophy.server
The server will start and listen for MCP protocol messages on stdin/stdout. You can interact with it programmatically or integrate it with other MCP-compatible tools.
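Driving the server by hand over stdio means writing newline-delimited JSON-RPC 2.0 messages; an MCP client library normally handles this framing for you. A sketch of building one such message (abbreviated and illustrative: a real session begins with an `initialize` handshake before any `tools/call` request):

```python
# Build a JSON-RPC 2.0 request line for the stdio transport.
# Illustrative only; a real MCP session performs the initialize
# handshake first, which this sketch omits.
import json

def jsonrpc(req_id, method, params=None):
    """Serialize one JSON-RPC 2.0 request as a single line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

call = jsonrpc(2, "tools/call", {
    "name": "analyze_concept",
    "arguments": {"concept": "consciousness", "context": "neuroscience"},
})
print(call)  # write this line (plus "\n") to the server's stdin
```

To send it, spawn `python -m openended_philosophy.server` with `subprocess.Popen(..., stdin=PIPE, stdout=PIPE, text=True)`, write `call + "\n"`, flush, and read one line of response.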
Available Tools
- ask_philosophical_question: Ask deep philosophical questions and receive thoughtful responses
- explore_philosophical_topic: Explore philosophical topics in depth with guided discussion
Usage via MCP
Available Tools
1. analyze_concept
Analyzes a concept through multiple interpretive lenses without claiming ontological priority.
{
"concept": "consciousness",
"context": "neuroscience",
"confidence_threshold": 0.7
}
2. explore_coherence
Maps provisional coherence patterns in conceptual space.
{
"domain": "ethics",
"depth": 3,
"allow_revision": true
}
3. contextualize_meaning
Derives contextual semantics through language game analysis.
{
"expression": "truth",
"language_game": "scientific_discourse",
"form_of_life": "research_community"
}
4. generate_insights
Produces fallibilistic insights with built-in uncertainty quantification.
{
"phenomenon": "quantum_consciousness",
"perspectives": ["physics", "philosophy_of_mind", "information_theory"],
"openness_coefficient": 0.9
}
Philosophical Methodology
Wittgensteinian Therapeutic Approach
- Dissolve Rather Than Solve: Recognizes category mistakes
- Language Game Awareness: Context-dependent semantics
- Family Resemblance: Non-essentialist categorization
Pragmatist Orientation
- Instrumental Truth: Measured by problem-solving efficacy
- Fallibilism: All knowledge provisional
- Pluralism: Multiple valid perspectives
Information-Theoretic Substrate
- Pattern Recognition: Without ontological commitment
- Emergence: Novel properties from interactions
- Complexity: Irreducible to simple principles
Development Philosophy
This server embodies its own philosophical commitments:
- Open Source: Knowledge emerges through community
- Iterative Development: Understanding grows through use
- Bug-as-Feature: Errors provide learning opportunities
- Fork-Friendly: Multiple development paths encouraged
NARS Configuration & Setup
Installation Support
The server now exclusively supports pip-installed ONA (OpenNARS for Applications).
uv add ona
Configuration via Environment Variables
Create a .env file from the template:
cp .env.example .env
Key configuration options:
- NARS_MEMORY_SIZE: Concept memory size (default: 1000)
- NARS_INFERENCE_STEPS: Inference depth (default: 50)
- NARS_SILENT_MODE: Suppress ONA output (default: true)
- NARS_DECISION_THRESHOLD: Decision confidence threshold (default: 0.6)
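One way to consume these settings from Python with the standard library (a sketch: the defaults mirror the list above, but the server's own config loading may differ):

```python
# Read the documented NARS settings from the environment, falling back
# to the documented defaults. Illustrative; not the server's actual code.
import os

NARS_MEMORY_SIZE = int(os.getenv("NARS_MEMORY_SIZE", "1000"))
NARS_INFERENCE_STEPS = int(os.getenv("NARS_INFERENCE_STEPS", "50"))
NARS_SILENT_MODE = os.getenv("NARS_SILENT_MODE", "true").lower() == "true"
NARS_DECISION_THRESHOLD = float(os.getenv("NARS_DECISION_THRESHOLD", "0.6"))
```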
Testing NARS Integration
Verify your installation:
uv run python tests/test_nars_integration.py
Process Management
The improved NARS manager includes:
- Robust cleanup patterns preventing process leaks
- Signal handling for graceful shutdown (SIGTERM, SIGINT)
- Automatic recovery from subprocess failures
- Cross-platform support (Linux, macOS, Windows)
Troubleshooting
See docs/NARS_INSTALLATION.md for detailed troubleshooting guide.
Contributing
We welcome contributions that:
- Enhance epistemic humility features
- Add new interpretive schemas
- Improve contextual understanding
- Challenge existing assumptions
- Strengthen NARS integration capabilities
License
MIT License - In the spirit of open-ended inquiry
Project Status: Towards Functional AI Philosophical Reasoning
This project is a highly promising research platform or advanced prototype for AI philosophical reasoning. It lays a strong architectural and conceptual groundwork for computational philosophy.
Strengths & Progress:
- Robust NARS Integration: The
NARSManagerprovides a reliable interface to the NARS engine, crucial for non-axiomatic reasoning. - Modular Design: Clear separation of concerns (e.g.,
core,nars,llm_semantic_processor) enhances maintainability and extensibility. - Conceptual Graphing (
networkx): Effective use ofnetworkxfor representing coherence landscapes and conceptual relationships provides structured, machine-readable data for AI processing. - Philosophically Informed Prompts: The
LLMSemanticProcessordemonstrates a good understanding of philosophical concepts in its prompt crafting.
Current Limitations & Path to "Functional and Useful":
To evolve into a truly "functional and useful tool" for independent, deep, and novel philosophical reasoning, the following areas require significant development:
- Depth of NLP: Current semantic similarity metrics are simplistic. Achieving nuanced philosophical reasoning demands more advanced NLP (e.g., contextual embeddings, fine-tuned models) to understand subtle semantic differences.
- Transparency of Synthesis: While FallibilisticInference performs complex synthesis, the AI needs to understand how insights are synthesized to truly reason philosophically, rather than just receiving a result. This implies making the synthesis process more transparent and controllable by the AI.
- Explicit AI Interaction with Graphs: The AI needs explicit tools or APIs to actively query, manipulate, and reason over the networkx graphs, moving beyond treating them as mere data structures.
- Emergent Philosophical Insight: The ultimate goal is for the AI to generate novel philosophical insights or arguments that extend beyond its programmed rules or NARS's current capabilities. This is the most challenging aspect and represents the frontier of this project.