Adaptive Graph of Thoughts MCP Server

A scientific reasoning framework that leverages graph structures and the Model Context Protocol (MCP) to process complex scientific queries through an Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) approach.

🧠 Adaptive Graph of Thoughts

<div align="center">

                    ╔══════════════════════════════════════╗
                    ║                                      ║
                    ║   🧠 Adaptive Graph of Thoughts 🧠  ║
                    ║                                      ║
                    ║       Intelligent Scientific         ║
                    ║         Reasoning through            ║
                    ║         Graph-of-Thoughts            ║
                    ║                                      ║
                    ╚══════════════════════════════════════╝

Intelligent Scientific Reasoning through Graph-of-Thoughts


</div>

<div align="center"> <p><strong>🚀 Next-Generation AI Reasoning Framework for Scientific Research</strong></p> <p><em>Leveraging graph structures to transform how AI systems approach scientific reasoning</em></p> </div>

📚 Documentation

For comprehensive information on Adaptive Graph of Thoughts, including detailed installation instructions, usage guides, configuration options, API references, contribution guidelines, and the project roadmap, please visit our full documentation site:

➡️ Adaptive Graph of Thoughts Documentation Site (Note: This link will be active once the GitHub Pages site is deployed via the new workflow.)

🔍 Overview

Adaptive Graph of Thoughts leverages a Neo4j graph database to perform sophisticated scientific reasoning, with graph operations managed within its pipeline stages. It implements the Model Context Protocol (MCP) to integrate with AI applications like Claude Desktop, providing an Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) framework designed for complex research tasks.

Key highlights:

  • Process complex scientific queries using graph-based reasoning
  • Dynamic confidence scoring with multi-dimensional evaluations
  • Built with modern Python and FastAPI for high performance
  • Dockerized for easy deployment
  • Modular design for extensibility and customization
  • Integration with Claude Desktop via MCP protocol

📂 Project Structure

The project is organized as follows (see the documentation site for more details):

Adaptive Graph of Thoughts/
├── 📁 .github/                           # GitHub specific files (workflows)
├── 📁 config/                            # Configuration files (settings.yaml)
├── 📁 docs_src/                          # Source files for MkDocs documentation
├── 📁 src/                               # Source code
│   └── 📁 adaptive_graph_of_thoughts     # Main application package
├── 📁 tests/                             # Test suite
├── Dockerfile                            # Docker container definition
├── docker-compose.yml                    # Docker Compose for development
├── docker-compose.prod.yml               # Docker Compose for production
├── mkdocs.yml                            # MkDocs configuration
├── poetry.lock                           # Poetry dependency lock file
├── pyproject.toml                        # Python project configuration (Poetry)
├── pyrightconfig.json                    # Pyright type checker configuration
├── README.md                             # This file
└── setup_claude_connection.py            # Script for Claude Desktop connection setup (manual run)

🚀 Getting Started

Deployment Prerequisites

Before running Adaptive Graph of Thoughts (either locally, or via Docker without the provided docker-compose.prod.yml, which already bundles Neo4j), ensure you have:

  • A running Neo4j Instance: Adaptive Graph of Thoughts requires a connection to a Neo4j graph database.

    • APOC Library: Crucially, the Neo4j instance must have the APOC (Awesome Procedures On Cypher) library installed. Several Cypher queries within the application's reasoning stages utilize APOC procedures (e.g., apoc.create.addLabels, apoc.merge.node). Without APOC, the application will not function correctly. You can find installation instructions on the official APOC website.
    • Configuration: Ensure that your config/settings.yaml (or the corresponding environment variables) correctly points to your Neo4j instance URI, username, and password (a connection sketch follows this list).
    • Indexing: For optimal performance, ensure appropriate Neo4j indexes are created. See Neo4j Indexing Strategy for details.

    Note: The provided docker-compose.yml (for development) and docker-compose.prod.yml (for production) already include a Neo4j service with the APOC library pre-configured, satisfying this requirement when using Docker Compose.
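
For orientation, a minimal Neo4j connection block in config/settings.yaml might look like the following sketch. The key names shown (neo4j, uri, username, password) are assumptions based on typical layouts, not the project's authoritative schema; consult config/settings.example.yaml for the real field names.

    # Hypothetical sketch only -- verify key names against config/settings.example.yaml
    neo4j:
      uri: "bolt://localhost:7687"
      username: "neo4j"
      password: "your-password-here"

If you manage your own Neo4j instance, you can confirm APOC is available by running RETURN apoc.version(); in the Neo4j Browser.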

Prerequisites

  • Python 3.11+ (as specified in pyproject.toml; the Docker image uses a current release in the 3.11–3.13 range)
  • Poetry: For dependency management
  • Docker and Docker Compose: For containerized deployment

Installation and Setup (Local Development)

  1. Clone the repository:

    git clone https://github.com/SaptaDey/Adaptive-Graph-of-Thoughts.git
    cd Adaptive-Graph-of-Thoughts
    
  2. Install dependencies using Poetry:

    poetry install
    

    This creates a virtual environment and installs all necessary packages specified in pyproject.toml.

  3. Activate the virtual environment:

    poetry shell
    
  4. Configure the application:

    # Copy example configuration
    cp config/settings.example.yaml config/settings.yaml
    
    # Edit configuration as needed
    vim config/settings.yaml
    
  5. Set up environment variables (optional):

    # Create .env file for sensitive configuration
    echo "LOG_LEVEL=DEBUG" > .env
    echo "API_HOST=0.0.0.0" >> .env
    echo "API_PORT=8000" >> .env
    
  6. Run the development server:

    python src/adaptive_graph_of_thoughts/main.py
    

    Alternatively, for more control:

    uvicorn adaptive_graph_of_thoughts.main:app --reload --host 0.0.0.0 --port 8000
    

    The API will be available at http://localhost:8000.

Docker Deployment

The following Mermaid diagram outlines the Docker deployment architecture:

    graph TB
        subgraph "Development Environment"
            A[👨‍💻 Developer] --> B[🐳 Docker Compose]
        end

        subgraph "Container Orchestration"
            B --> C[📦 Adaptive Graph of Thoughts Container]
            B --> D[📊 Monitoring Container]
            B --> E[🗄️ Database Container]
        end

        subgraph "Adaptive Graph of Thoughts Application"
            C --> F[⚡ FastAPI Server]
            F --> G[🧠 ASR-GoT Engine]
            F --> H[🔌 MCP Protocol]
        end

        subgraph "External Integrations"
            H --> I[🤖 Claude Desktop]
            H --> J[🔗 Other AI Clients]
        end

        style A fill:#e1f5fe
        style B fill:#f3e5f5
        style C fill:#e8f5e8
        style F fill:#fff3e0
        style G fill:#ffebee
        style H fill:#f1f8e9
  1. Quick Start with Docker Compose:

    # Build and run all services
    docker-compose up --build
    
    # For detached mode (background)
    docker-compose up --build -d
    
    # View logs
    docker-compose logs -f adaptive-graph-of-thoughts
    
  2. Individual Docker Container:

    # Build the image
    docker build -t adaptive-graph-of-thoughts:latest .
    
    # Run the container
    docker run -p 8000:8000 -v $(pwd)/config:/app/config adaptive-graph-of-thoughts:latest
    
  3. Production Deployment:

    # Use production compose file
    docker-compose -f docker-compose.prod.yml up --build -d
    

Notes on Specific Deployment Platforms

  • Smithery.ai: Deployment to the Smithery.ai platform typically involves using the provided Docker image directly.
    • Consult Smithery.ai's specific documentation for instructions on deploying custom Docker images.
    • Port Configuration: Ensure that the platform is configured to expose port 8000 (or the port configured via APP_PORT if overridden) for the Adaptive Graph of Thoughts container, as this is the default port used by the FastAPI application.
    • Health Checks: Smithery.ai may use health checks to monitor container status. The Adaptive Graph of Thoughts Docker image includes a HEALTHCHECK instruction that verifies the /health endpoint (e.g., http://localhost:8000/health). Ensure Smithery.ai is configured to use this endpoint if it requires a specific health check path.
    • The provided Dockerfile and docker-compose.prod.yml serve as a baseline for understanding the container setup. Adapt as per Smithery.ai's requirements.
  4. Access the Services:
    • API Documentation: http://localhost:8000/docs
    • Health Check: http://localhost:8000/health
    • MCP Endpoint: http://localhost:8000/mcp
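
Once the services are up, a quick smoke test from the host (assuming the default port mapping of 8000) confirms the container is serving requests:

    # Check service health; expect a JSON body like {"status": "healthy", ...}
    curl http://localhost:8000/health

    # Confirm the interactive API docs respond
    curl -I http://localhost:8000/docs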

🔌 API Endpoints

The primary API endpoints exposed by Adaptive Graph of Thoughts are:

  • MCP Protocol Endpoint: POST /mcp

    • This endpoint is used for communication with MCP clients such as Claude Desktop (a minimal client sketch follows this endpoint list).
    • Example Request for the asr_got.query method:
      {
        "jsonrpc": "2.0",
        "method": "asr_got.query",
        "params": {
          "query": "Analyze the relationship between microbiome diversity and cancer progression.",
          "parameters": {
            "include_reasoning_trace": true,
            "include_graph_state": false
          }
        },
        "id": "123"
      }
      
    • Other supported MCP methods include initialize and shutdown.
  • Health Check Endpoint: GET /health

    • Provides a simple health status of the application.
    • Example Response:
      {
        "status": "healthy",
        "version": "0.1.0" 
      }
      
      (Note: The timestamp field shown previously is not part of the current health check response.)
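
For illustration, a minimal Python client for the POST /mcp endpoint might look like the sketch below. It assumes a local server on port 8000 and mirrors the asr_got.query request shown above; the requests library is an assumption for the example, not a project dependency.

    import requests

    MCP_URL = "http://localhost:8000/mcp"  # assumes the default local deployment

    # JSON-RPC 2.0 request mirroring the asr_got.query example above
    payload = {
        "jsonrpc": "2.0",
        "method": "asr_got.query",
        "params": {
            "query": "Analyze the relationship between microbiome diversity and cancer progression.",
            "parameters": {
                "include_reasoning_trace": True,
                "include_graph_state": False,
            },
        },
        "id": "123",
    }

    response = requests.post(MCP_URL, json=payload, timeout=300)
    response.raise_for_status()
    result = response.json()

    # A JSON-RPC response carries either a "result" or an "error" member
    print(result.get("result") or result.get("error"))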

The advanced API endpoints previously listed (e.g., /api/v1/graph/query) are not implemented in the current version and are reserved for potential future development.

Session Handling (session_id)

Currently, the session_id parameter available in API requests (e.g., for asr_got.query) and present in responses serves primarily to identify and track a single, complete query-response cycle. It is also used for correlating progress notifications (like got/queryProgress) with the originating query.

While the system generates and utilizes session_ids, Adaptive Graph of Thoughts does not currently support true multi-turn conversational continuity where the detailed graph state or reasoning context from a previous query is automatically loaded and reused for a follow-up query using the same session_id. Each query is processed independently at this time.

Future Enhancement: Persistent Sessions

A potential future enhancement for Adaptive Graph of Thoughts is the implementation of persistent sessions. This would enable more interactive and evolving reasoning processes by allowing users to:

  1. Persist State: Store the generated graph state and relevant reasoning context from a query, associated with its session_id, likely within the Neo4j database.
  2. Reload State: When a new query is submitted with an existing session_id, the system could reload this saved state as the starting point for further processing.
  3. Refine and Extend: Allow the new query to interact with the loaded graph—for example, by refining previous hypotheses, adding new evidence to existing structures, or exploring alternative reasoning paths based on the established context.

Implementing persistent sessions would involve developing robust strategies for:

  • Efficiently storing and retrieving session-specific graph data in Neo4j (one possible approach is sketched after this list).
  • Managing the lifecycle (e.g., creation, update, expiration) of session data.
  • Designing sophisticated logic for how new queries merge with, modify, or extend pre-existing session contexts and graphs.
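
Purely as an illustration of the storage idea, a reload helper built on the official neo4j Python driver could look like the sketch below, assuming session-produced nodes were tagged with a session_id property. Nothing like this exists in the current codebase; it is one possible design, not the implementation.

    from neo4j import GraphDatabase

    def load_session_subgraph(uri: str, auth: tuple[str, str], session_id: str):
        """Fetch all nodes and relationships previously tagged with session_id."""
        driver = GraphDatabase.driver(uri, auth=auth)
        query = (
            "MATCH (n {session_id: $session_id}) "
            "OPTIONAL MATCH (n)-[r]->(m {session_id: $session_id}) "
            "RETURN n, r, m"
        )
        with driver.session() as session:
            records = list(session.run(query, session_id=session_id))
        driver.close()
        return records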

This is a significant feature that could greatly enhance the interactive capabilities of Adaptive Graph of Thoughts. Contributions from the community in designing and implementing persistent session functionality are welcome.

Future Enhancement: Asynchronous and Parallel Stage Execution

Currently, the 8 stages of the Adaptive Graph of Thoughts reasoning pipeline are executed sequentially. For complex queries or to further optimize performance, exploring asynchronous or parallel execution for certain parts of the pipeline is a potential future enhancement.

Potential Areas for Parallelism:

  • Hypothesis Generation: The HypothesisStage generates hypotheses for each dimension identified by the DecompositionStage. Generating hypotheses for different, independent dimensions could potentially be parallelized: if three dimensions are decomposed, three parallel tasks could each generate hypotheses for one dimension (see the asyncio sketch after this list).
  • Evidence Integration (Partial): Within the EvidenceStage, if multiple hypotheses are selected for evaluation, the "plan execution" phase (simulated evidence gathering) for these different hypotheses might be performed concurrently.
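
As a rough illustration of the hypothesis-generation case, independent dimensions could be fanned out with asyncio. Every name here (generate_hypotheses_for_dimension and the simulated work inside it) is hypothetical, not the project's actual API:

    import asyncio

    async def generate_hypotheses_for_dimension(dimension: str) -> list[str]:
        # In the real pipeline this would hit Neo4j and create hypothesis
        # nodes; here we only simulate some per-dimension work.
        await asyncio.sleep(0.1)
        return [f"hypothesis for {dimension}"]

    async def generate_all(dimensions: list[str]) -> dict[str, list[str]]:
        # Fan out one task per independent dimension and await them together.
        results = await asyncio.gather(
            *(generate_hypotheses_for_dimension(d) for d in dimensions)
        )
        return dict(zip(dimensions, results))

    if __name__ == "__main__":
        dims = ["mechanism", "epidemiology", "confounders"]
        print(asyncio.run(generate_all(dims)))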

Challenges and Considerations:

Implementing parallel stage execution would introduce complexities that need careful management:

  • Data Consistency: Concurrent operations, especially writes to the Neo4j database (e.g., creating multiple hypothesis nodes or evidence nodes simultaneously), must be handled carefully to ensure data integrity and avoid race conditions. Unique ID generation schemes would need to be robust for parallel execution.
  • Transaction Management: Neo4j transactions for concurrent writes would need to be managed appropriately.
  • Dependency Management: Ensuring that stages (or parts of stages) that truly depend on the output of others are correctly sequenced would be critical.
  • Resource Utilization: Parallel execution could increase resource demands (CPU, memory, database connections).
  • Complexity: The overall control flow of the GoTProcessor would become more complex.

While the current sequential execution ensures a clear and manageable data flow, targeted parallelism in areas like hypothesis generation for independent dimensions could offer performance benefits for future versions of Adaptive Graph of Thoughts. This remains an open area for research and development.

🧪 Testing & Quality Assurance

<div align="center"> <table> <tr> <td align="center">🧪<br><b>Testing</b></td> <td align="center">🔍<br><b>Type Checking</b></td> <td align="center">✨<br><b>Linting</b></td> <td align="center">📊<br><b>Coverage</b></td> </tr> <tr> <td align="center"> <pre>poetry run pytest</pre> <pre>make test</pre> </td> <td align="center"> <pre>poetry run mypy src/</pre> <pre>pyright src/</pre> </td> <td align="center"> <pre>poetry run ruff check .</pre> <pre>poetry run ruff format .</pre> </td> <td align="center"> <pre>poetry run pytest --cov=src</pre> <pre>coverage html</pre> </td> </tr> </table> </div>

Development Commands

# Run full test suite with coverage using Poetry
poetry run pytest --cov=src --cov-report=html --cov-report=term

# Or using Makefile for the default test run
make test

# Run specific test categories (using poetry)
poetry run pytest tests/unit/stages/          # Stage-specific tests
poetry run pytest tests/integration/         # Integration tests
poetry run pytest -k "test_confidence"       # Tests matching pattern

# Type checking and linting (can also be run via Makefile targets: make lint, make check-types)
poetry run mypy src/ --strict                # Strict type checking
poetry run ruff check . --fix                # Auto-fix linting issues
poetry run ruff format .                     # Format code

# Pre-commit hooks (recommended)
poetry run pre-commit install                # Install hooks
poetry run pre-commit run --all-files       # Run all hooks

# See Makefile for other useful targets like 'make all-checks'.

🗺️ Roadmap and Future Directions

We have an exciting vision for the future of Adaptive Graph of Thoughts! Our roadmap includes plans for enhanced graph visualization, integration with more data sources like Arxiv, and further refinements to the core reasoning engine.

For more details on our planned features and long-term goals, please see our Roadmap (also available on the documentation site).

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines (also available on the documentation site) for details on how to get started, our branching strategy, code style, and more.

📄 License

This project is licensed under the Apache License 2.0; see the LICENSE file for details.

🙏 Acknowledgments

  • NetworkX community for graph analysis capabilities
  • FastAPI team for the excellent web framework
  • Pydantic for robust data validation
  • The scientific research community for inspiration and feedback

<div align="center"> <p><strong>Built with ❤️ for the scientific research community</strong></p> <p><em>Adaptive Graph of Thoughts - Advancing scientific reasoning through intelligent graph structures</em></p> </div>
