Memory Forensics MCP Server
AI-powered memory dump analysis using Volatility 3 for digital forensics investigations. Enables process analysis, malware detection, network forensics, timeline generation, and anomaly detection with support for Claude, Llama, and other LLMs.
Features
Core Forensics
- Process Analysis: List processes, detect hidden processes, analyze process trees (a cross-view detection sketch follows this list)
- Code Injection Detection: Identify malicious code injection using malfind
- Network Analysis: Correlate network connections with processes
- Command Line Analysis: Extract process command lines
- DLL Analysis: Examine loaded DLLs per process
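
The hidden-process detection above is the classic cross-view technique: enumerate processes from two independent sources and diff the results. A minimal sketch of the idea, with made-up PID sets standing in for real plugin output:

```python
# Cross-view detection: windows.psscan carves process objects out of pool
# memory, while windows.pslist walks the kernel's linked process list. A PID
# present in the scan but absent from the list is a candidate for DKOM-style
# hiding. The PID sets below are illustrative, not real plugin output.
pslist_pids = {4, 88, 412, 2048}           # from windows.pslist
psscan_pids = {4, 88, 412, 2048, 6666}     # from windows.psscan

hidden = psscan_pids - pslist_pids
print(hidden)  # {6666}
```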
Advanced Capabilities
- Command Provenance: Full audit trail of all Volatility commands executed
- File Integrity: MD5/SHA1/SHA256 hashing of memory dumps (a streaming-hash sketch follows this list)
- Timeline Analysis: Chronological event ordering for incident reconstruction
- Anomaly Detection: Automated detection of suspicious process behavior
- Multi-Format Export: JSON, CSV, and HTML report generation
- Process Extraction: Extract detailed process information for offline analysis
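
Dump hashing is best done in streaming fashion, since images run to many gigabytes. A sketch of the standard `hashlib` pattern (illustrative only; the server's `get_dump_metadata` tool reports the same three digests):

```python
from __future__ import annotations

import hashlib
from pathlib import Path

def hash_dump(path: Path, chunk_size: int = 1 << 20) -> dict[str, str]:
    """Stream-hash a dump so multi-GB files never load fully into memory."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with path.open("rb") as f:
        while block := f.read(chunk_size):   # 1 MiB chunks
            for d in digests.values():
                d.update(block)
    return {name: d.hexdigest() for name, d in digests.items()}
```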
Architecture
```
Memory Dump -> Volatility 3 -> SQLite Cache -> MCP Server -> LLM Client
                                                             (Claude Code/Local LLM)
```
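
For orientation, here is roughly what the right-hand side of that pipeline looks like in code. This is a minimal sketch using the official MCP Python SDK's FastMCP helper; the cache path and table schema are assumptions for illustration, not the project's actual `server.py`:

```python
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("memory-forensics")
DB = "data/artifacts.db"  # SQLite cache populated by a prior Volatility run

@mcp.tool()
def list_processes(dump_name: str) -> list:
    """Return processes previously extracted from a dump (hypothetical schema)."""
    con = sqlite3.connect(DB)
    rows = con.execute(
        "SELECT pid, ppid, name FROM processes WHERE dump = ?", (dump_name,)
    ).fetchall()
    con.close()
    return [{"pid": pid, "ppid": ppid, "name": name} for pid, ppid, name in rows]

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio to Claude Code or any other client
```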
LLM Compatibility
This MCP server works with any LLM: it is LLM-agnostic and communicates via the Model Context Protocol (MCP).
Supported LLMs
| LLM | Client | Best For |
|---|---|---|
| Claude (Opus/Sonnet) | Claude Code | Higher quality analysis |
| Llama (via Ollama) | Custom client (included) | Local/offline LLM setup, confidential investigations |
| GPT-4 | Custom client | OpenAI ecosystem users |
| Mistral, Phi, others | Custom client | Custom configs |
Quick Setup by LLM
Claude (Easiest):
- Official Claude Code client with native tool calling support
- Uses the `~/.claude/mcp.json` configuration file
- See the Quick Start section below for setup instructions
Llama / Ollama:
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.1:70b

# Start Ollama
ollama serve

# Run the included client
cd examples
pip install -r requirements.txt
python ollama_client.py
```
Custom LLM:
- See `examples/ollama_client.py` for a reference implementation
- Adapt it to your LLM's API
- Full guide: MULTI_LLM_GUIDE.md
LLM Profiles
Optimize tool descriptions for different LLM capabilities:
```bash
# For Llama 3.1 70B+
export MCP_LLM_PROFILE=llama70b

# For smaller models (8B-13B)
export MCP_LLM_PROFILE=llama13b

# For minimal models
export MCP_LLM_PROFILE=minimal
```
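
A profile only changes how the tools are described to the model, not what they do; smaller models cope better with terse descriptions. A hypothetical sketch of the mechanism (the profile names match the ones above; the description table is invented):

```python
import os

# Hypothetical: terser tool descriptions for smaller models.
DESCRIPTIONS = {
    "llama70b": "List all processes from the dump, including PID, PPID, and create time.",
    "llama13b": "List processes in the dump.",
    "minimal":  "List processes.",
}

profile = os.environ.get("MCP_LLM_PROFILE", "llama70b")
list_processes_doc = DESCRIPTIONS.get(profile, DESCRIPTIONS["llama70b"])
```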
See MULTI_LLM_GUIDE.md for comprehensive multi-LLM setup instructions.
Quick Start
Prerequisites
- Python 3.8+
- Volatility 3 installed and accessible
- Memory dumps (supported formats: .zip, .raw, .mem, .dmp, .vmem)
Installation
1. Clone or download this repository:

```bash
cd /path/to/your/projects
git clone <repository-url>
cd memory-forensics-mcp
```

2. Create a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

3. Install dependencies:

```bash
pip install -r requirements.txt
```

This installs all required dependencies, including Volatility 3 from PyPI.

4. Configure the memory dumps directory (edit `config.py`):

```python
# Set your memory dumps directory
DUMPS_DIR = Path("/path/to/your/memdumps")
```
Advanced: Using Custom Volatility 3 Installation
If you need to use a custom Volatility 3 build (e.g., bleeding edge from git):
```bash
# Set environment variable
export VOLATILITY_PATH=/path/to/custom/volatility3

# Or edit config.py directly.
# The system will automatically detect and use your custom installation.
```
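
A sketch of what that fallback order typically looks like (illustrative; `resolve_volatility_path` is a hypothetical helper, not necessarily how `config.py` is written):

```python
import os
from pathlib import Path

def resolve_volatility_path():
    """Prefer VOLATILITY_PATH if set and valid, else the pip-installed package."""
    env = os.environ.get("VOLATILITY_PATH")
    if env and Path(env).exists():
        return Path(env)                      # custom git checkout wins
    try:
        import volatility3                    # fall back to the PyPI install
        return Path(volatility3.__file__).parent
    except ImportError:
        return None                           # neither available
```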
Configure for Claude Code
Add to `~/.claude/mcp.json`:

```json
{
  "mcpServers": {
    "memory-forensics": {
      "command": "/absolute/path/to/memory-forensics-mcp/venv/bin/python",
      "args": ["/absolute/path/to/memory-forensics-mcp/server.py"]
    }
  }
}
```

Replace `/absolute/path/to/memory-forensics-mcp` with your actual installation path.
Basic Usage with Claude Code
```bash
# Start Claude Code
claude

# Example commands:
"List available memory dumps"
"Process the Win11Dump memory dump"
"Get metadata and hashes for Win11Dump"
"Detect anomalies in Win11Dump"
"Generate a timeline for Win11Dump"
"Export data to JSON format"
```
Basic Usage with Ollama
```bash
# In one terminal: start Ollama
ollama serve

# In another terminal: run the MCP client
cd examples
export MCP_LLM_PROFILE=llama70b
python ollama_client.py
```
Available Tools
Core Analysis (8 tools)
| Tool | Description |
|---|---|
| `list_dumps` | List available memory dumps |
| `process_dump` | Process a dump with Volatility 3 |
| `list_processes` | List all processes |
| `analyze_process` | Deep dive into a specific process |
| `detect_code_injection` | Find injected code |
| `network_analysis` | Analyze network connections |
| `detect_hidden_processes` | Find rootkit-hidden processes |
| `get_process_tree` | Show parent-child relationships |
Advanced Features (6 tools)
| Tool | Description |
|---|---|
| `get_dump_metadata` | Get file hashes, OS info, and statistics |
| `export_data` | Export to JSON, CSV, or HTML formats |
| `get_command_history` | View full command provenance/audit trail |
| `generate_timeline` | Create chronological event timeline |
| `detect_anomalies` | Find suspicious process behavior |
| `extract_process` | Extract detailed process info to file |
Workflow
Standard Investigation
- List dumps: See what memory dumps are available
- Process dump: Extract artifacts using Volatility 3 (this takes time!)
- Get metadata: View file hashes and dump statistics
- Detect anomalies: Automated suspicious behavior detection
- Generate timeline: Understand the sequence of events
- Export results: Save findings in JSON/CSV/HTML format
Example Investigation
User: "Analyze the Win11Dump memory dump for signs of compromise"
AI: "Let me process that dump first..."
> Calls process_dump("Win11Dump")
> Processes 103 processes, 101 network connections
AI: "Let me check for anomalies..."
> Calls detect_anomalies("Win11Dump")
AI: "Found critical anomalies:
- cmd.exe spawned by winword.exe (possible macro exploit)
- Process 'svch0st.exe' similar to 'svchost.exe' (typosquatting)
Let me check the timeline..."
> Calls generate_timeline("Win11Dump")
AI: "Timeline shows suspicious sequence:
1. winword.exe opens document
2. cmd.exe spawns 30 seconds later
3. powershell.exe downloads from external IP
Let me analyze that PowerShell process..."
> Calls analyze_process("Win11Dump", 2048)
AI: "This PowerShell process has:
- Unbacked executable memory (code injection detected)
- Network connection to suspicious IP 192.0.2.100
- Base64 encoded command line
This indicates a fileless malware infection via macro exploit.
Exporting full report..."
> Calls export_data("Win11Dump", format="html")
AI: "Complete analysis exported to Win11Dump_report.html"
Data Storage
- Dumps: configured via `DUMPS_DIR` in `config.py` (default: `<project-dir>/dumps/`)
- Cache: `<install-dir>/data/artifacts.db` (SQLite database)
- Exports: `<install-dir>/data/exports/` (JSON, CSV, HTML reports)
- Extracted files: `<install-dir>/data/extracted/` (extracted process data)
- Temp extractions: `/tmp/memdump_*` (auto-cleaned)
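
Since the cache is plain SQLite, it can be inspected outside the server, for example to sanity-check what a `process_dump` run stored. This snippet just lists table names rather than assuming a particular schema:

```python
import sqlite3

con = sqlite3.connect("data/artifacts.db")   # cache path from the list above
for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
con.close()
```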
Using with Local LLMs
The MCP server works with any LLM via the Model Context Protocol. For local analysis:
Quick Start with Ollama
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull Llama model
ollama pull llama3.1:70b

# Start Ollama server
ollama serve

# In another terminal, run the included client
cd /path/to/memory-forensics-mcp/examples
pip install -r requirements.txt
python ollama_client.py
```
Customization
- Example client: see `examples/ollama_client.py` for a complete reference implementation
- LLM profiles: use the `MCP_LLM_PROFILE` environment variable to optimize for different model sizes
- Full guide: see MULTI_LLM_GUIDE.md for comprehensive setup instructions for Llama, GPT-4, and other LLMs
Benefits of local LLMs:
- Complete privacy - no data sent to cloud services
- Free to use after initial setup (no API costs)
- Suitable for confidential investigations and offline environments
Performance Notes
- Initial processing of a dump (2-3 GB) takes 5-15 minutes
- Results are cached in SQLite for instant subsequent queries
- Consider processing dumps offline, then analyzing them interactively
Troubleshooting
"Volatility import error"
- Ensure volatility3 is installed: `pip install -r requirements.txt`
- For custom installations, check the `VOLATILITY_PATH` environment variable or `config.py`
- Verify the import works: `python -c "import volatility3; print('OK')"`
"No dumps found"
- Check `DUMPS_DIR` in `config.py`
- Supported formats: .zip, .raw, .mem, .dmp, .vmem
"Processing very slow"
- Normal for large dumps
- Consider running `process_dump` once; after that, all queries are fast
- Use smaller test dumps for development
License
This is a research/educational tool. Ensure you have authorization before analyzing any memory dumps.