# LibreModel MCP Server 🤖

A Model Context Protocol (MCP) server that bridges Claude Desktop with your local LLM instance running via llama-server.

## Features

- 💬 **Full conversation support** with LibreModel through Claude Desktop
- 🎛️ **Complete parameter control** (temperature, max_tokens, top_p, top_k)
- ✅ **Health monitoring** and server status checks
- 🧪 **Built-in testing tools** for different capabilities
- 📊 **Performance metrics** and token usage tracking
- 🔧 **Easy configuration** via environment variables

## Quick Start

Install the published package from npm:

```bash
npm install @openconstruct/llama-mcp-server
```

Or build from source:
### 1. Install Dependencies
```bash
cd llama-mcp
npm install
```

### 2. Build the Server

```bash
npm run build
```

### 3. Start Your LibreModel

Make sure llama-server is running with your model:

```bash
./llama-server -m lm37.gguf -c 2048 --port 8080
```

### 4. Configure Claude Desktop

Add this to your Claude Desktop configuration (`~/.config/claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "libremodel": {
      "command": "node",
      "args": ["/home/jerr/llama-mcp/dist/index.js"]
    }
  }
}
```
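If your llama-server is not on the default URL, you can point the bridge at it from the same file; Claude Desktop's MCP entries accept an `env` block (the alternate port below is just an illustration):

```json
{
  "mcpServers": {
    "libremodel": {
      "command": "node",
      "args": ["/home/jerr/llama-mcp/dist/index.js"],
      "env": {
        "LLAMA_SERVER_URL": "http://localhost:8081"
      }
    }
  }
}
```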
### 5. Restart Claude Desktop

Claude will now have access to LibreModel through MCP!

## Usage

Once configured, you can use these tools in Claude Desktop:

### 💬 `chat` - Main conversation tool

Use the chat tool to ask LibreModel: "What is your name and what can you do?"

### 🧪 `quick_test` - Test LibreModel capabilities

Run a quick_test with type "creative" to see if LibreModel can write poetry.

### 🏥 `health_check` - Monitor server status

Use health_check to see if LibreModel is running properly.

## Configuration

Set environment variables to customize behavior:

```bash
export LLAMA_SERVER_URL="http://localhost:8080"  # Default llama-server URL
```
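You can sanity-check that llama-server is actually reachable at that URL before involving Claude Desktop; llama-server exposes a `/health` endpoint (the port here assumes the Quick Start command above):

```bash
# Returns a JSON status object once the model is loaded and ready
curl http://localhost:8080/health
```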
## Available Tools

| Tool | Description | Parameters |
|------|-------------|------------|
| `chat` | Converse with LibreModel | message, temperature, max_tokens, top_p, top_k, system_prompt |
| `quick_test` | Run predefined capability tests | test_type (hello/math/creative/knowledge) |
| `health_check` | Check server health and status | None |
## Resources

- **Configuration**: View current server settings
- **Instructions**: Detailed usage guide and setup instructions
## Development

```bash
# Install dependencies
npm install

# Development mode (auto-rebuild)
npm run dev

# Build for production
npm run build

# Start the server directly
npm start
```
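When starting the server directly, the environment variable from the Configuration section applies as usual; for example (inline assignment, assuming a POSIX shell):

```bash
# Point the MCP server at a llama-server on a non-default port
LLAMA_SERVER_URL="http://localhost:8081" npm start
```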
## Architecture
```
Claude Desktop ←→ LibreModel MCP Server ←→ llama-server API ←→ Local Model
```
The MCP server acts as a bridge, translating MCP protocol messages into llama-server API calls and formatting responses for Claude Desktop.
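To make "translating MCP protocol messages into llama-server API calls" concrete, here is a hand-written request against llama-server's `/completion` endpoint using the same sampling fields the `chat` tool exposes (the exact payload the bridge builds internally may differ):

```bash
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "What is your name and what can you do?",
    "n_predict": 256,
    "temperature": 0.7,
    "top_k": 40,
    "top_p": 0.95
  }'
```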
## Troubleshooting
**"Cannot reach LLama server"**
- Ensure llama-server is running on the configured port
- Check that the model is loaded and responding
- Verify firewall/network settings
**"Tool not found in Claude Desktop"**
- Restart Claude Desktop after configuration changes
- Check that the path to `index.js` is correct and absolute
- Verify the MCP server builds without errors
**Poor response quality**
- Adjust temperature and sampling parameters
- Try different system prompts
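For example, a lower temperature with a tighter top_p usually produces more focused, deterministic answers; a hypothetical set of `chat` tool arguments (names from the Available Tools table above):

```json
{
  "message": "Summarize the plot of Hamlet in two sentences.",
  "temperature": 0.3,
  "top_p": 0.9,
  "system_prompt": "You are a concise assistant."
}
```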
## License
CC0-1.0 - Public Domain. Use freely!
---
Built with ❤️ for open-source AI and the LibreModel project, by Claude Sonnet 4.