
Houtini LM - LM Studio MCP Server with Expert Prompt Library and Custom Prompting

Your unlimited AI companion: This MCP server connects Claude to LM Studio for code analysis, generation, and creativity

Transform your development workflow with our expert-level prompt library for code analysis, professional documentation generation, and creative project scaffolding - all running locally without API costs. For developers, vibe coders and creators alike.

What This Does

Houtini LM saves your Claude context window by offloading detailed analysis tasks to LM Studio locally or on your company network whilst Claude focuses on strategy and complex problem-solving. Think of it as your intelligent coding assistant that never runs out of tokens.

Perfect for:

  • 🔍 Code analysis - Deep insights into quality, security, and architecture
  • 📝 Documentation generation - Professional docs from code analysis
  • 🏗️ Project scaffolding - Complete applications, themes, and components
  • 🎮 Creative projects - Games, CSS art, and interactive experiences
  • 🛡️ Security audits - OWASP compliance and vulnerability detection

Quick Start Prompt Guide

Once installed, simply use natural language prompts with Claude:

Use houtini-lm to analyse the code quality in C:/my-project/src/UserAuth.js
Generate comprehensive unit tests using houtini-lm for my React component at C:/components/Dashboard.jsx
Use houtini-lm to create a WordPress plugin called "Event Manager" with custom post types and admin interface
Audit the security of my WordPress theme using houtini-lm at C:/themes/my-theme
Create a CSS art generator project using houtini-lm with space theme and neon colours
Use houtini-lm to convert my JavaScript file to TypeScript with strict mode enabled
Generate responsive HTML components using houtini-lm for a pricing card with dark mode support

Prerequisites

Essential Requirements:

  1. LM Studio - Download from lmstudio.ai

    • Must be running at ws://127.0.0.1:1234
    • Model loaded and ready (13B+ parameters recommended)
  2. Desktop Commander MCP - Essential for file operations

  3. Node.js 24.6.0 or later - For MCP server functionality

  4. Claude Desktop - For the best experience

Installation

1. Install Dependencies

# Clone the repository
git clone https://github.com/houtini-ai/lm.git
cd lm

# Install Node.js dependencies
npm install

2. Configure Claude Desktop

Add to your Claude Desktop configuration file:

Windows: %APPDATA%/Claude/claude_desktop_config.json
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

{
  "mcpServers": {
    "houtini-lm": {
      "command": "node",
      "args": ["path/to/houtini-lm/index.js"],
      "env": {
        "LLM_MCP_ALLOWED_DIRS": "C:/your-projects,C:/dev,C:/websites"
      }
    }
  }
}

3. Start LM Studio

  1. Launch LM Studio
  2. Load a model (13B+ parameters recommended for best results)
  3. Start the server at ws://127.0.0.1:1234
  4. Verify the model is ready and responding

4. Verify Installation

Restart Claude Desktop, then test with:

Use houtini-lm health check to verify everything is working

Available Functions

🔍 Analysis Functions (17 functions)

  • analyze_single_file - Deep code analysis and quality assessment
  • count_files - Project structure with beautiful markdown trees
  • find_unused_files - Dead code detection with risk assessment
  • security_audit - OWASP compliance and vulnerability scanning
  • analyze_dependencies - Circular dependencies and unused imports
  • And 12 more specialized analysis tools...

🛠️ Generation Functions (10 functions)

  • generate_unit_tests - Comprehensive test suites with framework patterns
  • generate_documentation - Professional docs from code analysis
  • convert_to_typescript - JavaScript to TypeScript with type safety
  • generate_wordpress_plugin - Complete WordPress plugin creation
  • generate_responsive_component - Accessible HTML/CSS components
  • And 5 more generation tools...

🎮 Creative Functions (3 functions)

  • css_art_generator - Pure CSS art and animations
  • arcade_game - Complete playable HTML5 games
  • create_text_adventure - Interactive fiction with branching stories

⚙️ System Functions (5 functions)

  • health_check - Verify LM Studio connection
  • list_functions - Discover all available functions
  • resolve_path - Path analysis and suggestions
  • And 2 more system utilities...

Context Window Management

Houtini LM implements intelligent context window management to maximize the efficiency of your local LM models while ensuring reliable processing of large files and complex analysis tasks.

Dynamic Context Allocation

Adaptive Context Utilization: Unlike systems with hardcoded token limits, Houtini LM dynamically detects your model's context window and allocates 95% of available tokens for optimal performance:

// Context detection from your loaded model
const contextLength = await model.getContextLength(); // e.g., 16,384 tokens

// Dynamic allocation - 95% utilization
const responseTokens = Math.floor(contextLength * 0.95); // 15,564 tokens available

Benefits:

  • Maximum efficiency - No wasted context space
  • Model-agnostic - Works with any context size (4K, 16K, 32K+)
  • Future-proof - Automatically adapts to larger models

Three-Stage Prompt System

Houtini LM uses a sophisticated prompt architecture that separates concerns for optimal token management:

Stage 1: System Context - Expert persona and analysis methodology
Stage 2: Data Payload - Your code, files, or project content
Stage 3: Output Instructions - Structured response requirements

┌─────────────────────┐
│   System Context    │  ← Expert role, methodologies
├─────────────────────┤
│   Data Payload      │  ← Your files/code (chunked if needed)
├─────────────────────┤
│ Output Instructions │  ← Response format, requirements
└─────────────────────┘

Intelligent Processing:

  • Small files → Single-stage execution for speed
  • Large files → Automatic chunking with coherent aggregation
  • Multi-file projects → Optimized batch processing
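The three stages above can be sketched as a simple message assembler. This is an illustrative sketch only (the function name and shape are hypothetical, not Houtini LM's actual internals):

```javascript
// Hypothetical sketch of three-stage prompt assembly.
// buildPrompt and its parameter names are illustrative, not the server's API.
function buildPrompt({ systemContext, dataPayload, outputInstructions }) {
  return [
    // Stage 1: expert persona and analysis methodology
    { role: 'system', content: systemContext },
    // Stage 2: the user's code or file content
    { role: 'user', content: dataPayload },
    // Stage 3: structured response requirements
    { role: 'user', content: outputInstructions },
  ];
}

const messages = buildPrompt({
  systemContext: 'You are a senior security reviewer. Use OWASP categories.',
  dataPayload: 'function login(user, pass) { /* ... */ }',
  outputInstructions: 'Respond with findings as a markdown table.',
});
console.log(messages.length); // 3 stages
```

Keeping the stages separate means the data payload can be swapped or chunked without touching the persona or output-format text.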

Automatic Chunking Capability

When files exceed available context space, Houtini LM automatically chunks content while maintaining analysis quality:

Smart Chunking Features:

  • 🔍 Natural boundaries - Splits at logical sections, not arbitrary points
  • 🔄 Context preservation - Maintains analysis continuity across chunks
  • 📊 Intelligent aggregation - Combines chunk results into coherent reports
  • ⚡ Performance optimization - Parallel processing where possible

Example Chunking Process:

Large File (50KB) → Context Analysis → Exceeds Limit
    ↓
Split into 3 logical chunks → Process each chunk → Aggregate results
    ↓
Single comprehensive analysis report
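The "natural boundaries" idea can be sketched as splitting on blank lines and packing sections up to a size budget. This is a minimal illustration of the technique, not Houtini LM's actual chunking implementation:

```javascript
// Illustrative boundary-aware chunking: split at blank-line section breaks
// (never mid-section), then pack sections together until the budget is hit.
// chunkAtBoundaries is a hypothetical name, not part of the real server.
function chunkAtBoundaries(text, maxChars) {
  const sections = text.split(/\n\n+/);
  const chunks = [];
  let current = '';
  for (const section of sections) {
    // +2 accounts for the blank-line separator restored between sections
    if (current && current.length + section.length + 2 > maxChars) {
      chunks.push(current); // close the current chunk at a logical boundary
      current = section;
    } else {
      current = current ? current + '\n\n' + section : section;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk then gets its own analysis pass, and the per-chunk results are aggregated into a single report.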

Timeout Configuration

Houtini LM uses a 120-second (2-minute) timeout to accommodate thorough analysis on lower-powered systems:

Why Extended Timeouts:

  • 🔍 Complex analysis - Security audits, architecture analysis, and comprehensive code reviews take time
  • 💻 System compatibility - Works reliably on older hardware and resource-constrained environments
  • 🧠 Model processing - Larger local models (13B-33B parameters) require more inference time
  • 📊 Quality over speed - Comprehensive reports are worth the wait

Timeout Guidelines:

  • Simple analysis (100 lines): 15-30 seconds
  • Medium files (500 lines): 30-60 seconds
  • Large files (1000+ lines): 60-120 seconds
  • Multi-file projects: 90-180 seconds

Performance Tips:

  • Use faster models (13B vs 33B) for quicker responses
  • Enable GPU acceleration in LM Studio for better performance
  • Consider using analysisDepth="basic" for faster results when appropriate
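A timeout like the one described above can be sketched as a `Promise.race` wrapper. This is a generic illustration of the pattern, assuming nothing about Houtini LM's internals (`withTimeout` and the commented `model.respond` call are hypothetical):

```javascript
// Sketch of a 120-second timeout wrapper around a long-running model call.
// withTimeout is illustrative, not the server's actual API.
function withTimeout(promise, ms = 120_000) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  // Whichever settles first wins; the timer is always cleaned up.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (hypothetical call): fail cleanly instead of hanging forever.
// withTimeout(model.respond(prompt), 120_000).catch(err => console.error(err));
```

The `finally` cleanup matters: without it, a fast response would leave a two-minute timer keeping the process alive.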

Memory Efficiency

Intelligent Caching: Results are cached to prevent redundant processing
Resource Management: Automatic cleanup of large contexts after processing
Streaming Responses: Progressive output delivery for better user experience

This architecture ensures Houtini LM can handle everything from small utility functions to entire enterprise codebases while maintaining consistent quality and performance across different hardware configurations.

Documentation

Complete guides available:

Recommended Setup

For Professional Development:

  • CPU: 8-core or better (for local LLM processing)
  • RAM: 32GB (24GB for model, 8GB for development)
  • Storage: SSD with 100GB+ free space
  • Model: Qwen2.5-Coder-14B-Instruct or similar

Performance Tips:

  • Use 13B+ parameter models for professional-quality results
  • Configure LLM_MCP_ALLOWED_DIRS to include your project directories
  • Install Desktop Commander MCP for complete file operation support
  • Keep LM Studio running and model loaded for instant responses

Version History

Version 1.0.0 (Current)

  • ✅ Complete function library (35+ functions)
  • ✅ Professional documentation system
  • ✅ WordPress-specific tools and auditing
  • ✅ Creative project generators
  • ✅ Comprehensive security analysis
  • ✅ TypeScript conversion and test generation
  • ✅ Cross-file integration analysis

License

MIT License - Use this project freely for personal and commercial projects. See LICENSE for details.

Contributing

We welcome contributions! Please see our Contributing Guidelines for details on:

  • Code standards and patterns
  • Testing requirements
  • Documentation updates
  • Issue reporting

Support


Ready to supercharge your development workflow? Install Houtini LM and start building amazing things with unlimited local AI assistance.

Built for developers who think clearly but can't afford to think expensively.
