Superglue MCP
A service that enables agents to build deterministic workflows across apps, databases and APIs using natural language, handling schema mapping, drift detection, and execution of integration pipelines automatically.
README
<p align="center"> <img src="https://github.com/user-attachments/assets/be0e65d4-dcd8-4133-9841-b08799e087e7" width="350" alt="superglue_logo_white"> </p>
<h2 align="center">Integrate & Orchestrate APIs with natural language.</h2> <div align="center">
</div> <h3 align="center"> Now live: let agents build deterministic workflows across apps, databases and APIs using the superglue MCP<br> Let's glue.<br>
Read the docs 🍯🍯🍯</h3>
what is superglue?
superglue orchestrates APIs from natural language. Tell it what you want to do in your CRM, ERP and other tools, and superglue builds and runs the integration pipelines automatically. It comes with automated schema mapping, drift detection, retries and remappings, so your API workflows keep running no matter what. superglue makes agents reliable in prod by letting them build deterministic workflows across any SaaS app, API and data source. Use the superglue MCP instead of hard-coding tools, and let your agent use APIs the way it wants to, not the way they were written.
- Lightweight proxy: point it at any REST / GraphQL / SQL / Postgres / file endpoint.
- LLM‑assisted mapping during config; cached JavaScript transforms at runtime (no LLM latency); see the sketch after this list.
- Self‑heals schema drift: when the upstream API or schema changes, superglue regenerates the transform automatically and keeps the pipeline running.
- Security‑first: zero data stored; run fully on‑prem or use our hosted version.
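
To make the mapping and drift bullets above concrete, here is a minimal sketch of the kind of plain TypeScript transform that could be generated during configuration and cached for runtime execution; the types, function and field names are illustrative assumptions, not superglue's actual generated code.

```typescript
// Illustrative only: a plausible config-time generated transform. Because it is plain code,
// running it at request time involves no LLM call; if the upstream schema drifts (e.g. a
// field is renamed), a new transform would be generated instead of the pipeline breaking.
type UpstreamContact = { first_name: string; last_name: string; email_address: string };
type TargetContact = { fullName: string; email: string };

function mapContacts(source: { contacts: UpstreamContact[] }): TargetContact[] {
  return source.contacts.map((c) => ({
    fullName: `${c.first_name} ${c.last_name}`.trim(),
    email: c.email_address,
  }));
}

// Example input/output:
const mapped = mapContacts({
  contacts: [{ first_name: "Ada", last_name: "Lovelace", email_address: "ada@example.com" }],
});
console.log(mapped); // [{ fullName: "Ada Lovelace", email: "ada@example.com" }]
```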
quick start
hosted version
- Run on our cloud-hosted version.
- Install the superglue js/ts client:

```bash
npm install @superglue/client
```

- Configure your first API call:
```typescript
import { SuperglueClient, HttpMethod } from "@superglue/client";

const superglue = new SuperglueClient({
  apiKey: "************"
});

const workflowResult = await superglue.executeWorkflow({
  // input can be an ID of a pre-saved workflow or a WorkflowInput object
  workflow: {
    id: "myTodoUserWorkflow",
    steps: [
      {
        id: "fetchTodos", // Unique ID for this step
        apiConfig: {
          id: "jsonplaceholderTodos",
          urlHost: "https://jsonplaceholder.typicode.com",
          urlPath: "/todos",
          method: HttpMethod.GET,
          instruction: "Fetch a list of todos. We only need the first one for this example.",
        },
      },
      {
        id: "fetchUser",
        apiConfig: {
          id: "jsonplaceholderUsers",
          urlHost: "https://jsonplaceholder.typicode.com",
          urlPath: "/users/<<$.fetchTodos[0].userId>>", // JSONata path parameter for the first todo's userId
          method: HttpMethod.GET,
          instruction: "Fetch user details by user ID for the first todo."
        },
      },
    ],
    // Transform the results of the steps into the final desired output.
    // If not given, this will be generated from the response schema.
    finalTransform: "$",
    responseSchema: { // define the expected final output structure
      type: "object",
      description: "first todo",
      properties: {
        todoTitle: { type: "string" },
        userName: { type: "string" }
      }
    }
  },
  // `payload` can be used to pass initial data to the first step if needed, e.g. IDs to fetch,
  // filters, etc. In short, things that can change across calls.
  // payload: { userId: 1 },
  // `credentials` can be used to authenticate requests. They need to be referenced in the
  // api config (e.g. "headers": {"Authorization": "Bearer <<hubspot_api_key>>"}).
  // credentials: { hubspot_api_key: "pa_xxx" },
});

console.log(JSON.stringify(workflowResult, null, 2));
```
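
For orientation, this is roughly what the transformed data should look like when the workflow above succeeds, based on jsonplaceholder's well-known sample records; the wrapper fields on `workflowResult` itself are defined by the client library and are omitted here.

```typescript
// Hypothetical output data, conforming to the responseSchema above (wrapper fields omitted):
const exampleData = {
  todoTitle: "delectus aut autem", // title of todo #1 on jsonplaceholder
  userName: "Leanne Graham",       // name of user #1, who owns that todo
};
```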
what people build with superglue
- Voice assistants: reliably map intent to tool usage
- Extended GPT: offer more data sources and a whitelabel agent builder inside your internal GPT
- Extend AI assistant/co-pilot: offer more actions than search
- Ship connectors 10x faster, without the maintenance overhead
- Simple interface for legacy API pipelines
- CMS or cloud migration
- Transforming SQL queries into REST API calls
- And many more...
key features
- API Proxy: Configure APIs and intercept responses in real-time with minimal added latency
- LLM-Powered Data Mapping: Automatically generate data transformations using large language models
- Schema Validation: Ensure data compliance with your specified schemas
- File Processing: Handle various file formats (CSV, JSON, XML) with automatic decompression
- Flexible Authentication: Support for various auth methods including header auth, API keys, OAuth, and more (see the sketch after this list)
- Smart Pagination: Handle different pagination styles automatically
- Caching & Retry Logic: Built-in caching and configurable retry strategies
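
As a sketch of the authentication feature, the example below wires a named credential into a request header using the `<<...>>` templating shown in the quick start comments; the HubSpot endpoint, IDs and the `hubspot_api_key` name are illustrative assumptions, and any API and credential name work the same way.

```typescript
import { SuperglueClient, HttpMethod } from "@superglue/client";

const superglue = new SuperglueClient({ apiKey: "************" });

// Illustrative only: header auth with a named credential resolved at execution time.
const result = await superglue.executeWorkflow({
  workflow: {
    id: "hubspotContacts",
    steps: [
      {
        id: "fetchContacts",
        apiConfig: {
          id: "hubspotContactsApi",
          urlHost: "https://api.hubapi.com",
          urlPath: "/crm/v3/objects/contacts",
          method: HttpMethod.GET,
          instruction: "Fetch all CRM contacts.",
          // The credential is referenced by name and substituted when the step runs:
          headers: { Authorization: "Bearer <<hubspot_api_key>>" },
        },
      },
    ],
  },
  // The secret itself is passed per call; per the security note above, superglue stores no data.
  credentials: { hubspot_api_key: "pa_xxx" },
});

console.log(result);
```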
📖 Documentation
For detailed documentation, visit docs.superglue.cloud.
🤝 contributing
We love contributions! Feel free to open issues for bugs or feature requests.
license
superglue is GPL licensed. The superglue client SDKs are MIT licensed. See LICENSE for details.