<p align="center"> <img src="logo.png" width="200" alt="ZS — Zobr Script"> </p>

<h1 align="center">ZS (Zobr Script)</h1>

<p align="center"> A cognitive scripting language for structured reasoning with LLMs </p>


ZS provides formal constructs for describing reasoning processes — not as rigid instructions, but as composable cognitive operations with variables, control flow, and result formatting.

Think of it as SQL for thinking: you define what cognitive steps to take, the LLM decides how to execute them.

Scripts are executed by an LLM as interpreter: the model reads a .zobr file, executes operations step by step, tracks variables, follows control flow, and produces structured output.

## Quick Example

```
task: "Evaluate risks of AI in education"

risks = survey("main risks of AI in education", count: 4)
evidence = for r in risks {
  concrete = ground(r, extract: [examples, studies])
  yield { risk: r, evidence: concrete }
}
overview = synthesize(evidence, method: "rank by severity")

result = conclude {
  top_risks: list
  most_critical: string
  recommendation: string
  confidence: low | medium | high
}
```

## 12 Built-in Cognitive Operations

Operations are organized into five categories:

### Discovery — explore and extract

| Operation | Description |
| --- | --- |
| `survey(topic, count?)` | Explore a topic and identify key elements — positions, factors, perspectives |
| `ground(claim, extract?)` | Connect a claim to concrete evidence, facts, or experience |
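
A minimal sketch of the two discovery operations in combination, mirroring the Quick Example's syntax (the topic and `extract` fields are illustrative, not from the spec):

```
factors = survey("factors driving technical debt", count: 3)
grounded = for f in factors {
  ev = ground(f, extract: [examples, studies])
  yield { factor: f, evidence: ev }
}
```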

### Argument — reason and challenge

| Operation | Description |
| --- | --- |
| `assert(thesis, based_on?)` | State a position with reasoning |
| `doubt(target)` | Problematize a claim — find weaknesses, hidden assumptions, edge cases |
| `contrast(target, with?)` | Find or construct the strongest opposing position or counterexample |
| `analogy(target, from?)` | Transfer understanding from another domain to reveal hidden structure |
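
A sketch of an argument chain built from these four operations (the thesis and domains are invented for illustration; signatures follow the table above):

```
claim = assert("test coverage above 90% yields diminishing returns", based_on: "defect-rate data")
weaknesses = doubt(claim)
counter = contrast(claim, with: "safety-critical software practice")
bridge = analogy(claim, from: "insurance: premiums vs. expected losses")
```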

### Synthesis — combine and transform

| Operation | Description |
| --- | --- |
| `synthesize(sources, method?)` | Combine multiple findings into emergent insight (not just a summary) |
| `reframe(target, lens?)` | Reformulate a problem in different terms, change the analytical lens |
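
Continuing the same illustrative sketch, the synthesis operations fold an argument back together (the list-literal for `sources` is an assumption, modeled on the `extract: [ ... ]` syntax above):

```
insight = synthesize([claim, counter, weaknesses], method: "resolve the tension")
reframed = reframe("coverage targets", lens: "risk management")
```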

### Meta — reflect and steer

| Operation | Description |
| --- | --- |
| `assess(scale?)` | Reflective pause — evaluate the current state of reasoning (open/converging/stuck) |
| `pivot(reason)` | Explicitly change reasoning strategy when the current approach is insufficient |
| `scope(narrow\|wide)` | Control analytical zoom — from specific mechanisms to systemic connections |
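
A hedged sketch of the meta operations steering a stalled analysis (the `if` comparison syntax is assumed, not confirmed by the excerpt above):

```
state = assess()
if state == stuck {
  pivot("the current framing is circular")
  scope(wide)
}
```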

### Output

| Operation | Description |
| --- | --- |
| `conclude { ... }` | Define the structure and format of the final result |

Plus: variables, `for`/`if`/`loop` control flow, user-defined functions (`define`), `yield`, `import`, `@last`/`@N` references.
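
As an illustrative sketch, a `define` block packages a reusable pattern, and `@last` references the most recent result (exact reference semantics per the spec; the function name and topics here are invented):

```
define grounded_survey(topic) {
  items = survey(topic, count: 3)
  for i in items {
    yield { item: i, evidence: ground(i) }
  }
}

edu = grounded_survey("risks of AI in education")
summary = synthesize(@last, method: "cluster by theme")
```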

## zobr-check — Static Validator

The package includes a CLI tool for static validation of .zobr scripts:

```bash
# Install from source
git clone https://github.com/docxi-org/zobr-script.git
cd zobr-script
npm install && npm run build

# Validate a script
node dist/cli.js script.zobr
```

The validator checks:

  • Syntax correctness (PEG grammar)
  • Undefined variable references
  • Correct operation signatures (positional/named argument counts)
  • Unused variables (warnings)
  • Reserved word misuse
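
For example, this hypothetical fragment would trip two of those checks — `sorces` is an undefined reference (a typo for `sources`), and `evidence` is assigned but never used:

```
sources = survey("key sources", count: 3)
evidence = ground("the central claim")
result = synthesize(sorces, method: "merge")
```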

## How It Works

In the current version, a ZS script is executed by an LLM as interpreter:

  1. Provide the language spec and system prompt as context — together they define the full operation semantics, control flow rules, and output format
  2. Pass a .zobr script as the task
  3. The LLM executes operations step by step, tracking variables and following control flow
  4. Output is structured according to the conclude block

## MCP Server

Connect ZS to Claude, Claude Desktop, or any MCP client — no installation needed.

MCP endpoint: https://zobr-script-mcp.docxi-next.workers.dev/mcp

In claude.ai: Settings → Connectors → Add custom connector → paste the URL above.

Tools provided:

  • zs_execute — feed a script, get full spec + interpreter context injected automatically
  • zs_validate — full PEG parser + semantic validation (same as zobr-check)
  • zs_operations — quick reference for all 12 operations

Also available on Smithery.

## Benchmark Results

Tested with three Claude models across 5 tasks of increasing complexity:

| Model | Composite Score | Structural Compliance | Content Quality | Generation Quality |
| --- | --- | --- | --- | --- |
| Claude Opus 4.6 | 9.4 / 10 | 9.8 | 9.4 | 9.0 |
| Claude Sonnet 4.6 | 9.3 / 10 | 9.7 | 9.3 | 9.0 |
| Claude Haiku 4.5 | 7.9 / 10 | 9.3 | 7.0 | 7.5 |

Key findings:

  • Structure compresses the capability gap: Sonnet achieves near-parity with Opus (9.3 vs 9.4) — when reasoning structure is provided by the script, the model's job shifts from organizing thought to filling containers with content
  • Even the smallest model follows scripts with 93% structural fidelity: ZS is a reasoning amplifier, not a capability test
  • All models generate valid scripts: Task 05 (script generation) produced 0 syntax errors across all models

Full results: benchmark report and infographic (both also available in Russian)

## Use Cases

  • Repeatable analysis patterns — encode your best analytical workflow once as a .zobr script, apply it to any new input
  • Quality assurance for AI reasoning — auditable operations with visible variable flow, not black-box responses
  • Cost optimization via model routing — use smaller models for structural tasks, larger models only where depth matters
  • Knowledge capture — distill exceptional AI reasoning into reusable .zobr artifacts
  • Education & critical thinking — externalize the structure of rigorous thinking: survey before asserting, doubt your own claims, contrast with the strongest counter
  • Multi-agent cognitive workflows — scripts as shared protocols between agents

## What ZS Is Not

  • Not a prompt template engine (see POML)
  • Not an LLM orchestration framework (see DSPy, LangChain)
  • Not chain-of-thought prompting

ZS operates at a different level: it formalizes cognitive operations themselves as first-class language constructs.

## Documentation

## Status

Spec v0.1. Benchmark complete (3 models × 5 tasks). Static validator shipped.

## License

Apache License 2.0 — see LICENSE


Part of the Black Zobr ecosystem.
