
Building an X07 Service: From Scaffold to Certificate

· 15 min read

Previously: How to Trust X07 Code Written by Coding Agents

The previous posts explained why X07 exists and how its trust model works. This post walks through doing it — scaffolding a service, adding domain logic, wiring tests and contracts, and producing the kind of certificate bundle that lets a reviewer approve a change without reading the whole source tree.

Everything below is something a coding agent can execute in one session. That is the point.

How to Trust X07 Code Written by Coding Agents

· 11 min read

Series navigation: Previous: How X07 Was Designed for 100% Agentic Coding · Post 3 of 3

Most code written by coding agents should not be trusted on sight.

That is not because agents are useless. It is because normal languages and normal toolchains were built for human review, not for machine-checkable trust. So the default reaction is still, "I need to read the code." X07 changes that by changing what counts as evidence.

Two ideas from other engineering fields make this possible. Formal verification means using mathematical proof to show that code does exactly what its specification says — not "we ran some tests and they passed," but "we can prove this function never returns a negative number under any input." Code certification takes that further: it bundles proofs, test results, architecture checks, and runtime evidence into a structured package — a certificate — that a reviewer can inspect and approve without reading every line of source.

Think of it like a building inspection report: you do not need to watch every nail go in if you trust the inspection process, the inspector's credentials, and the evidence they collected.

The idea is not new in principle: Clover showed that verification can act as a strong filter in a closed loop, with up to 87% acceptance on correct CloverBench examples and no false positives on adversarial incorrect ones in that evaluation setting. The lesson is not "trust the model." The lesson is "make the checker honest, explicit, and useful." (arXiv)
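To make the "prove, not test" distinction concrete, here is what the "never returns a negative number" claim looks like as an actual proof, sketched in plain Lean 4 rather than X07. The function and theorem names are invented for illustration.

```lean
-- A minimal sketch of the claim above, in plain Lean 4 (not X07).
-- `omega` is a standard linear-arithmetic tactic in recent Lean toolchains.
def clampToZero (x : Int) : Int :=
  if x < 0 then 0 else x

-- This is a proof, not a test: it covers every possible input at once.
theorem clampToZero_nonneg (x : Int) : 0 ≤ clampToZero x := by
  unfold clampToZero
  split <;> omega
```

A test samples a handful of inputs; the theorem covers all of them, which is exactly the kind of evidence a certificate can carry.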

Build Your First MCP Server in X07

· 5 min read

The Model Context Protocol is now the standard way to connect coding agents to tools and data sources.

X07 ships an MCP kit through x07-mcp, and the core x07 CLI delegates template scaffolding and conformance tooling to it.

This tutorial uses the current HTTP template because it is the easiest one to probe with curl. If you want stdio transport instead, use mcp-server-stdio. If you need long-running task APIs, use mcp-server-http-tasks.
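If you would rather drive that probe from a script than from curl, here is a rough Python sketch of the same handshake. The port, the /mcp path, and the protocol revision are assumptions about the template, not details from the docs, so adjust them to match what the scaffold actually serves.

```python
# A rough sketch of probing an MCP server over HTTP with plain JSON-RPC.
# Assumptions (not from the post): the scaffolded server listens on
# localhost:8080, exposes its MCP endpoint at /mcp, and answers with plain
# JSON rather than an SSE stream.
import json
import urllib.request

BASE_URL = "http://localhost:8080/mcp"  # assumed endpoint for the HTTP template


def rpc(method, params, req_id, session=None):
    """Send one JSON-RPC request and return (parsed body, session id if any)."""
    body = json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    ).encode()
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }
    if session:
        headers["Mcp-Session-Id"] = session
    req = urllib.request.Request(BASE_URL, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read()), resp.headers.get("Mcp-Session-Id")


# Handshake, then ask the server which tools it exposes. A fully conformant
# client also sends a notifications/initialized notification between these
# two calls; it is omitted here to keep the probe short.
init, session_id = rpc(
    "initialize",
    {
        "protocolVersion": "2025-03-26",  # example MCP protocol revision
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.0.1"},
    },
    req_id=1,
)
print(init["result"]["serverInfo"])

tools, _ = rpc("tools/list", {}, req_id=2, session=session_id)
print([t["name"] for t in tools["result"]["tools"]])
```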

Local LLMs Can Generate Structurally Valid X07 Programs

· 5 min read

One of the most common failure modes with local code models is not bad logic. It is broken structure.

The model gets close, but the output does not parse, the imports drift, the syntax is incomplete, or the generated file shape is not valid for the target language.

X07 takes a different path: the canonical source format is x07AST JSON, and the toolchain can export both a JSON Schema and a grammar bundle for that structure. In X07 docs, that export surface is called Genpack.

That matters for local models because constrained decoding can now target the language's real source form instead of a best-effort textual approximation.
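To make that concrete, here is a rough Python sketch of the structural gate a local-model loop could run against an exported schema. The file paths, the candidate file name, and the schema draft are placeholders for this example, not details from the X07 docs.

```python
# A rough sketch of the structural check a local-model loop could run.
# Assumptions (not from the docs): the exported schema path, the candidate
# file name, and the JSON Schema draft in use.
import json

from jsonschema import Draft202012Validator  # pip install jsonschema

# Schema exported by the toolchain for the x07AST source form.
with open("genpack/x07ast.schema.json") as f:
    schema = json.load(f)

# Whatever the local model produced under constrained decoding.
with open("candidate_module.x07ast.json") as f:
    candidate = json.load(f)

errors = list(Draft202012Validator(schema).iter_errors(candidate))
if not errors:
    print("structurally valid x07AST document")
else:
    # Each error carries a concrete JSON path, so the repair step can be mechanical.
    for err in errors:
        location = "/".join(str(p) for p in err.path) or "<root>"
        print(f"{location}: {err.message}")
```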

Why Coding Agents Write Plausible but Broken Code

· 6 min read

A lot of AI-generated code fails in the same frustrating way: it looks reasonable, maybe even passes a few checks, and then breaks when the real system touches it.

The easy explanation is "the models are not good enough yet."

The harder and more useful explanation is that most programming languages still assume a human is carrying the missing context in their head.

If the language gives the agent five equivalent patterns, prose-only errors, implicit side effects, and flaky test surfaces, the model has to improvise at the exact points where you need it to be mechanical.

That is why recent discussion around agent-first languages matters. Armin Ronacher's essay A Language For Agents made the thesis explicit. My view is slightly more practical: the reliability gap shows up wherever the language and toolchain leave too much ambiguity at the repair boundary.

Why I'm Building a Programming Language for AI Agents

· 5 min read

I have spent a lot of time watching coding agents fail in quiet ways.

Not the obvious failures. Not the "I cannot do that" failures.

The quiet ones.

The function that looks clean but uses the wrong boundary encoding. The patch that compiles but drifts away from the repo's architecture. The generated test that passes because the fixture is too weak, not because the code is right.

Those failures changed how I think about programming languages.

X07: A Compiled Language for Agentic Coding

· 5 min read

X07 is a compiled systems language built around a simple constraint:

coding agents are much more reliable when the language and toolchain stop asking them to improvise at critical boundaries.

Most mainstream languages were optimized for humans carrying context in their heads. Agents work differently. They do better when the source form is canonical, the diagnostics are structured, the effect boundaries are explicit, and the repair loop is deterministic.

That is the design space X07 is exploring.

How X07 Was Designed for 100% Agentic Coding

· 8 min read

Series navigation: Previous: Programming With Coding Agents Is Not Human Programming With Better Autocomplete · Post 2 of 3 · Next: How to Trust X07 Code Written by Coding Agents

Most languages are trying to make humans flexible.

X07 is trying to make agents reliable.

That sounds like a small wording difference, but it changes almost everything: the source format, the diagnostics, the execution model, the testing story, the architecture tooling, and even the surrounding ecosystem.

The official X07 docs describe an agent-first systems language, and the current toolchain surface is built around deterministic worlds, record and replay, schema derivation, state machines, property-based testing, function contracts with bounded verification, and review or trust artifacts. That is not an AI plugin bolted onto a normal language. It is a language and toolchain shaped around machine-driven repair loops from the start.

Programming With Coding Agents Is Not Human Programming With Better Autocomplete

· 7 min read

Series navigation: Post 1 of 3 · Next: How X07 Was Designed for 100% Agentic Coding

For the last twenty years, most programming languages and most software practices were designed around a simple assumption: a human is the one holding the whole thing together.

A human reads code, remembers conventions, notices weirdness, and makes judgment calls when the codebase offers five equally valid ways to solve the same problem.

A coding agent works differently.

It is very good at wide edits. It is very good at following explicit contracts. It is very good at retry loops. But it is much worse than a strong engineer at carrying a large unstated architecture around in its head.

That is why modern programming with coding agents is not just normal programming done faster. It is a different optimization problem.