
2 posts tagged with "reliability"


Why Coding Agents Write Plausible but Broken Code

· 6 min read

A lot of AI-generated code fails in the same frustrating way: it looks reasonable, maybe even passes a few checks, and then breaks when the real system touches it.

The easy explanation is "the models are not good enough yet."

The harder and more useful explanation is that most programming languages still assume a human is carrying the missing context in their head.

If the language gives the agent five equivalent patterns, errors reported only as prose, implicit side effects, and flaky test surfaces, the model has to improvise at exactly the points where you need it to be mechanical.
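The "five equivalent patterns" problem is easy to see in a concrete case. The sketch below shows five equally valid Python ways to compute the same list of squares; the function names are illustrative, and none of the variants is mechanically preferred over the others.

```python
# Five equally idiomatic ways to produce the same result in Python.
# A human picks one by team convention; an agent has to guess, and its
# guess may not match the codebase's convention.

def squares_comprehension(n):
    return [i * i for i in range(n)]

def squares_map(n):
    return list(map(lambda i: i * i, range(n)))

def squares_loop(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def squares_generator(n):
    return list(i * i for i in range(n))

def squares_recursive(n):
    if n == 0:
        return []
    return squares_recursive(n - 1) + [(n - 1) * (n - 1)]

# All five agree on the output; nothing in the language selects one.
```

Each variant returns the same value, so a reviewer cannot reject any of them as wrong; the ambiguity is entirely stylistic, which is exactly the kind of decision an agent has no context to make.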

That is why recent discussion around agent-first languages matters. Armin Ronacher's essay A Language For Agents made the thesis explicit. My view is slightly more practical: the reliability gap shows up wherever the language and toolchain leave too much ambiguity at the repair boundary.
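One way to picture the repair boundary is to contrast a prose-only error with a structured diagnostic. The sketch below uses an invented diagnostic schema (the field names are assumptions, not any real toolchain's format) to show why one is mechanically actionable and the other is not.

```python
# Contrast: a prose-only error vs. a structured diagnostic an agent can
# act on mechanically. The JSON schema here is invented for illustration.
import json

prose_error = "Something went wrong near the top of utils, maybe a type thing?"

structured_error = json.dumps({
    "file": "utils.py",
    "line": 12,
    "code": "type-mismatch",
    "expected": "int",
    "found": "str",
})

def plan_repair(diagnostic: str):
    """Return a concrete edit target (file, line, error code) if the
    diagnostic is structured; prose forces the agent to guess."""
    try:
        d = json.loads(diagnostic)
    except json.JSONDecodeError:
        return None  # prose-only: no mechanical repair target
    return (d["file"], d["line"], d["code"])

# plan_repair(prose_error)      -> None
# plan_repair(structured_error) -> ("utils.py", 12, "type-mismatch")
```

The structured case hands the agent an exact file, line, and error class; the prose case leaves it improvising at precisely the repair boundary the post is describing.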

Programming With Coding Agents Is Not Human Programming With Better Autocomplete

· 7 min read

Series navigation: Post 1 of 3 · Next: How X07 Was Designed for 100% Agentic Coding

For the last twenty years, most programming languages and most software practices were designed around a simple assumption: a human is the one holding the whole thing together.

A human reads code, remembers conventions, notices weirdness, and makes judgment calls when the codebase offers five equally valid ways to solve the same problem.

A coding agent works differently.

It is very good at wide edits. It is very good at following explicit contracts. It is very good at retry loops. But it is much worse than a strong engineer at carrying a large unstated architecture around in its head.
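The "explicit contracts plus retry loops" strength can be sketched directly. The code below is a hypothetical agent-style loop, not any real framework's API: `propose` stands in for an agent emitting a candidate, and `check_contract` stands in for a compiler or test suite.

```python
# Hypothetical agent retry loop: propose a candidate, check it against
# an explicit, machine-checkable contract, retry on failure.

def check_contract(result):
    """Explicit contract: result must be a sorted list of ints."""
    return (isinstance(result, list)
            and all(isinstance(x, int) for x in result)
            and result == sorted(result))

def propose(attempt):
    """Stand-in for an agent's candidates, improving per attempt."""
    candidates = [
        [3, 1, 2],    # attempt 0: wrong order -> contract fails
        [1, 2, "3"],  # attempt 1: wrong type  -> contract fails
        [1, 2, 3],    # attempt 2: satisfies the contract
    ]
    return candidates[min(attempt, len(candidates) - 1)]

def retry_until_valid(max_attempts=5):
    for attempt in range(max_attempts):
        result = propose(attempt)
        if check_contract(result):
            return result, attempt
    raise RuntimeError("contract never satisfied")

# retry_until_valid() -> ([1, 2, 3], 2)
```

The loop needs no unstated architecture in anyone's head: everything the agent must satisfy is in `check_contract`, which is the shape of workflow the post argues agents are good at.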

That is why modern programming with coding agents is not simply normal programming done faster. It is a different optimization problem.