Why Coding Agents Write Plausible but Broken Code
A lot of AI-generated code fails in the same frustrating way: it looks reasonable, maybe even passes a few checks, and then breaks when the real system touches it.
The easy explanation is "the models are not good enough yet."
The harder and more useful explanation is that most programming languages still assume a human is carrying the missing context in their head.
If the language gives the agent five equivalent patterns for the same task, prose-only error messages, implicit side effects, and flaky test surfaces, the model has to improvise at exactly the points where you need it to be mechanical.
That is why recent discussion around agent-first languages matters. Armin Ronacher's essay A Language For Agents made the thesis explicit. My view is slightly more practical: the reliability gap shows up wherever the language and toolchain leave too much ambiguity at the repair boundary, the moment an agent reads a failure and must decide what to change.
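One way to make the repair boundary concrete is to contrast the two error surfaces directly. The sketch below is a hypothetical illustration, not taken from any real tool: the same missing-field failure reported once as prose an agent must interpret, and once as structured data an agent can act on mechanically. All names here (`validate_prose`, `validate_structured`, the error fields) are invented for the example.

```python
import json
from typing import Optional

def validate_prose(config: dict) -> Optional[str]:
    # Prose-only surface: the agent must guess which field is missing,
    # what type it should be, and what a safe default would look like.
    if "timeout" not in config:
        return "Something went wrong: the configuration seems incomplete."
    return None

def validate_structured(config: dict) -> Optional[dict]:
    # Structured surface: the error names the field, the expected type,
    # and a candidate fix, so repair becomes a lookup instead of a guess.
    if "timeout" not in config:
        return {
            "code": "MISSING_FIELD",
            "field": "timeout",
            "expected_type": "int",
            "suggested_default": 30,
        }
    return None

config = {"retries": 3}
print(validate_prose(config))
err = validate_structured(config)
print(json.dumps(err))
if err and err["code"] == "MISSING_FIELD":
    # Mechanical repair: apply the suggested default to the named field.
    config[err["field"]] = err["suggested_default"]
print(config["timeout"])
```

The point is not that prose errors are useless to humans; it is that only the second surface lets an agent repair the failure without improvising.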