# Programming Blog

New code, trusted code, used code

ai software productivity

AI tooling is not hitting the software industry in a uniform way. The YC startup bubble gets most of the attention, and that distorts the picture. "Move fast and break things" is simply not viable in big parts of the industry.

Take heavily regulated industries like finance. It’s easy to mischaracterize these companies as slow-moving or even lazy, when really they’re solving a harder problem. Generating plausible code faster is clearly useful. But a bank is not being asked to optimize for plausible code. It’s being asked to produce changes that can be explained, reviewed, approved, operated, and defended later. Regulation doesn’t make that pressure go away. It changes the terms on which you are allowed to respond to it.

That is why I think the important distinction is not regulated versus unregulated. It is this:

  • new code
  • trusted code
  • used code

New code is cheap to produce now. It may be clean, well-structured, and even well-tested. It may also be wrong, weakly understood, or poorly owned.

Trusted code is code that has been reviewed, exercised, and understood well enough that people are willing to stand behind it.

Used code goes one step further. It has survived contact with reality. It has been lived with.

This distinction applies pretty broadly, but it matters more in finance because several things are not optional: explainability, policy alignment, an audit trail, accountable ownership, and operational resilience.

Compliance is part of the product, not overhead sitting next to it. A change is not real just because it compiles and passes a few tests. It has to fit a whole system of controls, approvals, evidence, and ongoing accountability.

That is why AI use in finance should be judged less by how much code it can generate and more by whether it helps produce better changes and better evidence. Used well, these tools can help with test generation, summarisation, documentation, change explanations, evidence drafting, and exploring implementation options before humans commit to one.

Used badly, they can flood an organisation with polished but weakly owned output. The danger isn’t just incorrect code. It’s opaque systems that nobody can properly explain later, plus a false sense of progress because the artifacts look finished.

That is also why older codebases are not automatically a disadvantage here. In regulated environments, code that has already been exercised, reviewed, and lived with is often a stronger foundation than fresh output, however elegant that fresh output looks. Trusted and used code carries operational memory.

That is what I mean when I say regulation changes the game, not the pressure.

In the AI era, proof of work gets cheaper.

Proof of usage matters more.