$ cat Martin_Trojer
#Programming Blog

Regulation changes the game, not the pressure

ai software productivity

The impact of AI tooling on the software industry is not uniform. The ‘YC startup bubble’ gets most of the attention, and that commentary distorts the picture. ‘Move fast and break things’ is simply not viable in many parts of the industry.

Take heavily regulated industries like finance. It’s easy to mischaracterize these companies as slow-moving or even lazy, when in fact they are solving a harder problem. Generating plausible code faster is clearly useful. But a bank is not being asked to optimize for plausible code. It is being asked to produce changes that can be explained, reviewed, approved, operated, and defended later. Regulation does not make that pressure disappear. It changes the terms on which you are allowed to respond to it.

That is why I think the important distinction is not regulated versus unregulated. It is this:

  • new code
  • trusted code
  • used code

New code is cheap to produce now. It may be clean, well-structured, and even well-tested. It may also be wrong, weakly understood, or poorly owned.

Trusted code is code that has been reviewed, exercised, and understood well enough that people are willing to stand behind it.

Used code goes one step further. It has survived contact with reality. It has been lived with.

While this distinction is generally applicable, it matters more in finance, because several things there are not optional: explainability, policy alignment, audit trails, accountable ownership, and operational resilience.

Compliance is part of the product, not overhead sitting next to it. A change is not real just because it compiles and passes a few tests. It has to fit a system of controls, approvals, evidence, and ongoing accountability.

That is why AI use in finance should be judged less by how much code it can generate and more by whether it helps produce better changes and better evidence. Used well, these tools can help with test generation, summarisation, documentation, change explanations, evidence drafting, and exploring implementation options before humans commit to one.

Used badly, they can flood an organisation with polished but weakly owned output. The danger is not just incorrect code. It is opaque systems that nobody can properly explain later, paired with a false sense of progress because the artifacts look finished.

That is also why older codebases are not automatically a disadvantage here. In regulated environments, code that has already been exercised, reviewed, and lived with is often a stronger foundation than fresh output, however elegant the fresh output appears. Trusted and used code carries operational memory.

That is what I mean when I say regulation changes the game, not the pressure.

In the AI era, proof of work gets cheaper.

Proof of usage matters more.