Essay #03 · 2026

When Teams Stop Being the Unit

Software engineering once optimized for individuals.
Then it learned to optimize for teams.

That shift was necessary. Large systems made it unavoidable. Coordination, not syntax, became the limiting factor. Practices evolved accordingly. Collaboration stopped being overhead and became infrastructure.

For a time, that was enough.

The arrival of autonomous AI agents signals that it no longer is.

When execution becomes abundant, the center of gravity moves. Code can be produced cheaply. Variants can be explored continuously. Verification can be automated. What remains scarce is judgment—deciding what should change, what should persist, and what must never happen.

Teams were designed to coordinate humans. Agents do not fit neatly into that structure. They do not negotiate. They do not tire. They execute whatever the system allows them to execute.

This exposes something teams were never forced to make explicit: boundaries.

Ownership, authority, escalation, and limits were once social agreements. With agents, they become architectural requirements. Among humans, ambiguity merely slows work down; among agents, it compounds.
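What "boundaries as architecture" might look like can be sketched in a few lines. Everything here is hypothetical: the ownership map, the limits table, and the `authorize` function are illustrations of the idea that ownership, authority, and escalation become data an agent's actions are checked against, rather than agreements people remember.

```python
# Hypothetical sketch: boundaries declared as data, so an agent's action
# either falls inside them or escalates to a human.

OWNERSHIP = {
    "billing/": "team-payments",
    "auth/": "team-identity",
}

LIMITS = {
    "team-payments": {"may_merge": True, "max_files_changed": 20},
    "team-identity": {"may_merge": False, "max_files_changed": 5},
}

def authorize(agent_team: str, path: str, files_changed: int) -> str:
    """Return 'allow' or 'escalate' based on declared boundaries."""
    owner = next((t for p, t in OWNERSHIP.items() if path.startswith(p)), None)
    if owner != agent_team:
        return "escalate"  # acting outside declared ownership
    limits = LIMITS[agent_team]
    if not limits["may_merge"] or files_changed > limits["max_files_changed"]:
        return "escalate"  # exceeds declared authority
    return "allow"

print(authorize("team-payments", "billing/invoice.py", 3))  # allow
print(authorize("team-payments", "auth/login.py", 3))       # escalate
```

The point is not the specific rules but their form: once limits are data, there is no ambiguity for an agent to compound.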

As a result, workflows begin to matter more than code. The way work is planned, reviewed, and corrected becomes the system’s memory. Agents execute these workflows faithfully. Humans modify them when reality changes.

Context does not scale through memory. It scales through artifacts. Decisions that are externalized can be enforced. Invariants that are written down can be checked. What remains implicit becomes a source of drift.
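The claim that written-down invariants can be checked admits a minimal sketch. The invariant names and the shape of the `change` record below are invented for illustration; the idea is only that a decision externalized as a predicate can be enforced mechanically on every change.

```python
# Hypothetical sketch: invariants as artifacts. Each rule is a name plus
# a predicate over a proposed change, so violations are detected rather
# than remembered.

INVARIANTS = [
    ("no secrets in config",
     lambda change: "api_key" not in change["config"]),
    ("schema version only increases",
     lambda change: change["new_schema"] >= change["old_schema"]),
]

def check(change: dict) -> list[str]:
    """Return the names of invariants the change violates."""
    return [name for name, holds in INVARIANTS if not holds(change)]

change = {"config": "timeout=30", "old_schema": 4, "new_schema": 5}
print(check(change))  # [] -> no violations; safe to proceed
```

An invariant left implicit would appear nowhere in this list, and nothing would catch its violation: exactly the drift the text describes.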

Governance, in this setting, is not control. It is clarity. Without it, autonomy becomes noise.

The unit of optimization is shifting again.
From teams to learning systems.

Systems that can observe themselves, encode what they learn, and change how they operate. Not occasionally, but continuously.

Software will still be written.
But the competitive advantage will lie elsewhere.