Growth vs throughput mindsets

I've talked about this one to various managers over the years when the topic of work-scheduling humans comes up. There are two big frameworks for deciding who works which tickets:

  • Throughput: Assign the ticket or story to the person who will complete it fastest. Helpdesks often work on this model, since they're frequently quite under water.
  • Growth: Assign the second most knowledgeable person to the ticket/story (or third, or fourth if you have that many people) so they can get experience solving the problem. This method solves the cross-training problem.

Both approaches are valid, and which one a team takes is shaped by the incentives the team operates under. For a Helpdesk team, which is often under-staffed, throughput tends to be highly prioritized. For a Dev team with on-call responsibilities, growth tends to be the norm, both to make sure problems can be handled by anyone and to stop burning out your senior talent. Security teams tend to be a blend of both: ticket-based security reviews push a throughput mindset, while incident response prioritizes growth. For Platform teams, which tend to get pulled into a lot of incidents because problems happen on the systems they run, even when the platform itself isn't at fault, throughput mindsets are hard to avoid.

Now add coding LLMs to the mix.

The on-label reason to use agents such as Claude is throughput: spend less time on each task, get more tasks done. For Engineering Directors this is a great thing, because you get throughput gains from your reporting teams without sacrificing engineering maturity, letting you lean on the product roadmap a little harder.

I'd argue that the throughput nature of LLM agents actually sacrifices some growth at population scale when used in today's typical tech companies. Moving from a generative mindset to a revisionary one, from writing code to reviewing and editing generated code, means engineers spend less time learning problem-spaces in detail. Less time learning means more calendar time to build up true domain knowledge. Coding agents are a somewhat different thing than the decades of "just add another abstraction layer and forget about the lower level details" we've been doing since 1970. It's true that most engineers in SaaS companies aren't spending lots of time hand-tuning assembly for their ISA, and doing so is actively bad practice outside a few very specific problem domains. But the difference between yet another abstraction layer and a coding agent is the difference between abstraction and synthesis.

Abstraction is a distillation of a problem-space into an API, which can be represented by gRPC protobufs, function calls, or whatever. You have inputs, expected actions, and expected results; the abstraction handles the details so you don't have to, and often someone else is responsible for updates over time.

Synthesis is building novel functionality through combining multiple things to create a new thing. Humans traditionally have been the synthesis engines of code-production, but coding agents are beginning to automate portions of this.
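To make the distinction concrete, here's a minimal sketch (the function names and data are hypothetical, chosen purely for illustration): each function below is an abstraction that hides its details behind a simple interface, and the last one is synthesis, combining those abstractions into functionality neither provides alone.

```python
# Abstraction 1: caller supplies an ID, gets a record back. The caller
# doesn't know (or care) if it came from a cache, a database, or an RPC.
# (Hypothetical stand-in; a real version might wrap a gRPC client.)
def fetch_user(user_id):
    return {"id": user_id, "name": f"user-{user_id}", "active": user_id % 2 == 0}

# Abstraction 2: turns a list of records into summary counts.
def summarize(records):
    active = sum(1 for r in records if r["active"])
    return {"total": len(records), "active": active}

# Synthesis: a new thing built by combining the abstractions above --
# an active-user report that neither function provides on its own.
def active_user_report(user_ids):
    users = [fetch_user(uid) for uid in user_ids]
    return summarize(users)

print(active_user_report([1, 2, 3, 4]))  # {'total': 4, 'active': 2}
```

The IDE can help you with the two abstractions (signatures, autocomplete); deciding that combining them yields a useful report is the synthesis step, and that's the part coding agents are starting to take over.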

Writing code is synthesis. Until the advent of coding agents, your IDE would give you support through syntax highlighting and API short-cuts, reducing the cognitive load of bare syntax problems to let you focus on higher level problems like how a function should handle bad inputs. The IDE gives you support for abstractions, but leaves the synthesis to you.

Once coding agents get added in, you shift from generating code (synthesis) to reviewing automatically generated code whenever the agent creates something for you. This is where cognitive biases come into play. Remember the mindset framing I opened this post with? An engineer under throughput stress is going to be less diligent about checking generated code in detail, and will shift more of their domain learning to troubleshooting test failures and incident response; less domain knowledge gets developed during the initial writing. An engineer under no throughput stress, perhaps doing some for-fun coding, is more likely to take a fine-toothed comb to generated code to learn what it's trying to do, why it works the way it does, and what best practices it seems to be following. In this throughput-free case, the coding agent is a driver of growth.

Coding agents end up magnifying the biases present in the team's environment, amplifying positive feedback loops. The advertised reason to adopt coding agents is to increase developer throughput! That throughput bias is baked into the technology and its marketing, which means organizations requiring coding-agent use need to deliberately introduce negative feedback loops, or risk large-scale degradation in code quality and incident severity. Using these agents makes an organization more sensitive to growth/throughput bias shifts prompted by org-chart and quarterly-goal changes.

Combine the bias-amplifying quality of coding agents with the industry-wide contraction in US-based jobs for economic reasons, and you should expect to see a general retreat from growth mindsets among US-based engineers.