The Steinberger Threshold

Most leaders are asking the wrong question about AI.

They ask whether their teams are using it. They ask which model to standardize on. They ask whether agents are ready for production. They ask how quickly they can drive adoption.

That is all downstream. The real question is simpler and far more revealing: who on your team can actually direct AI, and who is starting to be directed by it? That is the divide I keep seeing in product and engineering organizations.

Some people use AI to expand their judgment. Others use it to avoid judgment. Some get faster while staying in control. Others get busy, impressed, and strangely passive. On the surface, both groups can look productive. Both can generate output. Both can show progress.

Only one of them is actually becoming more valuable. The line between them is what some have started calling the Steinberger Threshold.

I am borrowing the phrase from recent discussion around Peter Steinberger, but what matters is not the label. What matters is the shift it names. Steinberger is worth paying attention to because he has lived through multiple eras of software building, from deep technical craftsmanship to AI-native execution. The lesson embedded in his public writing and interviews is clear: the advantage is no longer just in doing the work yourself. The advantage is in framing the work, shaping the environment, inspecting the result, and deciding what happens next.

That is not prompt engineering. That is modern judgment. And that is why this matters more than another generic debate about AI productivity.

We are moving into a world where the cost of execution is falling fast. Agents can increasingly read codebases, edit files, run tests, summarize options, and handle meaningful chunks of delivery work. As that happens, the bottleneck shifts.

When execution gets cheaper, judgment gets more expensive.

That changes who stands out. It changes who scales. It changes who should lead.

The people who thrive in this environment will not be the ones who simply know how to use AI. That bar is dropping quickly. The people who thrive will be the ones who can define intent clearly, give the agent enough structure to move fast, and still know when the machine is wrong, shallow, overconfident, or drifting off mission.

That is the threshold. Below it, people let the agent set the pace, shape the work, and quietly narrow their thinking. Above it, people use the agent as leverage while keeping hold of direction, standards, and accountability.

This is not a tooling issue. It is a leadership issue.

The biggest mistake I see companies making is assuming AI adoption and AI capability are the same thing. They are not. Giving people access to powerful models tells you almost nothing about whether they can use them well. In fact, broad access can hide the problem for a while. Everyone suddenly looks more productive. More documents appear. More prototypes show up. More code gets written. More tickets move.

But velocity is a bad metric when the system can generate convincing motion on demand.

That is where executives get trapped. They see acceleration and assume capability has risen with it. Sometimes it has. Sometimes they are just watching the organization become more dependent on machine output without improving its ability to set direction or judge quality.

That is the real risk.

The person below the Steinberger Threshold is not necessarily junior. They are not necessarily non-technical. They are simply no longer fully in command once AI enters the loop. They delegate too early. They trust polished output too quickly. They confuse completeness with correctness. They let the system define the path instead of using the system to execute against a path they have defined.

The person above the threshold behaves very differently. They treat the agent like fast, tireless, sometimes brilliant labor. They know what outcome they want. They know where ambiguity is useful and where it is dangerous. They know when to tighten the frame. They know what needs review and what can be safely skimmed. Most importantly, they stay accountable for the result.

That last point matters more than people admit.

The best agent operators are usually not the ones writing the fanciest prompts. They are the ones with the clearest standards. They know what good looks like. They can spot weak reasoning. They can tell when the agent is optimizing for fluency instead of truth, or speed instead of soundness. They do not need to inspect every line, but they know exactly which lines matter.

This is why I think the rise of agents will reshuffle status inside product and engineering teams more than most people expect.

Some managers will struggle because they were already operating through abstraction without enough contact with the actual work. AI will expose that quickly. If you cannot define success in a way that a machine can execute against and a human can validate, your authority gets thinner.

Some engineers will struggle too, especially those whose identity is tied too tightly to personal output. AI does not care about your attachment to hand-crafted implementation if someone else can steer the machine to a better result faster.

And some people in the middle of the organization will rise quickly. They may not have the biggest titles. But they have taste. They can decompose messy problems. They can write clear acceptance criteria. They can create structure where others create noise. They can tell the difference between a useful first pass and a dangerous hallucination. In an agentic world, those people become force multipliers.

You can already see the outlines of this shift in the market. Companies are starting to act as though part of every team’s job is now translating work into something machines can execute. Whether you look at AI-first operating models, agentic coding environments, or the emerging idea of software factories, the pattern is the same: the bottleneck is moving away from raw execution and toward the ability to define, direct, and verify execution.

That is the Steinberger Threshold in practice.

So how do you figure out who has crossed it? Not with training completion rates. Not with prompt libraries. Not with AI badges. You run a scout mission.

By that I mean a real piece of work with enough ambiguity that judgment matters, enough structure that success can be observed, and enough consequence that the quality of direction shows up clearly. It should be something an agent can materially accelerate, but not something so trivial that the agent can stumble into a passable answer without supervision.

A good scout mission is not theater. It is a bounded business problem that exposes how someone thinks in an agentic environment.

Give them a real bug with messy symptoms. Give them a workflow that needs redesign. Give them a thin internal tool to build. Give them a reporting process full of edge cases. Then watch what they do.

Do they sharpen the objective before they delegate? Do they define acceptance criteria? Do they improve the environment with better tests, clearer documentation, or stronger context? Do they review the critical path or only the polished summary? Do they notice drift? Do they challenge the output? Can they explain why the result should be trusted?
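To make one of those questions concrete: here is a minimal sketch of what "define acceptance criteria before delegating" can look like on an agentic coding task. The proration scenario, the function, and its edge cases are all invented for illustration; the point is the sequencing, not the specifics.

```python
# A minimal, self-contained sketch of acceptance criteria written as tests
# BEFORE any work is delegated to an agent. The proration scenario is
# invented for illustration; the stub implementation stands in for whatever
# the agent eventually produces, and exists only so the sketch runs.

def prorate(days_used: int, days_in_month: int, monthly_price: float) -> float:
    """Stub under test: charge only for the days actually used."""
    return monthly_price * days_used / days_in_month

# The operator's acceptance criteria, fixed up front. These define the path
# the agent must execute against, and they tell the reviewer exactly which
# behavior matters when skimming the rest of the diff.

def test_full_month_charges_full_price():
    assert prorate(30, 30, 30.0) == 30.0

def test_single_day_charges_one_thirtieth():
    assert abs(prorate(1, 30, 30.0) - 1.0) < 1e-9

def test_zero_days_charges_nothing():
    assert prorate(0, 30, 30.0) == 0.0
```

Run it with pytest and the details stop mattering. What matters is the order of operations: criteria first, delegation second, review against the criteria last. That order is the behavior you are watching for.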

Most importantly, when the agent gets stronger, do they become more decisive or more passive? That is the question.

Because that is what separates someone who is using AI as leverage from someone who is slowly handing over their agency to it.

My view is simple. The companies that win with AI will not be the ones with the most licenses, the biggest model budget, or the loudest transformation rhetoric. They will be the ones that identify who can actually operate above the Steinberger Threshold, then redesign teams, workflows, and leadership expectations around those people.

Because once agents become part of the execution layer, judgment becomes the scarce asset.

And scarce assets end up running the system.