The Steinberger Threshold

Most leaders are asking the wrong question about AI.

They ask whether their teams are using it. They ask which model to standardize on. They ask whether agents are ready for production. They ask how quickly they can drive adoption.

That is all downstream. The real question is simpler and far more revealing: who on your team can actually direct AI, and who is starting to be directed by it? That is the divide I keep seeing in product and engineering organizations.

Some people use AI to expand their judgment. Others use it to avoid judgment. Some get faster while staying in control. Others get busy, impressed, and strangely passive. On the surface, both groups can look productive. Both can generate output. Both can show progress.

Only one of them is actually becoming more valuable. That difference is what some have started calling the Steinberger Threshold.

I am borrowing the phrase from recent discussion around Peter Steinberger, but what matters is not the label. What matters is the shift it names. Steinberger is worth paying attention to because he has lived through multiple eras of software building, from deep technical craftsmanship to AI-native execution. The lesson embedded in his public writing and interviews is clear: the advantage is no longer just in doing the work yourself. The advantage is in framing the work, shaping the environment, inspecting the result, and deciding what happens next.

That is not prompt engineering. That is modern judgment. And that is why this matters more than another generic debate about AI productivity.

We are moving into a world where the cost of execution is falling fast. Agents can increasingly read codebases, edit files, run tests, summarize options, and handle meaningful chunks of delivery work. As that happens, the bottleneck shifts.

When execution gets cheaper, judgment gets more expensive.

That changes who stands out. It changes who scales. It changes who should lead.

The people who thrive in this environment will not be the ones who simply know how to use AI. That bar is dropping quickly. The people who thrive will be the ones who can define intent clearly, give the agent enough structure to move fast, and still know when the machine is wrong, shallow, overconfident, or drifting off mission.

That is the threshold. Below it, people let the agent set the pace, shape the work, and quietly narrow their thinking. Above it, people use the agent as leverage while keeping hold of direction, standards, and accountability.

This is not a tooling issue. It is a leadership issue.

The biggest mistake I see companies making is assuming AI adoption and AI capability are the same thing. They are not. Giving people access to powerful models tells you almost nothing about whether they can use them well. In fact, broad access can hide the problem for a while. Everyone suddenly looks more productive. More documents appear. More prototypes show up. More code gets written. More tickets move.

But velocity is a bad metric when the system can generate convincing motion on demand.

That is where executives get trapped. They see acceleration and assume capability has risen with it. Sometimes it has. Sometimes they are just watching the organization become more dependent on machine output without improving its ability to set direction or judge quality.

That is the real risk.

The person below the Steinberger Threshold is not necessarily junior. They are not necessarily non-technical. They are simply no longer fully in command once AI enters the loop. They delegate too early. They trust polished output too quickly. They confuse completeness with correctness. They let the system define the path instead of using the system to execute against a path they have defined.

The person above the threshold behaves very differently. They treat the agent like fast, tireless, sometimes brilliant labor. They know what outcome they want. They know where ambiguity is useful and where it is dangerous. They know when to tighten the frame. They know what needs review and what can be safely skimmed. Most importantly, they stay accountable for the result.

That last point matters more than people admit.

The best agent operators are usually not the ones writing the fanciest prompts. They are the ones with the clearest standards. They know what good looks like. They can spot weak reasoning. They can tell when the agent is optimizing for fluency instead of truth, or speed instead of soundness. They do not need to inspect every line, but they know exactly which lines matter.

This is why I think the rise of agents will reshuffle status inside product and engineering teams more than most people expect.

Some managers will struggle because they were already operating through abstraction without enough contact with the actual work. AI will expose that quickly. If you cannot define success in a way that a machine can execute against and a human can validate, your authority gets thinner.

Some engineers will struggle too, especially those whose identity is tied too tightly to personal output. AI does not care about your attachment to hand-crafted implementation if someone else can steer the machine to a better result faster.

And some people in the middle of the organization will rise quickly. They may not have the biggest titles. But they have taste. They can decompose messy problems. They can write clear acceptance criteria. They can create structure where others create noise. They can tell the difference between a useful first pass and a dangerous hallucination. In an agentic world, those people become force multipliers.

You can already see the outlines of this shift in the market. Companies are starting to act as though part of every team’s job is now translating work into something machines can execute. Whether you look at AI-first operating models, agentic coding environments, or the emerging idea of software factories, the pattern is the same: the bottleneck is moving away from raw execution and toward the ability to define, direct, and verify execution.

That is the Steinberger Threshold in practice.

So how do you figure out who has crossed it? Not with training completion rates. Not with prompt libraries. Not with AI badges. You run a scout mission.

By that I mean a real piece of work with enough ambiguity that judgment matters, enough structure that success can be observed, and enough consequence that the quality of direction shows up clearly. It should be something an agent can materially accelerate, but not something so trivial that the agent can stumble into a passable answer without supervision.

A good scout mission is not theater. It is a bounded business problem that exposes how someone thinks in an agentic environment.

Give them a real bug with messy symptoms. Give them a workflow that needs redesign. Give them a thin internal tool to build. Give them a reporting process full of edge cases. Then watch what they do.

Do they sharpen the objective before they delegate? Do they define acceptance criteria? Do they improve the environment with better tests, clearer documentation, or stronger context? Do they review the critical path or only the polished summary? Do they notice drift? Do they challenge the output? Can they explain why the result should be trusted?

Most importantly, when the agent gets stronger, do they become more decisive or more passive? That is the question.

Because that is what separates someone who is using AI as leverage from someone who is slowly handing over their agency to it.

My view is simple. The companies that win with AI will not be the ones with the most licenses, the biggest model budget, or the loudest transformation language. They will be the ones that identify who can actually operate above the Steinberger Threshold, then redesign teams, workflows, and leadership expectations around those people.

Because once agents become part of the execution layer, judgment becomes the scarce asset.

And scarce assets end up running the system.

From Using AI to Running AI: The Next Skill Gap

The biggest mistake leaders are making right now is framing the next era as a contest between humans and AI.

That is not what is happening inside high-performing teams. The real separation is already showing up somewhere else: between people who use AI and people who orchestrate it.

AI users get output. AI orchestrators get outcomes.

AI users treat the model like a clever intern. They prompt, they paste, they polish. Their ceiling is the quality of a single interaction.

AI orchestrators design a system where multiple interactions, tools, guardrails, and humans combine into a reliable workflow. They turn “a helpful answer” into “a completed job.” They stop thinking in prompts and start thinking in production.

You can see the industry converging on this. Microsoft is explicitly pushing “multi-agent orchestration” in Copilot Studio, including patterns for handoffs, governance, and monitoring because real work is rarely single-step. (Microsoft) OpenAI’s own guidance leans into the same idea: routines, handoffs, and coordination as the core primitives for building systems you can control and test. (OpenAI Developers) Anthropic draws a clean distinction between workflows that are orchestrated through predefined paths and agents that dynamically use tools, then spends most of its energy on what makes those systems effective in practice. (Anthropic) LangGraph has effectively positioned itself as the “agent runtime” layer for state, control flow, and debugging, which is exactly what orchestration needs when you leave toy demos behind. (LangChain)

This is why “AI literacy” is quickly becoming table stakes and then getting commoditized. Everyone will learn to prompt. Everyone will learn to generate code, slides, summaries, and drafts. That advantage collapses fast.

Orchestration does not collapse fast because it is not a trick. It is an operating model.

What an AI orchestrator actually does

Orchestration is not “use more agents.” Orchestration is the discipline of turning messy work into a repeatable machine without pretending the work is clean.

An orchestrator:

  • Breaks work into steps that can be delegated and verified, not just executed.
  • Connects AI to the real world through tools, systems, and data.
  • Designs handoffs, failure modes, and escalation paths as first-class product features. (Microsoft Learn)
  • Builds observability so you can debug behavior, not just admire outcomes. (Microsoft Learn)
  • Treats evaluation as a release gate, not a vibe check. (Anthropic)

That is why orchestration is showing up everywhere as “multi-agent,” “tool use,” and “workflows vs agents.” It is the same idea wearing different vendor hoodies. (Anthropic)
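The loop an orchestrator builds can be sketched in a few lines. This is an illustrative skeleton, not any vendor's SDK: `plan` stands in for a model call, the `acceptance` checks stand in for your evaluation suite, and the log is the observability trail.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    acceptance: list                         # checks the result must pass
    attempts: int = 0
    log: list = field(default_factory=list)  # observability trail

def plan(task):
    # Stand-in for a model call; a real system would delegate here.
    return f"draft for: {task.goal}"

def validate(result, task):
    # Run every acceptance check; failures drive retry or escalation.
    return [check for check in task.acceptance if not check(result)]

def orchestrate(task, max_attempts=2):
    while task.attempts < max_attempts:
        task.attempts += 1
        result = plan(task)
        task.log.append(("attempt", task.attempts, result))
        if not validate(result, task):
            task.log.append(("accepted", result))
            return result
    # Escalation path: a human stays accountable for the outcome.
    task.log.append(("escalated", "needs human review"))
    return None
```

The point is the separation: execution, validation, and escalation are distinct steps, and each one leaves a trace you can debug rather than an outcome you can only admire.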

The uncomfortable truth: orchestration is where leadership lives

If you are a CTO, CPO, or head of product engineering, here is the quiet part out loud: orchestration forces accountability.

Prompting lets teams hide behind cleverness. Orchestration exposes whether you actually understand how value is created in your business.

Because the minute you try to orchestrate, you run into the real constraints:

  • Your data is scattered, permissions are inconsistent, and definitions disagree.
  • Your process is tribal knowledge, not a system.
  • Your edge cases are the product.
  • Your compliance needs are not optional, and your audit trail is not “we asked the model nicely.” (Microsoft Learn)

That is also why orchestration is a strategic advantage. It is hard precisely because it sits at the intersection of product, engineering, operations, security, and change management.

Why “AI users” will hit a wall

AI users become faster individuals. That is useful, but it is not compounding.

They save time on tasks that were never the bottleneck. They produce more artifacts, not more outcomes. They accelerate local productivity while the organization still moves at the speed of coordination.

Orchestration compounds because it scales across people. It turns expertise into a reusable workflow. It captures institutional knowledge in a living system, not in the heads of your best operators.

If you want a practical mental model, stop asking: “How do we get everyone to use AI?”

Start asking: “Which workflows, if orchestrated, would change our unit economics?”

A real-world smell test for orchestration readiness

If any of these sound familiar, you do not have an AI problem. You have an orchestration problem.

  • “We have great pilots, but nothing sticks.”
  • “We got a productivity bump, but delivery still feels chaotic.”
  • “We cannot trust outputs enough to automate anything material.”
  • “We are worried about security and compliance, so we are stuck in chat mode.”
  • “Everyone uses different prompts and gets different answers.”

Those are not model problems. Those are design problems.

The playbook: how teams move from AI use to AI orchestration

You do not need a moonshot. You need a workflow that matters, a thin orchestration layer, and ruthless clarity about quality.

  1. Pick one workflow with real stakes. Something with a clear definition of done. Not “research,” not “brainstorming.” Pick a job like triaging incidents, drafting customer responses with policy constraints, or converting messy inputs into structured records.
  2. Separate roles. Planning, execution, validation, and reporting should not be the same agent or the same step. That separation is the difference between a demo and a system. (OpenAI Developers)
  3. Build handoffs and guardrails, not a super-agent. Multi-agent orchestration exists because specialization plus controlled delegation is easier to debug and govern. (Microsoft)
  4. Make observability mandatory. Logging, tracing, and transcripts are not enterprise overhead. They are how you make AI behavior operational. (Microsoft Learn)
  5. Treat evaluation like CI. Define tests for correctness, policy compliance, and failure modes. If you cannot measure quality, you cannot scale automation. (Anthropic)
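Step 5 can be made concrete with a small sketch. Everything here is hypothetical, including the workflow stand-in, the eval cases, and the 95 percent threshold; the shape is what matters: a fixed test set, a measured pass rate, and a hard gate, exactly like CI.

```python
# Hypothetical release gate: run a fixed eval set through the workflow
# and block the release unless the pass rate clears a threshold.

def run_workflow(case):
    # Stand-in for the real agent workflow; returns a structured record.
    return {"category": case["expected_category"]}

EVAL_SET = [
    {"input": "refund request", "expected_category": "billing"},
    {"input": "login failure", "expected_category": "auth"},
]

def release_gate(workflow, cases, threshold=0.95):
    # Count cases where the workflow's output matches the expectation.
    passed = sum(
        1 for case in cases
        if workflow(case)["category"] == case["expected_category"]
    )
    rate = passed / len(cases)
    return rate >= threshold, rate
```

If the gate fails, the release does not ship, no matter how fluent the output looked in a demo.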

The new career moat

In the next two years, “good at prompting” will be like “good at Google.”

Nice. Expected. Not differentiating.

The career moat, and the organizational moat, belongs to the people who can do all of this at once:

  • translate business intent into workflows
  • connect tools and data safely
  • design guardrails and evaluation
  • ship systems that survive contact with reality

That is the orchestrator.

So yes, the gap will widen. But it will not be AI vs humans.

It will be AI users who generate more content versus AI orchestrators who design machines that reliably produce outcomes.

The Iron Triangle Is Back. AI Just Made It Sharper.

Every decade, the tech industry rediscovers a timeless truth and tries to dress it up as something new. Today’s version comes wrapped in synthetic intelligence and VC-grade optimism. But let’s be honest: AI did not kill the Iron Triangle. It fortified it.

For years we have preached that product decisions always balance quality, speed, and cost. You can choose two. The third becomes the sacrifice. AI arrives and many leaders immediately fantasize that this constraint has dissolved. It has not. It has only changed the failure modes.

AI accelerates coding. AI accelerates design. AI accelerates analysis. But the triangle still stands. What changes is which side collapses first and how painfully.

AI Makes “Fast” Frictionless and That Is the Problem

Teams adopt AI believing speed is now the default output. And in a sense it is. Prompt, generate, review, refine, and in minutes you have something that would have taken hours.

But the moment speed becomes effortless, the other two sides of the triangle take the hit.

Where things break:

  • Quality erodes quietly. Models hallucinate domain logic that engineers fail to notice. It compiles, it runs, and it is dangerously wrong.
  • Architectural discipline collapses. AI can ship features faster than teams can design scalable foundations. The result is a time bomb with fancy UX.
  • Costs compound through rework. The speed you gained upfront becomes technical debt someone must pay later, usually at triple the price.

AI made it easy to go fast. It did not make it safe.

AI Can Make Things “Cheap” but Often Only on Paper

Executives love AI because it hints at lower staffing costs, faster cycles, and higher margins.
They imagine a world where a handful of developers and designers can do the work of an entire department.

But here is the uncomfortable truth:

AI reduces the cost of creation, not the cost of correction.

The cheapest phase of a project is the moment you generate something. The most expensive phase is everything that comes after:

  • validating
  • integrating
  • securing
  • governing
  • maintaining
  • debugging
  • explaining to auditors why your model embedded training data into a client deliverable

AI does not make product development cheap. It simply delays the bill.

AI Promises “Quality” but Delivers Illusions of It

Platforms brag about AI-enhanced quality: fewer bugs, cleaner architecture, automated testing, smarter design. In reality, quality becomes performance theater unless teams evolve how they think, work, and review.

Common pitfalls:

  • AI code looks clean, reads well, and still violates half your constraints.
  • AI documentation is confident and completely fabricated.
  • AI test cases are shallow unless you explicitly direct them otherwise.

AI produces confidence without correctness. And too many leaders mistake the former for the latter. If you optimize for quality using AI, you must slow down and invest in human review, architecture, governance, and domain expertise. Which means speed suffers. Or costs rise.

The triangle always demands a price.

The Harsh Truth: AI Did Not Break the Triangle. It Exposed How Many Teams Were Already Cheating.

Before AI, many organizations pretended they could have all three. They could not, but the inefficiencies were human and therefore marginally manageable.

AI amplifies your ambition and your dysfunction.

  • Fast teams become reckless.
  • Cheap teams become brittle.
  • Quality-obsessed teams become paralyzed.

AI accelerates whatever you already are. If your product culture is weak, AI makes it weaker. If your engineering fundamentals are fragile, AI shatters them.

So What Do Great Teams Do? They Choose Deliberately.

The best product and engineering organizations do not pretend the triangle is gone. They respect it more than ever.

They make explicit choices:

  • If speed is the mandate, they pair AI with strict guardrails, strong observability, and pre-defined rollback paths.
  • If cost is the mandate, they track total lifecycle cost, not just dev hours.
  • If quality is the mandate, they slow down, invest in architecture, require human-in-the-loop validation, and accept that throughput will dip.

Great teams do not chase all three. They optimize two and design compensations for the third.

The Takeaway: AI Is Not a Shortcut. It Is a Magnifier.

AI does not free you from the Iron Triangle. It traps you more tightly inside it unless you understand where the real constraints have shifted.

The leaders who win in this era are the ones who stop treating AI as magic and start treating it as acceleration:

  • Acceleration of value
  • Acceleration of risk
  • Acceleration of consequences

AI is a force multiplier. If you are disciplined, it makes you unstoppable. If you are sloppy, it exposes you instantly.

AI did not remove the tradeoffs.
It made them impossible to ignore.

Tunneling in Product Management: Why Teams Miss the Bigger Play

Tunneling is one of the quietest and most corrosive forces in product management. A product leader recently gave me Upstream by Dan Heath, and of course it was full of amazing product insights. The section on tunneling really stood out to me and inspired the article that follows.

Dan Heath defines tunneling in Upstream as the cognitive trap where people become so overwhelmed by immediate demands that they go blind to long term thinking. They fall into a tunnel, focusing narrowly on the urgent problem in front of them, while losing the ability to lift their head and see the structural issues that created the problem in the first place. It is not a failure of talent. It is a failure of operating conditions and incentives that reward survival over strategy.

Product teams fall into tunneling more easily than almost any other function. Shipping deadlines, stakeholder escalations, outages, bugs, demos, and endless “quick requests” push teams into a survival mindset. When tunneling sets in, teams stop working on the product and start working for the product. Their world collapses into keeping the next release alive, rather than increasing the long term value of the system.

This post examines tunneling in product management, how to recognize it, and why great leaders act aggressively to eliminate it.

The Moments That Signal You Are Already in the Tunnel

Product managers rarely admit tunneling. Instead, it shows up in subtle but repeatable patterns. When I work with teams, these are the red flags that appear most often.

1. Roadmaps turn into triage boards

When 80 percent of your roadmap is filled with fixes, quick wins, client escalations, and “urgent but unplanned” work, you are not prioritizing. You are reacting. Teams justify this by saying “we need to unblock the business” or “this customer is at risk,” but in practice they have ceded control of the roadmap to whoever yells the loudest.

2. PMs stop asking why

Tunneling pushes PMs to accept problem statements exactly as the stakeholder phrases them. A leader says “We need this report,” and the PM rushes to gather requirements without asking why the report is needed or whether the underlying decision process is broken. When discovery collapses, product strategy collapses with it.

3. Success becomes defined as getting through the week

Teams celebrate surviving releases instead of celebrating impact. A product manager who once talked passionately about the user journey now only talks about the number of tickets closed. The organization confuses motion with progress.

How Tunneling Shows Up in Real Product Teams

Example 1: The never ending backlog of “critical blockers”

A global platform team once showed me a backlog where more than half the tickets were marked critical. When everything is critical, nothing is strategic. The team had allowed sales, implementation, and operations to treat the product organization as an on demand task force. The underlying issue was a lack of intake governance and a failure to push accountability back to the functions generating the noise.

Example 2: Feature requests that mask system design flaws

A financial services product team spent months building “one off” compliance features for clients. Each request seemed reasonable. But the real problem was that the product lacked a generalizable compliance framework. Because they tunneled into each request, they burned time and budget without improving the architecture that created the issue.

Example 3: PMs becoming project managers instead of product leaders

A consumer health startup repeatedly missed growth targets because PMs were buried in ceremonies, reporting, and release wrangling. The root cause was not team incompetence. It was tunneling. They simply had no time or space to do discovery, validate assumptions, or pressure test the business model. The result was a product team optimized for administration instead of insight.

Why Product Organizations Tunnel

Tunneling is not caused by weak product managers. It is caused by weak product environments.

Three culprits show up most often.

1. Leadership prioritizing urgency over clarity

When leaders create a culture where speed trumps direction, tunneling becomes inevitable. A team cannot think long term when every week introduces the next emergency.

2. Lack of a stable operating model

Teams tunnel when they lack clear intake processes, prioritization frameworks, definitions of done, and release rhythms. Without structure, chaos becomes normal and the tunnel becomes the only way to cope.

3. Poor metrics

If the organization only measures output rather than outcomes, tunneling is rewarded. Dashboards that track ticket counts, velocity points, or story volume push teams to optimize for the wrong thing.

How to Break Out of the Tunnel

Escaping the tunnel is not an act of heroism. It is an act of design. Leaders must create conditions that prevent tunneling from taking hold.

1. Build guardrails around urgent work

Urgent work should be explicitly capped. High maturity product organizations use capacity allocation models where only a defined percentage of engineering time can be consumed by unplanned work. Everything else must go through discovery and prioritization.
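The cap can be enforced mechanically. A minimal sketch, with a made-up 20 percent cap and point-based capacity (both are assumptions; tune them to your own operating model):

```python
UNPLANNED_CAP = 0.20  # illustrative: at most 20% of sprint capacity

def admit_unplanned(item_points, unplanned_used, sprint_capacity):
    """Return True if an unplanned item still fits under the cap."""
    budget = UNPLANNED_CAP * sprint_capacity
    return unplanned_used + item_points <= budget
```

Anything the cap rejects goes through discovery and prioritization like everything else, which is the entire point of the guardrail.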

2. Make problem framing a mandatory step

Teams must never act on a request until they have clarified the root problem. This single discipline cuts tunneling dramatically. Questions like "What is your real desired outcome?" and "What are the alternatives you considered?" shift the team from reaction to inquiry.

3. Shift the narrative from firefighting to systems thinking

Tunneling thrives when teams believe the world is a series of unconnected fires. Leadership must consistently redirect conversations toward structural fixes. What is the design gap? What is the long term win? What investment eliminates this class of issues forever?

4. Protect strategic time

Every product manager should have non negotiable time for discovery, research, client conversations, and exploration. Tunneling destroys creativity because it destroys time.

The Hard Truth: You Cannot Innovate While Tunneling

A product team inside a tunnel may survive, but it cannot innovate. It cannot design the next generation platform. It cannot shift the market. It cannot see around corners. Innovation requires space. Tunneling removes space. As Dan Heath notes, people in tunnels are not irrational. They are constrained. They are operating under scarcity of time, attention, and emotional bandwidth.

Great product leaders treat tunneling as an existential risk. They eliminate it with the same intensity they eliminate technical debt or security vulnerabilities. Because tunneling is not just a cognitive trap. It is a strategy trap. The longer the organization stays in the tunnel, the more it drifts toward mediocrity.

The highest performing product teams have one thing in common. They refuse to let the urgent consume the important. They protect clarity. They reject chaos. They create the conditions for long term thinking. And because of that, they build products that move markets.

References

  1. Dan Heath, Upstream: The Quest to Solve Problems Before They Happen, Avid Reader Press, 2020.
  2. Mullainathan, Sendhil and Shafir, Eldar. Scarcity: Why Having Too Little Means So Much, Times Books, 2013. (Referenced indirectly in Upstream regarding tunneling psychology.)

Aesthetic Force: The Hidden Gravity Warping Your Product and Your Organization

Every product and engineering organization wrestles with obvious problems. Technical debt. Conflicting priorities. Underpowered infrastructure. Inefficient processes. Those are solvable with time, attention, and a bit of management maturity.

The harder problems are the invisible ones. The ones that warp decisions without anyone saying a word. The ones that produce outcomes nobody intended. These are driven by what I call aesthetic force. Aesthetic force is the unseen pull created by taste, culture, prestige, identity, and politics. It is the gravity field beneath a product organization that shapes what gets built, who gets heard, and what becomes “the way we do things.” It is not logical. It is not measurable. Yet it is incredibly powerful.

Aesthetic force is why teams ship features that do not matter. It is why leaders chase elegant architectures that never reach production. It is why organizations obsess over frameworks rather than outcomes. It is why a simple decision becomes a six week debate. It is taste dressed up as strategy.

If you do not understand aesthetic force, it will run your organization without your consent.

Below is how to spot it, how to avoid it when it becomes toxic, and the few cases when you should embrace it.

How To Identify Aesthetic Force

Aesthetic force reveals itself through behavior, not words. Look for these patterns.

1. The Team Loves the Work More Than the Result

When engineers argue passionately for a solution that adds risk, time, or complexity, not because the customer needs it but because it is “clean,” “pure,” or “the right pattern,” you are witnessing aesthetic force.

2. Prestige Projects Receive Irrational Protection

If a feature or platform strand gets defended with the same fervor as a personal reputation, someone’s identity is tied to it. They are protecting an aesthetic ideal rather than the truth of the market.

3. Process Shifts Without Actual Improvement

If a new methodology, tool, or workflow gains traction before it proves value, you are watching aesthetic force in action. People are choosing the thing that looks modern or elite.

4. You Hear Phrases That Signal Taste Over Impact

“Elegant.”
“Beautiful.”
“Clean.”
“We should do it the right way.”
“When we rewrite it the right way.”

Any time you hear “right way” without specificity, aesthetic force is speaking.

5. Decisions Drift Toward What the Loudest Experts Prefer

Aesthetic force often hides behind seniority. If the organization defaults to the preferences of one influential architect or PM without evidence, the force is winning.

What To Do To Avoid Aesthetic Force Taking Over

Aesthetic force itself is not bad. Unchecked, it is destructive. You avoid that through intentional leadership.

1. Anchor Everything to Measurable Impact

Every debate should be grounded in a measurable outcome. If someone proposes a new pattern, integration, rewrite, or workflow, the burden of proof is on them to show how it improves speed, quality, reliability, or client experience.

Opinions are welcome. Impact determines direction.

2. Make Tradeoffs Explicit

Aesthetic force thrives in ambiguity. When you turn decisions into explicit tradeoffs, the fog clears.
Example:
Option A is more elegant but will delay us eight weeks. Option B is less elegant but gets us to market before busy season, improves adoption, and unblocks another team.

Elegance loses unless it delivers value.

3. Demand Evidence Before Evangelism

If someone champions a new tool, standard, or strategy, require a working example, a pilot, or a small-scale win. No more slideware revolutions.

4. Reward Shipping Over Posturing

Promote leaders who deliver outcomes, not theory. Teams emulate what they see rewarded. If prestige attaches to execution rather than aesthetic purity, the organization rebalances itself.

5. Break Identity Attachment

If someone’s identity is fused with a product, codebase, or architecture, rotate responsibilities or pair them with a peer reviewer. Aesthetic force is strongest when people believe their reputation depends on decisions staying a certain way.


When To Accept Aesthetic Force

There are rare moments when you should allow aesthetic force to influence the product. Doing so without awareness is reckless. Doing so intentionally can be powerful.

1. When You Are Establishing Product Taste

Every great product has an opinionated aesthetic at its core. Some teams call this product feel. Others call it craftsmanship. When aesthetics drive coherence, speed, and clarity, the force is working in your favor.

2. When the Aesthetic Attracts and Retains Exceptional Talent

Some technical choices create a virtuous cycle. A beautiful architecture can inspire great developers to join or stay. A well crafted experience can rally designers and PMs. Occasionally, embracing aesthetic force elevates the culture.

3. When It Becomes a Strategic Differentiator

If aesthetic excellence creates client trust, increases adoption, or reduces friction, it becomes a strategic tool. Apple’s product aesthetic is not a luxury. It is part of its moat.

4. When Shipping Fast Would Create Long-Term Chaos

Sometimes the shortcut buries you later. Aesthetic force is useful when it protects you from reckless short-term thinking. The key is to treat it as a conscious decision, not a reflex.

Final Thought

Aesthetic force is not a harmless quirk. It is a silent operator that will hijack your roadmap, distort your priorities, and convince smart people to pour months into work that has no strategic value. Leaders who ignore it end up managing an organization that behaves irrationally while believing it is acting with discipline.

If you want a product team that delivers results instead of beautiful distractions, you cannot treat aesthetic force as a background influence. You must surface it, confront it, and regulate it. When you do, the organization becomes sharper, faster, and far more honest about what matters. When you do not, aesthetic force becomes the real head of product, and it will not care about your clients, your deadlines, or your strategy.

The gravity is already pulling. Strong leaders decide the direction.

#ProductStrategy #EngineeringCulture #ProductThinking #CTO #CIO

The leadership myth: “I just know”

In product engineering leadership circles, people love to talk about instinct. The knowing glance at a roadmap item that feels wrong. The uneasy sense that a design review is glossing over real risk. The internal alarm that goes off the moment someone says, “We can just replatform it in a few weeks”.

That instinct gets labeled “Spidey sense”. It sounds cool. It implies mastery. It suggests your leadership capability has evolved into a sixth sense.

But in practice, treating intuition like a superpower is one of the fastest ways an engineering leader can misjudge risk, overrule teams incorrectly, or derail prioritization.

The popular interpretation of “Spidey sense” as mystical foresight hides the real mechanism: pattern recognition built over years, now masquerading as magic. As one perspective puts it, intuition is simply “a strong feeling guiding you toward an advantageous choice or warning you of a roadblock”. (mindvalley.com)

Inside a leadership context, relying on that feeling without discipline can create more harm than clarity.

The uncomfortable truth: your intuition has limits

1. Your instincts reflect your past, not your present environment
A study on engineering intuition shows that intuitive judgment comes from familiar patterns, not universal truths. (onlinelibrary.wiley.com)

As a leader, your “sense” might be tuned to a monolith world when your team is operating in microservices. Or it might be shaped by on-prem realities while your teams build cloud native platforms.

If the context has moved and your instincts have not, you become the roadblock.

2. Intuition often substitutes for process at the exact moment you need more structure
Leaders fall into the trap of shortcutting with phrases like “I’ve seen this fail before” or “Trust me, this architecture won’t scale”. That feels efficient. It is not.

Product engineering leadership requires visible reasoning, measurable outcomes, and collaborative decision making. A product sense article puts it well: intuition can be a compass but is not a map. (medium.productcoalition.com)

Compasses help you orient. Maps help an entire organization move.

3. Intuition collapses under novelty
Product engineering lives in novelty: new cloud services, AI architectures, shifting security expectations, fast-changing user expectations. Research on the metacognition of intuition shows that instincts fail in unfamiliar environments. (researchgate.net)

As a leader, if you rely on intuition in novel or high-ambiguity situations, you risk overconfidence right when the team needs structured exploration.

Where engineering leaders should actually use intuition

A. Early risk detection
A raised eyebrow during a design review can be valuable. Leaders with deep experience often sense when a team is assuming too much, skipping load testing, or building a brittle dependency chain. That gut feeling should trigger investigation, not fiat decisions.

B. Team health and dynamics
Signal detection around team morale, interpersonal friction, or a pattern of missed commitments is one of the most defensible uses of leadership intuition. People rarely surface these problems directly. Leaders who sense early disruption can intervene before a team loses velocity or trust.

C. Prioritization under real uncertainty
Sometimes the data is thin, the timelines are compressed, and the decision cannot wait.
Intuition, shaped by past experience, lets leaders choose a direction and commit. But that choice must be paired with measurable checkpoints, telemetry, and a willingness to pivot.

A leadership article on intuition describes it as a feedback loop that adapts with new data. (archbridgecoaching.com) The best engineering leaders operate exactly that way.

Where engineering leaders misuse intuition and damage teams

  • Declaring architectural truths without evidence
    Saying “that pattern won’t scale” without benchmarks undermines engineering autonomy and starves the team of real learning.
  • Using instinct to override user research
    Leaders who “feel” the user flow is fine even when research says otherwise end up owning failed adoption and churn.
  • Blocking progress with outdated mental models
    Your past experience is not invalid, but it is incomplete. When leaders default to “my instinct says no”, they lock teams into the past.
  • Confusing speed with correctness
    Leaders shortcutting due diligence because “something feels off” or “this feels right” often introduce risk debt that shows up months later.

The disciplined leader’s approach to intuition

1. Translate the sense into a testable hypothesis
Instead of “I don’t like this architecture”, say: “I suspect this component will become a single point of failure. Let’s validate that with a quick load simulation.”
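A hunch like "this will become a single point of failure" can often be probed with a few lines of simulation before anyone argues from authority. This is a minimal sketch of a single-server queue under increasing load, with made-up service times; it illustrates the "testable hypothesis" habit, not a real load test:

```python
# Minimal single-server queue simulation (hypothetical numbers): turns
# "I suspect this will be a bottleneck" into a measurable claim. If the
# mean wait explodes as load approaches capacity, the suspicion holds.

def mean_wait(arrival_interval_ms: float, service_ms: float, n: int = 10_000) -> float:
    """Average wait for n evenly spaced requests hitting one server."""
    free_at = 0.0          # time at which the server next becomes free
    total_wait = 0.0
    for i in range(n):
        arrive = i * arrival_interval_ms
        start = max(arrive, free_at)   # wait if the server is still busy
        total_wait += start - arrive
        free_at = start + service_ms
    return total_wait / n

# Service takes 10 ms per request; vary the arrival rate.
for interval in (20.0, 12.0, 10.5, 9.5):   # one request every N ms
    print(f"arrival every {interval:>4} ms -> mean wait {mean_wait(interval, 10.0):.1f} ms")
```

As soon as arrivals outpace the 10 ms service time, the queue never drains and mean wait grows without bound, which is the measurable shape of "single point of failure".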

2. Invite team challenge
If your intuition cannot survive healthy debate, it is not insight; it is ego.

3. Verify with data
Telemetry, benchmarking, user tests, scoring matrices, risk assessments. Leaders build confidence through evidence.

4. Tie intuition to a learning loop
After the decision, ask: Did my instinct help? Did it mislead?
Leaders who evaluate their own judgment evolve faster than those who worship their gut.

5. Make intuition transparent
Explain the reasoning, patterns and risks behind the feeling. This grows organizational judgment rather than centralizing it.

Closing argument

Spidey sense is not a leadership trait. It is a signal. It is an early warning system that tells you when to look closer. But it is not a substitute for data, rigorous engineering practice, or transparent decision making.

Great product engineering leaders do not trust their instincts blindly. They use their instincts to decide what questions to ask, what risks to probe, what patterns to explore, and where to apply pressure.

When intuition triggers structured action, it becomes a leadership accelerant. When intuition replaces structure, it becomes a liability.

Treat your Spidey sense as a flashlight, not a map. It helps you see what you might have missed. It does not tell you where to go.

Revealed Preferences vs Stated Preferences: The Silent Killer of Product and Organizational Strategy

Every product and engineering leader has seen this pattern. A room full of smart people declares:

  • “We want fewer priorities.”
  • “We will follow the operating model.”
  • “This initiative is the top strategic priority.”
  • “We are committed to data-driven decisions.”

Then, within days, behavior contradicts what was said. Side work appears. Priorities shift. Leaders quietly push their pet projects. Teams say yes to everything. Decisions made on Monday fall apart by Thursday.

This gap between what people say and what they do is the difference between stated preferences and revealed preferences, a concept rooted in economics through the work of Paul Samuelson and Gary Becker.

Stated preferences are what people wish were true. Revealed preferences are what they act on. Organizations that ignore this gap drift. Organizations that confront it deliver.

How to Identify Revealed Preferences Inside Your Organization

1. Follow the time, not the talk

Calendars reveal priorities better than strategy decks.

Andy Grove captured this in High Output Management, noting that leaders often claim architectural work is vital while spending all their time on escalations. [Link]

At Google, teams optimized for leadership product reviews, not roadmaps. Those meetings became the real forcing function. [Link]
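Following the time rather than the talk can be made concrete with a trivial calendar tally. The meetings, categories, and hours below are hypothetical:

```python
# Hypothetical one-week calendar audit: tally where the hours actually go,
# then compare the result against the stated priority. All entries are
# illustrative.

from collections import Counter

calendar = [  # (meeting, category, hours)
    ("incident bridge",           "escalations",  6),
    ("vendor escalation",         "escalations",  3),
    ("status syncs",              "escalations",  4),
    ("architecture review",       "architecture", 1),
    ("platform design deep dive", "architecture", 2),
]

hours = Counter()
for _meeting, category, h in calendar:
    hours[category] += h

stated_priority = "architecture"
revealed_priority = hours.most_common(1)[0][0]

print(f"stated: {stated_priority}")
print(f"revealed: {revealed_priority} ({dict(hours)})")
```

If the revealed split contradicts the stated priority week after week, the calendar, not the strategy deck, is the operating strategy.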

2. Look for quiet work multipliers

Yahoo before Marissa Mayer is a classic case. Leaders stated they supported focus while creating hundreds of priorities behind the scenes. [Link]

3. Examine who gets rewarded

Values are revealed through promotions and praise.

Netflix states this clearly in its culture memo: “Values are shown by who gets rewarded and who gets let go.” [Link]

If heroics get rewarded while platform discipline gets ignored, heroics become the true preference.

4. Check what gets escalated

Teams escalate what they believe matters. If pet projects escalate faster than roadmap work, the hidden priorities are obvious.

5. Listen for the quiet “yes, but”

  • “Yes, we support the model, but I need this done outside it.”
  • “Yes, we want fewer priorities, but we need this exception.”

Chris Argyris documented this pattern as “espoused theory versus theory in use.” [Link]

The truth lives in behavior, not statements.

How to Avoid the Trap: Turning Revealed Preferences Into Better Decisions

1. Stop accepting verbal alignment as alignment

Amazon solved this by requiring written alignment through six-page narratives. [Link] If someone will not commit in writing, they are not aligned.

2. Run decision pre-mortems

Based on Gary Klein’s research, pre-mortems force hidden risks and incentives into the open. [Link] Ask:

  • What behavior would contradict this?
  • Who benefits if this fails?
  • What incentive might undermine it?

3. Build friction into special requests

Atlassian and Shopify use portfolio scorecards that require public tradeoffs for every exception. [Link, Link] This prevents hidden work from overwhelming teams.

4. Tie every priority to measurable outcomes

Google’s Project Aristotle showed that clarity and structure drive performance. [Link] Metrics force real preferences into daylight.

5. Ask for preferred failure mode

The UK Government Digital Service used this approach to uncover real priorities, often revealing that speed and usability mattered more than perfect accuracy. [Link]

When to Accept Revealed Preferences Instead of Fighting Them

1. Accept it when executive behavior is consistent

If leaders consistently act one way, that behavior is the strategy. Microsoft under Steve Ballmer said innovation mattered, but behavior optimized for Windows and Office. Satya Nadella highlighted this in Hit Refresh. [Link]

2. Accept it when culture contradicts the stated strategy

Clayton Christensen’s The Innovator’s Dilemma shows that organizations follow cultural and economic incentives, not aspirational strategy. [Link]

If firefighting culture dominates, you will get firefighting, not platforms.

3. Accept it when it reveals real power structures

The real org chart is the list of who can successfully redirect a team’s time.

4. Accept it when it reflects external pressure

Fintech leaders stated that velocity mattered until regulators forced compliance to become the true priority. [Link]

Sometimes the revealed preference is survival.

Delivery Happens When You Lead With Reality, Not Rhetoric

Every organization claims it values delivery. Yet delivery consistently fails in the gap between what leaders say they want and what their behavior actually supports.

If a leader claims focus but adds side work, delivery slips.
If a sponsor claims predictability but changes scope constantly, delivery stalls.
If a steering group claims platform maturity but rewards firefighting, delivery dies.

Delivery is not about tools or talent.
Delivery is a revealed preference problem.

Organizations deliver when behavior aligns with the strategy.
When calendars match the roadmap.
When exceptions have a cost.
When incentives reinforce the plan.

Great organizations feel calm and predictable because behavior supports commitments.
Weak organizations feel chaotic because behavior contradicts them.

Closing the gap between stated and revealed preferences is the single most important delivery intervention a leader can make.

Delivery Is What You Prove, Not What You Announce

Every organization carries two delivery strategies:

  1. The one written in slides.
  2. The one enforced through behavior.

Only the second one ships.

If you want real delivery, build around what people actually do.
Hold leaders accountable to behavioral alignment.
Confront contradictions early.
Design your operating model around reality, not aspiration.

Once behavior and strategy match, delivery stops being a goal and becomes the natural byproduct of how the organization works.

Drucker and the AI Disruption: Why Landmarks of Tomorrow Still Predicts Today

When Peter Drucker published Landmarks of Tomorrow in 1959, he was writing about the future, but not this future. He saw the rise of knowledge work, the end of mechanical thinking, and the dawn of a new age organized around patterns, processes, and purpose. What he didn’t foresee was artificial intelligence, a force capable of accelerating his “landmarks” faster than even he could have imagined.

Today, AI isn’t simply automating tasks or assisting humans. It is disrupting the foundations of how enterprises are built, governed, and led. Drucker’s framework, written for the post-industrial age, has suddenly become the survival manual for the AI-powered one.

From Mechanistic Control to Pattern Intelligence

Drucker warned that the industrial worldview, which was linear, predictable, and mechanistic, was ending. In its place would rise a world defined by feedback loops, patterns, and living systems.

That is precisely the shift AI has unleashed.

Enterprise leaders still talk about “projects,” “pipelines,” and “processes,” but AI doesn’t play by those rules. It learns, adapts, and rewires itself continuously. The organizations that treat AI as a static tool will be replaced by those that treat it as an intelligent process, one that learns as it runs.

Companies used to manage through reporting lines. Now they must manage through data flows. AI has become the nervous system, the pattern recognizer, the process optimizer, and the hidden hand that connects the enterprise’s conscious mind (its strategy) with its reflexes (its operations).

If Drucker described management as the art of “doing things right,” AI has made that art probabilistic. The managers who ignore this are already obsolete.

The Knowledge Worker Meets the Algorithm

Drucker’s greatest prediction, the rise of the “knowledge worker,” is being rewritten in real time. For 70 years, the knowledge worker has been the enterprise’s most precious asset. But now, the knowledge itself has become the product, processed, synthesized, and recombined by large language models.

We are entering what might be called the algorithmic knowledge economy. AI doesn’t just help the lawyer draft faster or the developer code better. It competes with their very value proposition.

Yet, rather than eliminating knowledge work, AI is forcing it to evolve. Drucker said productivity in knowledge work was the greatest management challenge of the 21st century. He was right, but AI is solving that challenge by redefining the role itself.

The best knowledge workers of tomorrow will not just do the work. They will design, supervise, and refine the AI that does it. The new productivity frontier isn’t about faster execution. It is about orchestrating intelligence, both human and machine, into systems that learn faster than competitors can.

AI as a Management Disruptor

If Drucker saw management as a discipline of purpose, structure, and responsibility, AI is now testing every one of those principles.

  • Purpose: AI can optimize toward any goal, but which one? Efficiency, profitability, fairness, sustainability? The model will not decide that for you. Leadership will.
  • Structure: Hierarchies are collapsing under the speed of decision loops that AI can execute autonomously. The most adaptive enterprises are building networked systems that behave more like ecosystems than bureaucracies.
  • Responsibility: Drucker believed ethics and purpose were the essence of management. In AI, that moral compass can no longer be implied. It must be engineered into the system itself.

In other words, AI does not just change how we manage. It challenges what management even means.

From Centralized Control to Federated Intelligence

Drucker predicted that traditional bureaucracies would give way to decentralized, knowledge-based organizations. That is exactly what is happening, except now it is not just humans at the edge of the organization, but algorithms.

AI is enabling every business unit, every function, every product team to have its own localized intelligence. The new question isn’t “how do we scale AI?” It is “how do we coordinate dozens of semi-autonomous AI systems working in parallel?”

Enterprise leaders who cling to centralization will find themselves trapped in a paradox. They want control, but AI thrives on freedom. Drucker would call this the new frontier of management: creating governance that empowers autonomy without sacrificing accountability.

This is why the AI-first enterprise of the future will look less like a corporation and more like a distributed cognitive organism, one where humans and machines make up a shared nervous system of learning, adaptation, and decision-making.

Values as the Ultimate Competitive Edge

Drucker wrote that the “next society” would have to rediscover meaning, that economic progress without moral purpose would collapse under its own weight.

AI is testing that thesis daily.

Enterprises racing to deploy AI without a value compass are discovering that technological advantage is fleeting. The companies that will endure are those that turn ethics into an operating principle, not a compliance checklist.

Trust is now a competitive differentiator. The winners will not just have the best models. They will have the most trustworthy ones, and the culture to use them wisely.

AI does not absolve leaders of responsibility. It multiplies it.

AI Is Drucker’s “Next Society” Arriving Early

If Drucker were alive today, he would say the AI revolution is not a technological shift, but a civilizational one. His “Next Society” has arrived early, and it is powered by algorithms that behave more like collaborators than tools.

The irony is that Drucker’s warnings were not about machines. They were about people: how we adapt, organize, and lead when the rules change. AI is simply the latest, most unforgiving test of that adaptability.

The enterprises that survive will not be those with the most advanced AI infrastructure. They will be those that rethink their management philosophy, shifting from command and control to purpose and orchestration, from metrics to meaning.

Wrapping Up

AI is Drucker’s world accelerated, a management revolution disguised as a technology trend.
Those who still see AI as just another tool are missing the point.

AI is the most profound management disruptor of our generation, and Landmarks of Tomorrow remains the best playbook we never realized we already had.

The question isn’t whether AI will reshape the enterprise. It already has.
The real question is whether leaders will evolve fast enough to manage the world Drucker saw coming, and which AI has now made real.

How AI Is Opening New Markets for Professional Services

The professional services industry, including consulting, legal, accounting, audit, tax, advisory, engineering, and related knowledge-intensive sectors, stands on the cusp of transformation. Historically, many firms have viewed AI primarily as a tool to boost efficiency or reduce cost. But increasingly, forward-thinking firms are discovering that AI enables them to expand into new offerings, customer segments, and business models.

Below I survey trends, opportunities, challenges, and strategic considerations for professional services firms that aim to go beyond optimization and into market creation.

Key Trends Shaping the Opportunity Landscape

Before diving into opportunities, it helps to frame the underlying dynamics.

Rapid Growth in AI-Driven Markets

  • The global Artificial Intelligence as a Service (AIaaS) market is projected to grow strongly, from about USD 16.08 billion in 2024 to USD 105 billion by 2030 (CAGR ~36.1%) (grandviewresearch.com)
  • Some forecasts push even more aggressively. MarketsandMarkets estimates AIaaS will grow from about USD 20.26 billion in 2025 to about USD 91.2 billion by 2030 (CAGR ~35.1%) (marketsandmarkets.com)
  • The AI consulting services market is also booming. One forecast places the global market at USD 16.4 billion in 2024, expanding to USD 257.6 billion by 2033 (CAGR ~35.8%) (marketdataforecast.com)
  • Another projection suggests the AI consulting market could reach USD 58.19 billion by 2034, from about USD 8.75 billion in 2024 (zionmarketresearch.com)
  • Meanwhile, the professional services sector itself is expected to grow by USD 2.07 trillion between 2024 and 2028 (CAGR ~5.7%), with digital and AI-led transformation as a core driver (prnewswire.com)
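The quoted growth rates can be sanity-checked with the standard compound annual growth rate formula, CAGR = (end / start)^(1 / years) - 1, using the endpoints cited above. Small deviations from the published figures come from rounding and differing forecast base years:

```python
# Sanity check on the quoted forecasts: CAGR = (end/start)^(1/years) - 1.
# Endpoints are the figures cited above; minor gaps versus the published
# CAGRs are rounding and base-year differences, not errors.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"AIaaS 2024-2030:         {cagr(16.08, 105.0, 6):.1%}")   # quoted ~36.1%
print(f"AIaaS 2025-2030:         {cagr(20.26, 91.2, 5):.1%}")    # quoted ~35.1%
print(f"AI consulting 2024-2033: {cagr(16.4, 257.6, 9):.1%}")    # quoted ~35.8%
```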

These macro trends suggest that both supply (consulting and integration) and demand (client AI adoption) are expanding in parallel, creating a rising tide that professional services firms can ride into new markets.

From Efficiency to Innovation and Revenue Growth

In many firms, early AI adoption has followed a standard path: use tools to automate document drafting, data extraction, analytics, or search. But new reports and surveys suggest that adoption is maturing into more strategic use.

  • The Udacity “AI at Work” research finds a striking “trust gap.” While about 90% of workers use AI in some form, fewer trust its outputs fully. (udacity.com) That suggests substantial room for firms to intervene through governance, assurance, audits, training, and oversight services.
  • The Thomson Reuters 2025 Generative AI in Professional Services report notes that many firms are using GenAI, but far fewer are tracking ROI or embedding it in strategy (thomsonreuters.com)
  • An article from OC&C Strategy observes that an over-focus on “perfect bespoke solutions” can stall value capture; instead, a pragmatic “good-but-not-perfect” deployment mindset allows earlier revenue and learning (occstrategy.com)
  • According to RSM, professional services firms are rethinking workforce models as AI automates traditionally junior tasks, pressing senior staff into more strategic work (rsmus.com)

These signals show that we are approaching a second wave of AI in professional services, where firms seek to monetize AI not just as a cost lever but as a growth engine.

Four Categories of Market-Building Opportunity

Here are ways professional services firms can go beyond automation to build new markets.

1. AI-Powered Advisory and “AI-as-a-Service” Offerings
   Firms package domain expertise and AI models into products or subscription services. Examples: a legal firm builds a contract-analysis engine and offers subscription access; accounting firms provide continuous anomaly detection on client ERP data.

2. Assurance, Audit, and AI Governance Services
   As AI becomes embedded in client systems, demand will grow for auditing, validation, model governance, compliance, and trust frameworks. Examples: auditing AI outputs in regulated sectors, reviewing model fairness, or certifying an AI deployment.

3. Vertical or Niche Micro-Vertical AI Solutions
   Rather than broad horizontal tools, build AI models specialized for particular industries or subdomains. Examples: a consulting firm builds an AI tool for energy forecasting in renewable businesses, or an AI model for real estate appraisal.

4. Platform, API, or Marketplace Enablement
   Firms act as intermediaries or enablers, connecting client data to AI tools or building marketplaces of agentic AI services. Examples: a tax firm builds a plugin marketplace for tax-relevant AI agents; a legal tech incubator curates AI modules.

Let’s look at each in more depth.

1. AI-Powered Advisory or Embedded AI Products

One of the most direct routes is embedding AI into the service deliverable, turning part of the deliverable from human labor to intelligent automation, and then charging for it. Some possible models:

  • Subscription or SaaS model: tax, audit, or legal firms package their AI engine behind a SaaS interface and charge clients on a recurring basis.
  • Outcome-based models: pricing tied to detected savings or improved accuracy from AI insights.
  • Embedded models: AI acts as a “co-pilot” or second reviewer, but service teams retain oversight.

By moving in this direction, professional services firms evolve into AI product companies with recurring revenues instead of purely project-based revenue.

A notable example is the accounting roll-up Crete Professionals Alliance, which announced plans to invest $500M to acquire smaller firms and embed OpenAI-powered tools for tasks such as audit memo writing and data mapping. (reuters.com) This shows how firms see value in integrating AI into service platforms.

2. Assurance, Audit, and AI Governance Services

As clients deploy more AI, they will demand greater trust, transparency, and compliance, especially in regulated sectors such as finance, healthcare, and government. Professional services firms are well positioned to provide:

  • AI audits and validation: ensuring models work as intended, detecting bias, assessing robustness under adversarial conditions.
  • Governance and ethics frameworks: helping clients define guardrails, checklists, model review boards, or monitoring regimes.
  • Regulation compliance and certification: as governments begin regulating high-risk AI, firms can audit or certify client systems.
  • Trust as a service: maintaining ongoing oversight, monitors, and health-checks of deployed AI.

Because many organizations lack internal AI expertise or governance functions, this becomes a natural extension of traditional audit, risk, or compliance practices.

3. Vertical or Niche AI Solutions

A generic AI tool is valuable, but its economics often require scale. Professional services firms can differentiate by combining domain depth, industry data, and AI. Some advantages:

  • Better accuracy and relevance: domain knowledge helps build more precise models.
  • Reduced client friction: clients are comfortable trusting domain specialists.
  • Fewer competitors: domain-focused models are harder to replicate.

Examples:

  • A consulting firm builds an AI model for commodity price forecasting in mining clients.
  • A legal practice builds a specialized AI tool for pharmaceutical patent litigation.
  • An audit firm builds fraud detection models tuned to logistics or supply chain clients.

The combination of domain consulting and AI product is a powerful differentiator.

4. Platform, Agentic, or Marketplace Models

Instead of delivering all AI themselves, firms can act as platforms or intermediaries:

  • Agent marketplace: firms curate AI “agents” or microservices that clients can pick, configure, and combine.
  • Data and AI orchestration layers: firms build middleware or connectors that integrate client systems with AI tools.
  • Ecosystem partnerships: incubate AI startups or partner with AI vendors, taking a share of commercialization revenue.

In this model, the professional services firm becomes the AI integrator or aggregator, operating a marketplace that others plug into. Over time, this can generate network effects and recurring margins.

What Existing Evidence and Practitioner Moves Show

To validate that these ideas are more than theoretical, here are illustrative data points and real-world moves.

  • Over 70% of large professional services firms plan to integrate AI in workflows by 2025 (Thomson Reuters).
  • In a survey by Harvest, smaller firms report agility in adopting AI and experimentation, possibly making them early movers in new value models. (getharvest.com)
  • Law firms such as Simmons & Simmons and Baker McKenzie are converting into hybrid legal-tech consultancies, offering AI-driven legal services and consultative tech advice. (ft.com)
  • Accenture has rebranded its consulting arm to “reinvention services” to highlight AI-driven transformation at scale. (businessinsider.com)
  • RSM US announced plans to invest $1 billion in AI over the next three years to build client platforms, predictive models, and internal infrastructure. (wsj.com)
  • In Europe, concern is rising that AI adoption will be concentrated in large firms. Ensuring regional and mid-tier consultancies can access infrastructure and training is becoming a policy conversation. (europeanbusinessmagazine.com)

These moves show that leading firms are actively shifting strategy to capture AI-driven revenue models, not just internal efficiency gains.

Strategic Considerations and Challenges

While the opportunity is large, executing this transformation requires careful thinking. Below are key enablers and risks.

Key Strategic Enablers

  1. Leadership alignment and vision
    AI transformation must be anchored at the top. PwC’s predictions emphasize that AI success is as much about vision as adoption. (pwc.com)
  2. Data infrastructure and hygiene
    Clean, well-governed data is the foundation. Without that, AI models falter. OC&C warns that focusing too much on perfect models before data readiness may stall adoption.
  3. Cross-disciplinary teams
    Firms need domain specialists, data scientists, engineers, legal and compliance experts, and product managers working together, not in silos.
  4. Iterative, minimum viable product (MVP) mindset
    Instead of waiting for a perfect AI tool, launch early, learn, iterate, and scale.
  5. Trust, transparency, and ethics
    Given the trust gap highlighted by Udacity, firms need to embed explainability, human oversight, monitoring, and user education.
  6. Change management and talent upskilling
    Legacy staff need to adapt. As firms automate junior tasks, roles shift upward. RSM and others are already refocusing talent strategy.

Challenges and Risks

  • Regulation and liability: increasing scrutiny on AI’s safety, fairness, privacy, and robustness means potential legal risk for firms delivering AI-driven services.
  • Competition from tech-first entrants: pure AI-native firms may outpace traditional firms in speed and innovation.
  • Client reluctance and trust issues: many clients remain cautious about relying on AI, especially for mission-critical decisions.
  • ROI measurement difficulty: many firms currently fail to track ROI for AI initiatives (according to Thomson Reuters).
  • Skill and talent shortage: hiring and retaining AI-capable talent is a global challenge.
  • Integration complexity: AI tools must integrate with legacy systems, data sources, and client workflows.

Suggested Roadmap for Firms

Below is a high-level phased roadmap for a professional services firm seeking to evolve from AI-enabled efficiency to market creation.

  1. Diagnostic and capability audit
    • Assess data infrastructure, AI readiness, analytics capabilities, and talent gaps.
    • Map internal use cases (where AI is already helping) and potential external transitions.
  2. Pilot external offerings or productize internal tools
    • Identify one or two internal tools (for example, document summarization or anomaly detection) and wrap them as client offerings.
    • Test with early adopters, track outcomes, pricing, and adoption friction.
  3. Develop governance and assurance capability
    • Build modular governance frameworks (explainability, audit trails, human review).
    • Offer these modules to clients as part of service packages.
  4. Expand domain-specific products and verticals
    • Use domain expertise to build specialized AI models for client sectors.
    • Build go-to-market and sales enablement geared to those verticals.
  5. Launch platform or marketplace approaches
    • Once you have multiple AI modules, offer them via API, plugin, or marketplace architecture.
    • Partner with technology vendors and startup ecosystems.
  6. Scale, monitor, and iterate
    • Invest in legal, compliance, and continuous monitoring.
    • Refine pricing, SLAs, user experience, and robustness.
    • Use client feedback loops to improve.
  7. Institutionalize AI culture
    • Upskill all talent, both domain and technical.
    • Embed reward structures for productization and value creation, not just billable hours.
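Steps 2 and 3 of the roadmap can be sketched together: take an internal tool (here, a stub document summarizer) and wrap it with the governance basics the roadmap calls for, namely an audit trail and a human-review gate for low-confidence outputs. This is a minimal illustrative sketch in Python; the summarizer logic, the confidence heuristic, the review threshold, and the audit-record fields are all hypothetical placeholders, not a reference to any firm's actual stack.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """One audit-trail entry per model call (traceability for clients and regulators)."""
    doc_sha256: str
    model_version: str
    confidence: float
    needs_human_review: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class GovernedSummarizer:
    """Wraps an internal summarization tool as a client-facing service:
    every call is logged, and low-confidence outputs are flagged for a human."""

    def __init__(self, review_threshold: float = 0.8, model_version: str = "v0.1"):
        self.review_threshold = review_threshold
        self.model_version = model_version
        self.audit_log: list[AuditRecord] = []

    def _summarize(self, document: str) -> tuple[str, float]:
        # Stand-in for the real model: first sentence as the "summary",
        # with a crude length-based confidence score.
        summary = document.split(".")[0].strip() + "."
        confidence = min(1.0, len(document) / 500)
        return summary, confidence

    def summarize(self, document: str) -> dict:
        summary, confidence = self._summarize(document)
        needs_review = confidence < self.review_threshold
        self.audit_log.append(
            AuditRecord(
                doc_sha256=hashlib.sha256(document.encode()).hexdigest(),
                model_version=self.model_version,
                confidence=confidence,
                needs_human_review=needs_review,
            )
        )
        return {"summary": summary, "needs_human_review": needs_review}


svc = GovernedSummarizer()
result = svc.summarize("Quarterly filings show rising exposure. Details follow.")
print(json.dumps(result))   # short document -> flagged for human review
print(len(svc.audit_log))   # every call leaves an audit entry
```

The point of the wrapper is that governance is a module, not an afterthought: the same audit-and-review layer can be reused around other internal tools (anomaly detection, contract review) and sold to clients as part of a service package, as step 3 suggests.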

Why This Matters for Clients and Firms

  • Clients are demanding more value, faster insight, and continuous intelligence. They will value service providers who deliver outcomes, not just advice.
  • Firms that remain purely labor or consulting based risk commoditization, margin pressure, and competition from AI-native entrants. The firms that lean into AI productization will differentiate and open new revenue streams.
  • Societal and regulatory forces will strengthen the demand for trustworthy, auditable, and ethically built AI systems, and professional service firms are well placed to help govern those systems.

Conclusion

AI is not just another technology wave for professional services. It is a market reset. Firms that continue to treat AI as a back-office efficiency play will slowly fade into irrelevance, while those that see it as a platform for creating new markets will define the next generation of the industry.

The firms that win will not be the ones with the best slide decks or the largest data lakes. They will be the ones that productize their expertise, embed AI into their client experiences, and lead with trust and transparency as differentiators.

AI is now the new delivery model for professional judgment. It allows firms to turn knowledge into scalable and monetizable assets, from predictive insights and continuous assurance to entirely new advisory categories.

The choice is clear: evolve from service provider to AI-powered market maker, or risk becoming a subcontractor in someone else’s digital ecosystem. The professional services firms that act decisively today will own the playbooks, platforms, and profits of tomorrow.

The Great Reversal: Has AI Changed the Specialist vs. Generalist Debate?

For years, career advice followed a predictable rhythm: specialize to stand out. Be the “go-to” expert, the person who can go deeper, faster, and with more authority than anyone else. Then came the countertrend, where generalists became fashionable. The Harvard Business Review argued that broad thinkers, capable of bridging disciplines, often outperform specialists in unpredictable or rapidly changing environments.
HBR: When Generalists Are Better Than Specialists—and Vice Versa

But artificial intelligence has rewritten the rules. The rise of generative models, automation frameworks, and intelligent copilots has forced a new question:
If machines can specialize faster than humans, what becomes of the specialist, and what new value can the generalist bring?

The Specialist’s New Reality: Depth Is No Longer Static

Specialists once held power because knowledge was scarce and slow to acquire. But with AI, depth can now be downloaded. A model can summarize 30 years of oncology research or code a Python function in seconds. What once took a career to master, AI can now generate on demand.

Yet the specialist is not obsolete. The value of a specialist has simply shifted from possessing knowledge to directing and validating it. For example, a tax expert who understands how to train an AI model on global compliance rules, or a medical researcher who curates bias-free datasets, becomes far more valuable. AI has not erased the need for specialists; it has raised the bar for what specialization means.

The new specialist must be both a deep expert and a domain modeler, shaping how intelligence is applied in context. Technical depth is not enough. You must know how to teach your depth to machines.

The Generalist’s Moment: From Connectors to Orchestrators

Generalists thrive in ambiguity, and AI has made the world far more ambiguous. The rise of intelligent systems means entire workflows are being reinvented. A generalist, fluent in multiple disciplines such as product, data, policy, and design, can see where AI fits across silos. They can ask the right questions:

  • Should we trust this model?
  • What is the downstream effect on the client experience?
  • How do we re-train teams who once performed this work manually?

In Accenture’s case, the firm’s focus on AI reskilling rewards meta-learners, those who can learn how to learn. This favors generalists who can pivot quickly across domains, translating AI into business outcomes.
CNBC: Accenture plans on exiting staff who can’t be reskilled on AI

AI gives generalists leverage, allowing them to run experiments, simulate strategies, and collaborate across once-incompatible disciplines. The generalist’s superpower, pattern recognition, scales with AI’s ability to expose patterns faster than ever.

The Tension: When AI Collapses the Middle

However, there is a danger. AI can also collapse the middle ground. Those who are neither deep enough to train or critique models nor broad enough to redesign processes risk irrelevance.

Accenture’s stance reflects this reality: the organization will invest in those who can amplify AI, not those who simply coexist with it.

The future belongs to T-shaped professionals, people with one deep spike of expertise (the vertical bar) and a broad ability to collaborate and adapt (the horizontal bar). AI does not erase the specialist or the generalist; it fuses them.

The Passionate Argument: Both Camps Are Right, and Both Must Evolve

The Specialist’s Rallying Cry: “AI needs us.” Machines can only replicate what we teach them. Without specialists who understand the nuances of law, medicine, finance, or engineering, AI becomes dangerously confident and fatally wrong. Specialists are the truth anchors in a probabilistic world.

The Generalist’s Rebuttal: “AI liberates us.” The ability to cross disciplines, blend insights, and reframe problems is what allows human creativity to thrive alongside automation. Generalists build the bridges between technical and ethical, between code and client.

In short: the age of AI rewards those who can specialize in being generalists and generalize about specialization. It is a paradox, but it is also progress.

Bottom Line

AI has not ended the debate. It has elevated it. The winners will be those who blend the curiosity of the generalist with the credibility of the specialist. Whether you are writing code, crafting strategy, or leading people through transformation, your edge is not in competing with AI, but in knowing where to trust it, challenge it, and extend it.

Takeaway

  • Specialists define the depth of AI.
  • Generalists define the direction of AI.
  • The future belongs to those who can do both.

Further Reading on the Specialist vs. Generalist Debate

  1. Harvard Business Review: When Generalists Are Better Than Specialists—and Vice Versa
    A foundational piece exploring when broad thinkers outperform deep experts.
  2. CNBC: Accenture plans on exiting staff who can’t be reskilled on AI
    A look at how one of the world’s largest consulting firms is redefining talent through an AI lens.
  3. Generalists
    This article argues that generalists excel in complex, fast-changing environments because their diverse experience enables them to connect ideas across disciplines, adapt quickly, and innovate where specialists may struggle.
  4. World Economic Forum: The rise of the T-shaped professional in the AI era
    Discusses how professionals who balance depth and breadth are becoming essential in hybrid human-AI workplaces.
  5. McKinsey & Company: Rewired: How to build organizations that thrive in the age of AI
    A deep dive into how reskilling, systems thinking, and organizational design favor adaptable talent profiles.