Aaron Levie’s post about Box hiring AI Business Automation Engineers is more important than it looks. It is not just another AI job description. It is a signal that enterprise AI is moving out of the innovation lab and into the operating model of the company.
For the last two years, most companies treated AI adoption as a tooling problem. Buy seats. Run enablement. Launch a chatbot. Ask teams to experiment. That was useful, but it was never going to be enough. The next phase is harder because agents do not simply assist work. They change how work is designed, routed, governed, measured, funded, and trusted.
That is why forward-deployed engineering is becoming one of the most important transformation patterns in enterprise AI.
OpenAI is saying this plainly. Its Frontier platform is positioned around building, deploying, and managing AI agents with shared context, permissions, onboarding, and feedback. More importantly, OpenAI’s Frontier Alliances with BCG, McKinsey, Accenture, and Capgemini explicitly combine forward-deployed engineering with strategy, workflow redesign, system integration, and change management. That is not a software rollout model. That is a transformation model. (OpenAI)
Anthropic is moving in the same direction. Its new enterprise AI services firm with Blackstone, Hellman & Friedman, and Goldman Sachs is designed to bring Claude into core business operations, with Anthropic engineering and partnership resources embedded directly into the new company. The stated goal is to create a scalable platform for designing, building, and maintaining enterprise AI deployments. (Blackstone)
This is the uncomfortable lesson for technology leaders: the bottleneck is not model capability. The bottleneck is organizational absorption.
Reuters captured the pattern well when it reported that OpenAI- and Anthropic-related ventures are pursuing acquisitions of engineering and consulting firms to add hundreds of engineers and consultants. The rationale is simple. Enterprise AI deployment requires people who can tailor models to company data, systems, workflows, and changing business needs. That looks far more like Palantir's embedded engineering model than classic SaaS implementation. (Reuters)
So does this prove that companies need an outside-in model? Not exactly.
It proves that outsiders are needed to accelerate the first wave. They bring pattern recognition, frontier model knowledge, and the scars from seeing what works across multiple enterprises. They can challenge sacred cows, collapse decision cycles, and bring executive urgency that internal teams often struggle to manufacture on their own.
But the long-term operating advantage will not come from outsourced forward deployment. It will come from building internal forward-deployed AI teams that sit close enough to the business to redesign work and close enough to technology to keep the architecture safe, reusable, and governable.
That distinction matters.
A market-facing forward-deployed engineer is usually trying to make a customer successful with a product. An internal forward-deployed AI engineer is trying to make the enterprise itself more programmable. The work is more political, more operational, and often more valuable. They have to understand how Finance closes the books, how Legal reviews contracts, how Customer Success triages escalations, how HR manages employee cases, and how engineering actually ships software. Then they have to turn that work into agentic systems that are observable, secure, cost-controlled, and trusted.
This is where internal teams can outperform outsiders. They know the informal workflows, the power structures, the brittle integrations, the compliance landmines, and the real reasons a process exists. AI agents fail when they are designed against the process diagram instead of the lived reality of the work. Internal forward-deployed teams can see the difference.
Citi’s internal AI Champions and Accelerators program is a useful example. The bank reportedly built a network of roughly 4,000 internal AI helpers across business units, with proprietary AI tools available in 84 countries and adoption above 70%. The striking point is not the number. It is the operating philosophy. Citi leaders said a small central team could never reach the full organization, and business colleagues were more effective because they could demonstrate AI in the context of actual jobs. (Business Insider)
Stripe appears to be pushing the model even further with a “Forward Deployed AI Accelerator” role embedded in its marketing team. The success metrics are not generic training attendance. They are permanently transformed workflows and the number of colleagues who start work with AI as the default mode. That is the right measurement. The point is not adoption theater. The point is changing the muscle memory of work. (Business Insider)
The best enterprise pattern will be a hybrid. External forward-deployed teams should help companies learn the new craft quickly. Internal teams should absorb the craft, adapt it to the company’s operating model, and turn it into repeatable capability.
That creates a new design challenge for an organization’s product and technology leaders. If forward-deployed AI teams report only into IT, they risk becoming automation ticket takers. If they sit only in the business, they risk becoming shadow technology teams. If they sit only in product, they may optimize for platform elegance while missing operational urgency.
The better model is a small, senior, cross-functional capability with three clear mandates.
First, they should redesign workflows, not just automate tasks. AI value shows up when the process changes, not when an old workflow gets a chatbot bolted onto the front.
Second, they should build on shared platforms, not local scripts. Every agent needs identity, permissions, audit trails, evaluation, observability, cost controls, and data boundaries.
Third, they should leave behind capability, not dependency. Each deployment should produce reusable components, better documentation, trained business operators, and a clearer pattern for the next team.
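The second mandate, building on shared platforms, can be made concrete with a sketch. This is a minimal, hypothetical illustration of a shared agent gateway that enforces identity, permissions, cost ceilings, and an audit trail in one place; every name here (`AgentGateway`, `Policy`, and so on) is invented for illustration and does not represent any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Policy:
    allowed_roles: set        # identity and permissions: who may invoke the agent
    monthly_token_budget: int # cost control: hard ceiling per agent
    data_domains: set         # data boundaries the agent is allowed to touch

@dataclass
class AgentGateway:
    policies: dict
    usage: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def invoke(self, agent: str, user_role: str, domain: str, est_tokens: int) -> bool:
        """Check every request against one shared policy, and log it either way."""
        p = self.policies[agent]
        spent = self.usage.get(agent, 0)
        ok = (user_role in p.allowed_roles
              and domain in p.data_domains
              and spent + est_tokens <= p.monthly_token_budget)
        # Audit trail: record allowed and denied calls alike for observability.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent, "role": user_role,
            "domain": domain, "tokens": est_tokens, "allowed": ok,
        })
        if ok:
            self.usage[agent] = spent + est_tokens
        return ok
```

The point of the sketch is architectural, not the specific checks: when every agent, local script or not, routes through one gateway, each new deployment inherits governance instead of reinventing it.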
This is also where the token budgeting conversation becomes real. Aaron Levie is right that tokens will become a new enterprise resource allocation problem. OpenAI’s own enterprise research shows that API reasoning token consumption per organization grew 320 times year over year, which means AI usage intensity is no longer a rounding error. As agents become longer-running and more autonomous, companies will need to decide which workflows deserve scarce intelligence and which do not. (OpenAI)
That budget cannot live entirely inside IT. A legal contract review agent, a finance close agent, and a sales intelligence agent should not all compete for the same central pool without business accountability. The business should own the value case. Technology should own the controls. Product should own the reusable experience and platform patterns.
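The ownership split above can be sketched in a few lines. In this hypothetical model, each workflow carries a budget owned by a named business unit, and the shared control layer refuses to overdraw rather than letting agents compete silently for a central pool; the workflow names and token figures are invented for illustration.

```python
# Each business unit owns the value case for its own workflow budget.
workflow_budgets = {
    "legal.contract_review": {"owner": "Legal",   "monthly_tokens": 40_000_000},
    "finance.close_agent":   {"owner": "Finance", "monthly_tokens": 25_000_000},
    "sales.intelligence":    {"owner": "Sales",   "monthly_tokens": 60_000_000},
}

def charge(workflow: str, tokens: int, spent: dict) -> bool:
    """Debit a workflow's own budget; deny rather than overdraw.

    Technology owns this control; the owning business unit, not a central
    IT pool, must make the case for raising its ceiling.
    """
    budget = workflow_budgets[workflow]["monthly_tokens"]
    used = spent.get(workflow, 0)
    if used + tokens > budget:
        return False
    spent[workflow] = used + tokens
    return True
```

Even a toy version like this makes the accountability visible: a denied charge points at a specific owner and a specific value case, not at an anonymous shared quota.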
This is where forward-deployed AI conflicts with traditional product and technology organizations. Product teams are used to prioritizing roadmaps. IT teams are used to governing systems. Business teams are used to asking for outcomes. Agentic transformation compresses all three. The team designing the workflow may also be building the agent, configuring the data access, measuring adoption, monitoring spend, and changing the human operating procedure.
That will feel messy to organizations optimized for clean handoffs. But the mess is the point. Forward-deployed AI is effective because it collapses the distance between the people who understand the work and the people who can rebuild the work.
OpenAI’s latest enterprise analysis argues that leading firms are pulling ahead not merely because they have more AI access, but because they use AI more deeply, in more delegated workflows, and in more specialized parts of the business. That is the frontier marker executives should care about. Seat count is not transformation. Delegated work is transformation. (OpenAI)
For executive recruiters, this means the next generation of AI transformation leaders will not look like traditional program managers or classic enterprise architects. They will be hybrid operators. They will understand software craft, product judgment, workflow design, data governance, security, change management, and executive communication. They will be credible in a code review and credible in a CFO’s staff meeting.
For technology and product leaders, the implication is even sharper. Do not let forward-deployed AI become another consulting wave that produces impressive demos and fragile dependencies. Use outside experts to accelerate learning, but build the internal muscle quickly. The companies that win will not be the ones with the most pilots. They will be the ones that make forward deployment a permanent capability inside the enterprise.
AI transformation is becoming less about choosing a model and more about designing a new way of changing the company. The forward-deployed engineer is not just a new role. It is the emerging operating system for enterprise AI adoption.