April 9, 2026 · By Enrique Guitart

The Question Has Moved from "How Do We Use AI" to "Who Runs the Work"

A lot of exec conversations are still framed as "how do we use AI." That framing is already a year behind. The real conversation is about execution agents, operating model, and how the SaaS stack you bought last decade is about to split in two.

A lot of exec conversations I walk into right now are still framed the same way. Someone at the top of the house asks, "how do we use AI." A list of tools gets read out. Somebody mentions Copilot. Somebody else mentions the ChatGPT Enterprise pilot that three departments are running in parallel without knowing about each other. A consulting deck shows up with six use cases. The meeting ends without a decision and everyone agrees to "keep accelerating."

That framing is already a year behind. I do not say that to score a point. I say it because the cost of staying in that frame for another six months is measurable, and I have watched it show up in real portfolios.

The question has moved. It is not "how do we use AI" anymore. It is "who actually runs the work, once agents can do most of it." That is a different conversation, and it lands in a different part of the org. It is not a tooling question for IT to scope. It is an operating model question for the CEO and the COO to own, with the CFO in the room asking where the savings come from and the CHRO in the room asking what happens to the people who used to do the work.

Let me try to be concrete about what I mean.

In the old frame, AI is a productivity tool. You buy a license, a knowledge worker gets a copilot next to their email, and the theory of value is that they do the same job a little faster. It is additive. Nothing structural changes. The org chart still works. The roles still make sense. The budget line looks like software.

In the new frame, AI is an execution layer. An agent takes a request, figures out the steps, acts across systems, produces a result, and closes the loop. The human is still in the picture, but they are supervising, approving, and handling exceptions. They are not the one clicking through eighteen screens to reconcile an invoice. That work is being done by the agent. At scale, across an enterprise, this is not additive. It is structural. The org chart does not work the same way. The role descriptions do not match what people are actually doing. The budget line stops looking like software and starts looking like labor.

When you sit with that shift for a minute, three things fall out of it.

The first is that the SaaS stack you bought last decade is about to split in two. One half of it gets absorbed into the agent layer. The parts that are just data plumbing, reporting, light workflow, and forms are going to be commodity substrates for agents to operate on. The vendor logo will still be there, but the value capture will move to whoever owns the agent that does the work on top of it. The other half of the stack, the part that owns the data, the transactional system of record, the regulated process, stays valuable. Maybe more valuable. Because now the agents need it to be correct, current, and auditable.

I do not know exactly where every vendor lands on that split. I have seen enough platform transitions to know that the middle of the stack is the most exposed. The heavy systems at the bottom and the interface layer at the top are usually the survivors. Everything in between is up for grabs, which is why every application vendor is suddenly shipping an agent of their own. They can feel the same thing I am describing, and they are trying to move up the stack before they get compressed out of it.

The second is that your operating model was designed for humans doing the work. You have approval chains, handoff points, segregation of duties, and review cycles that were built when the thing being reviewed was a human decision made by a person at a desk. When the thing being reviewed is an agent decision made by a model in a runtime, most of those controls either do not apply or apply in a way that creates the wrong friction. Approval chains that were designed to prevent fraud by a single employee do not map cleanly onto agents that can take a thousand actions a minute. Segregation of duties does not mean the same thing when one agent can touch three systems in a single reasoning step. The operating model needs to be redesigned, not retrofitted.

The third is about accountability, which is the part most teams want to skip. If an agent takes an action and something goes wrong, who is accountable. Not in the legal sense, although that matters. In the operational sense. Who gets the page. Who writes the postmortem. Whose performance review reflects the outcome. If the answer is "the model" or "the vendor" or "the platform team," you do not have an operating model, you have a diffusion of responsibility. The programs I have seen work are the ones where a named human owns the agent's output, the same way a manager owns a team's output. That is a small change in how people talk about the work, and it makes everything downstream of it harder and healthier.

This is the shape of the real conversation I think enterprise leadership teams should be having in 2026. Not "how do we use AI." Not "which tool should we buy." Something closer to: who in our organization runs the work once agents do most of the doing, what is our operating model for that world, and what sequence gets us there without destroying trust along the way.

When I was Head of AI at Restaurant Brands International, this was the conversation I kept trying to pull people into. It is uncomfortable because it is not a tooling conversation. It is an org design conversation, which means it has winners and losers and real decisions. But the companies that have it honestly in 2026 are the ones that are going to be in a strong position in 2028. The ones that stay stuck in the tooling frame for another year are going to find themselves with a bigger AI budget, a pile of overlapping pilots, and no answer to the question that matters.

If your team is still asking how to use AI, that is fine as a starting line. Just do not mistake it for the finish. The question you actually need to answer is who runs the work, and the sooner you name that person, that team, and that operating model, the less painful the next two years will be.