April 10, 2026, by Enrique Guitart

Between MIT Sloan, UM Herbert, and the Enterprise AI Trenches. Here Is What Held Up.

I spent the past year running an enterprise AI portfolio at a Fortune 500 organization while completing executive AI programs at MIT Sloan and the University of Miami Herbert Business School. The fieldwork taught me most of what I know. The academic work validated the two things that turned out to matter most: data strategy and governance.

The pattern I saw in the field is remarkably consistent. An organization launches an AI pilot. The demo looks impressive. Leadership gets excited. Three months later the pilot is underperforming, the team is frustrated, and the disappointment gets written off as the technology not being mature enough. The actual problem, which is almost always structural, never gets named.

The AI itself was rarely the issue. What failed was everything underneath it: the data architecture, the governance framework, the approach to training people, and the willingness to reassess decisions as the technology moved. Here is what I learned about each of those layers, and why they matter more than the models sitting on top of them.

AI strategy without data strategy is a plan to fail slowly.

I did not learn this from a course. I learned it from watching a chatbot give conflicting answers to the same question because it was pulling from three data sources that did not agree with each other.

The fix was not a better model. The fix was building a KPI registry that gave every metric a single definition, a single owner, and a single source of record, and designing a target-state data architecture where every AI product had a clear, reliable data foundation to point toward.
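To make that concrete, here is a minimal sketch of what one registry entry can look like. The field names and the example metric are illustrative, not drawn from the actual registry; the point is that an AI product resolves a metric through one canonical definition instead of choosing among sources that disagree.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KpiDefinition:
    """One canonical definition per metric: one name, one owner, one source."""
    name: str           # the only sanctioned name for this metric
    definition: str     # plain-language business definition
    owner: str          # team accountable for the number
    source_system: str  # the single system of record AI products may query
    grain: str          # level of aggregation, e.g. "customer-month"

# Illustrative entry. A chatbot asked "what was churn last month?" should
# resolve to exactly one definition, not three sources that disagree.
REGISTRY = {
    "monthly_churn_rate": KpiDefinition(
        name="monthly_churn_rate",
        definition="Customers lost in month / customers at start of month",
        owner="Customer Analytics",
        source_system="warehouse.core.customer_monthly",
        grain="customer-month",
    ),
}

def resolve_metric(metric: str) -> KpiDefinition:
    """AI products look metrics up here instead of choosing a source ad hoc."""
    if metric not in REGISTRY:
        raise KeyError(f"{metric!r} has no canonical definition; register it "
                       "before wiring it into an AI product.")
    return REGISTRY[metric]
```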

It is the same failure pattern from the opening, and I see it in nearly every enterprise conversation: pilots launched before anyone asks whether the data is ready, underperformance blamed on the technology, and the real problem, unresolved data architecture, never named. You cannot build a reliable AI product on an unreliable data foundation. This is a leadership prioritization problem, not a technology one.

Governance is not a compliance checkbox. It is the enabler of speed.

Before there was a governance framework in place at the Fortune 500 organization where I ran the AI portfolio, adoption was ad hoc. Teams evaluated tools independently. Security reviews happened late or not at all. The result was not safety. It was paralysis disguised as caution.

What changed things was building a federated model with a simple risk-lane system (green, amber, and red) integrated with the existing privacy and data platforms. After it was in place, intake volume went up and teams moved faster, because they knew exactly what the path to approval looked like. Governance also made productivity tools safe to deploy broadly. Without it, those tools stay locked in IT sandboxes.
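The triage logic itself can be simple. Below is a hedged sketch of what lane assignment might reduce to; the criteria are illustrative assumptions, not the framework's actual rules, which depend on your regulatory and data environment.

```python
from enum import Enum

class Lane(Enum):
    GREEN = "pre-approved: standard controls, proceed"
    AMBER = "conditional: privacy/security review before launch"
    RED = "full review: architecture, legal, and risk sign-off"

def classify(use_case: dict) -> Lane:
    """Illustrative triage rules. A real framework encodes these as policy
    and wires them into the existing privacy and data platforms."""
    if use_case.get("customer_facing") or use_case.get("automated_decision"):
        return Lane.RED
    if use_case.get("handles_pii") or use_case.get("unvetted_vendor"):
        return Lane.AMBER
    return Lane.GREEN

# A team filing intake learns its approval path immediately.
print(classify({"handles_pii": True}).value)
# -> conditional: privacy/security review before launch
```

The value is not in the code. It is that every team can predict its lane before filing intake, which is what turns governance into an enabler of speed.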

What most boards get wrong is treating governance as a risk function owned by legal or compliance. The most effective frameworks are designed by people who understand both the technical capabilities and the business use cases. Governance that does not understand what it governs does not protect the organization. It just slows it down.

Generative AI only works when you take its limitations seriously.

The most dangerous moment in any AI program is when a demo works perfectly and someone decides the hard work is done. It is not. It is just starting.

Generative AI models hallucinate, degrade over time, and give confident answers to questions they should not be answering. The right response to this is not to avoid the technology. It is to treat AI outputs like production software: tested, versioned, and monitored.
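At its smallest scale, "tested and versioned" can look like a golden-set regression test pinned to a prompt version. A minimal sketch, where PROMPT_VERSION, GOLDEN_CASES, and generate_answer are hypothetical stand-ins for whatever your stack exposes:

```python
# Minimal sketch: version prompts like code, and gate changes on tests.
PROMPT_VERSION = "support-answer/v3"

GOLDEN_CASES = [
    ("How do I reset my password?", "reset link"),
    ("What is your refund window?", "30 days"),
]

def generate_answer(question: str, prompt_version: str) -> str:
    # Placeholder so the sketch runs; replace with the real model call.
    canned = {
        "How do I reset my password?": "Use the reset link emailed to you.",
        "What is your refund window?": "Refunds are accepted within 30 days.",
    }
    return canned[question]

def test_golden_answers_still_pass():
    """Fails the build if a prompt or model change breaks a known-good answer."""
    for question, must_contain in GOLDEN_CASES:
        answer = generate_answer(question, PROMPT_VERSION)
        assert must_contain.lower() in answer.lower(), (
            f"Regression on {question!r} under {PROMPT_VERSION}"
        )
```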

In my last enterprise role, we built evaluation pipelines with CI-triggered regression testing and LLM-as-a-judge frameworks. We built custom tool integrations that reached evaluation accuracy above 98 percent. Organizations that skip this step do not know their AI is underperforming until the damage is already done. And by then, the credibility of the entire AI program is at risk.
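LLM-as-a-judge is conceptually simple: a second model grades the first model's output against a rubric, and CI gates the release on the aggregate score. Here is a sketch under stated assumptions; call_model is whichever chat-completion wrapper you use, and the fake judge at the bottom exists only so the example runs end to end.

```python
import json
from typing import Callable

JUDGE_RUBRIC = (
    "You are grading an AI assistant's answer against a reference. "
    'Reply as JSON: {"score": 1} if the answer is faithful, '
    '{"score": 0} otherwise.'
)

def judge(question: str, answer: str, reference: str,
          call_model: Callable[[str, str], str]) -> int:
    """Ask a second model to grade the first model's answer."""
    user = (f"Question: {question}\nReference: {reference}\n"
            f"Answer under test: {answer}")
    return int(json.loads(call_model(JUDGE_RUBRIC, user))["score"])

def eval_accuracy(eval_set, answer_fn, call_model) -> float:
    """Fraction of cases the judge scores faithful; CI gates on this number."""
    scores = [judge(q, answer_fn(q), ref, call_model) for q, ref in eval_set]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    def fake_judge(system: str, user: str) -> str:
        # Stand-in so the sketch runs: scores 1 when the reference text
        # appears in the answer. A real deployment calls an actual LLM.
        reference = user.split("Reference: ")[1].split("\n")[0]
        answer = user.split("Answer under test: ")[1]
        return json.dumps({"score": int(reference.lower() in answer.lower())})

    cases = [("What is the refund window?", "30 days")]
    accuracy = eval_accuracy(
        cases, lambda q: "Refunds are accepted within 30 days.", fake_judge
    )
    # Threshold mirrors the 98 percent figure above; set your own.
    assert accuracy >= 0.98, f"Judged accuracy {accuracy:.0%} below threshold"
    print(f"judged accuracy: {accuracy:.0%}")
```

Gating on a numeric threshold is what makes this useful in CI: the build fails before a regression reaches users, not after.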

Deploying tools is not the same as training people.

Most organizations declare victory when the license is purchased and the tool is live. That is the beginning, not the end.

I have built and delivered hands-on training programs across entire corporate offices, covering productivity and development tools. Not e-learning modules. Actual sessions where people worked through the tools in the context of their real jobs. The focus was not on features. It was on workflow redesign: not "here is how to use this tool," but "here is how your job changes when this capability exists."

AI adoption lives or dies on whether people actually change how they work day to day. A tool sitting unused in a browser tab is not a transformation.

The pace of change has made your last strategic decision obsolete.

A use case I deprioritized as not technically ready became viable within three months. An architectural decision made to accommodate the limitations of an older model became an unnecessary constraint by the time it was implemented. This happened repeatedly.

The practical implication is that any use case dismissed more than six months ago deserves a fresh look. Any board presentation on AI strategy that relies on benchmarks from twelve or more months ago is describing a landscape that no longer exists. The organizations that build durable AI capability are not the ones with the most pilots. They are the ones that build the discipline of reassessing their AI portfolio against the technology landscape as it exists today, not as it existed when the original decisions were made.

The structural investments are the transformation.

The organizations that will succeed with AI are not the ones that moved fastest to deploy tools. They are the ones that made the investments most teams skip: data architecture that AI products can actually rely on, governance frameworks that enable speed instead of blocking it, training that changes how people work rather than how they click, and the institutional agility to respond when the technology shifts under their feet.

None of these are exciting. None of them demo well. All of them are prerequisites for everything else to work.

If your AI program is underperforming, the first place to look is not the model. It is the foundation you built it on, or did not.