By Erika Fille T. Legara
JUST RECENTLY, I was in a room full of seasoned board directors asking the now-familiar questions about AI and cybersecurity. That part is hardly surprising anymore. Most boards are trying to understand where AI actually matters, what it changes, what it breaks, and how seriously they should treat the noise around it.
One of them asked a question I liked immediately because it was simple in form and difficult in substance: What is the most frequent and biggest mistake many enterprises make with AI? And how can we make things better?
The answer, of course, depends on the organization. Two companies can spend the same amount on AI and end up in very different places, depending on how mature they are, how decisions are made, and how clear leadership is about the investment’s purpose. The mistakes have also changed shape over time. The enterprise that ignored data and analytics in 2017 was making a different mistake from the enterprise that built an AI center in 2019, and both are different from the enterprise that now claims to have an AI strategy when what it really has is a budget line for chatbots.
Nevertheless, there is a pattern across all three.
The most common mistake enterprises make with AI is treating it as a technology buying exercise rather than a strategy, capability, and governance problem.
I tend to think about this in waves because I have lived through them that way.
WAVE 1: NOT SEEING IT AT ALL
In 2017, I came home to the Philippines after almost six years of working in Singapore. That was also the year I designed the first formal Master of Science in Data Science program in the country at the Asian Institute of Management. We built the program to be rigorous, but also practical. One of the things I insisted on was a final capstone project in which student teams worked on real company problems rather than toy datasets or abstract classroom exercises.
At the time, pitching this was tough.
This was the first wave. Many enterprises were still at or near zero in terms of structured, data-driven decision-making. They were certainly aware of the language. Executives could talk about descriptive, predictive, and prescriptive analytics because those categories had already entered management vocabulary. But in many organizations, that was where the sophistication ended. The terms were familiar, but the operational meaning was not.
Even getting companies to participate was a struggle. Some weren’t convinced there was anything worth investing in. Others were curious, even willing, but once we got to the data question, the gap between interest and readiness became obvious. The data didn’t exist in a usable form, or it was siloed across systems that had never been asked to talk to each other. Curiosity without data readiness turns out to be a very common starting point.
You had to convince them that better data, better analytics, and better models were not luxury items for “innovative” firms, but capabilities that could reduce cost, improve efficiency, and support better decisions. Sure, that sounds obvious now, but it did not feel obvious then.
So in that first wave, the mistake was underestimation. Many enterprises simply did not grasp what these tools could do, or what it would take to build the foundations for using them well.
WAVE 2: EXCITEMENT WITHOUT DIRECTION
Then came the second wave, which was almost the mirror image.
By the late 2010s and into the pre-pandemic period, some firms had become very excited about AI, and, in fairness, some of that excitement was justified. Money started moving. Enterprises launched centers, labs, innovation units, and transformation teams. They hired expensive talent and approved large budgets for use cases that were often not especially sophisticated; in some cases, they spent hundreds of millions on fairly standard machine learning problems without thinking seriously about operationalization, adoption, workflow redesign, or accountability.
That is where things began to go off the rails.
A company would build a center, then a lab, then another adjacent team with a slightly different mandate. It all looked active and modern, and it made for good PR and annual reports. But after three to five years, boards would ask the obvious question: what, exactly, has materialized? Too often, the honest answer was “not much.” There might be pilots, dashboards, or prototypes, and perhaps even technically competent models somewhere in the organization, but rarely a clear line connecting any of it to enterprise strategy, operating priorities, or measurable business outcomes.
This is where many boards become understandably disillusioned. They have seen the spending, approved the talent, and heard management talk about transformation for years, yet the outcomes remain fuzzy, fragmented, or local. So the reaction becomes abrupt. Funding slows, then stops. That overcorrection is its own problem, but it usually begins with a real governance failure, where management spent aggressively without enough strategic discipline.
WAVE 3: EVERYONE’S A CONVERT, SAME MISTAKE
Then ChatGPT arrived and kicked off the third wave. The public release made AI legible to a much wider population of executives and directors who had previously treated it as technical background noise. Money started moving again, and with it came a renewed sense of urgency as organizations suddenly felt they needed an AI strategy. The trouble is that in many organizations, that quickly became shorthand for “go buy some GenAI.”
There’s a definitional problem hiding inside a lot of AI announcements right now. When companies say they want to invest in AI, many mean they want to buy GenAI systems, often from several vendors at once, and sometimes for use cases that are barely distinguishable from one another. One government agency I came across was seriously committed to what it called “AI transformation.” What it actually had was a collection of chatbots: different vendors, different tasks, no coordination, no connective tissue, no clear line back to any strategic objective. The spending was real, but the fragmentation had not gone away. It was the same pattern I had been watching for nearly a decade.
The numbers are already cautionary. BCG found that only 26% of companies generate tangible value from AI, and MIT’s NANDA research found that only about 5% of enterprise AI pilots achieve measurable revenue impact. Gartner warned that at least 30% of generative AI projects would be abandoned after proof of concept because of poor data quality, weak risk controls, or unclear business value.
Generative AI can absolutely be useful. In some organizations, it is one of the fastest ways to improve knowledge work, customer interaction, or internal productivity. The argument is not against it. It is against the collapse of the whole field of AI into one highly visible category of tools. For many firms, the highest-value use cases may have little to do with generative AI at all. Better forecasting, logistics optimization, anomaly detection, fraud analytics, and conventional machine learning systems can create enormous value when tied to actual business priorities. In many environments, these will matter more than an enterprise chatbot layered on top of messy internal processes.
THE PATTERN UNDERNEATH ALL THREE WAVES
That is why I keep coming back to the same point. AI belongs in operations and governance, where strategy gets tested in practice.
Boards should be asking management to show how AI investments connect to strategic imperatives, what specific outcomes they expect, what supporting data and systems are required, how the organization will absorb and use the outputs, and who is accountable when an AI-enabled decision fails or causes harm. That is a far better conversation than asking whether the company is “doing AI.”
The fix is not conceptually complicated, though it is difficult institutionally. It starts with strategy: being honest about the actual business problem before selecting a tool, and being specific enough about the expected outcome that you could, two years from now, look back and say whether the investment worked. It also means treating data, process, talent, and governance with the same seriousness as the model itself, because those are usually what determine whether a technically sound system ever produces anything useful. Fragmentation is the enemy here, whether that means duplicative vendor contracts, disconnected pilots, or GenAI deployments sitting atop processes nobody has bothered to redesign. Innovation and governance are not in tension; if anything, governance is what keeps an organization from spending several years and a great deal of money discovering that motion is not the same thing as progress.
If I had to answer that board question in one line, I would say this:
The biggest enterprise mistake with AI is mistaking motion for progress.
That mistake looked like neglect in the first wave, overexcited but incoherent spending in the second, and fragmented GenAI buying in the third. Different packaging, same strategic weakness.
Erika Fille T. Legara, Ph.D. is a physicist, educator, and data science and AI practitioner working across government, academia, and industry. She is the inaugural managing director and chief AI and data officer of the Philippine Education Center for AI Research, and an associate professor and Aboitiz chair in Data Science at the Asian Institute of Management. She serves on corporate boards, is a fellow of the Institute of Corporate Directors, an IAPP Certified AI Governance Professional, and a co-founder of CorteX Innovations Corp.