The AI Leadership Gap: Why Most Organisations May Be Solving the Wrong Problem
There is a number that surfaces repeatedly in conversations about AI performance: research from MIT's Media Lab suggests that 95% of corporate AI initiatives show no measurable return on investment. Whether that figure reflects your experience or not, the pattern it points to is hard to ignore. Organisations are investing heavily in AI — in tools, talent, and transformation programmes — and the results are consistently falling short of expectations.
The reasons are rarely simple. Technical challenges are real. Data quality matters. Integration is hard. But a growing body of research suggests that something else is also at play — something that sits upstream of implementation, in the decisions leaders make before a single line of code is written or a vendor is approved.
That is what our Special Report sets out to examine.
The Decision Gap Nobody Is Talking About
When AI initiatives underperform, the instinct is often to look for a technical explanation. A data pipeline that wasn't clean enough. A model that wasn't well-suited to the use case. An integration that took longer than expected.
These are legitimate diagnoses. But research consistently points to another contributing factor: the absence of strong decision-making frameworks at the leadership level.
Senior leaders are increasingly expected to approve AI initiatives, set strategic direction, and define governance standards. Many are technically briefed. They attend vendor presentations, read the reports, follow the industry news. Yet when they are asked which initiatives actually merit approval, what risks are acceptable, or how impact should be measured, the frameworks needed to answer those questions are often unclear or missing entirely.
That gap — between the decisions leaders are expected to make and the tools they have to make them — is what the research describes as a structural leadership gap. And it has real consequences for organisations trying to make AI work.
The Disconnect Is Real — and Measurable
The leadership gap isn't just theoretical. You can see it in the data.
A Wall Street Journal analysis found that nearly 20% of C-suite executives report saving more than 12 hours per week using AI. Meanwhile, 40% of workers report no time savings at all. That variation suggests AI performance is shaped significantly by how it is led — and that the people setting the direction are not always the same people absorbing the consequences when direction is unclear.
A global survey by Randstad adds another dimension. Among workers who report having AI skills, 71% are men and only 29% are women. Men are also more likely than women to have been given access to AI training by their employers. Generational gaps are similarly significant. These are not incidental differences. They reflect variation in exposure, opportunity, and perceived risk — and without intentional leadership intervention, the evidence suggests they tend to compound rather than self-correct.
Why Women's Hesitation Deserves a Closer Look
Here is where the picture becomes more nuanced — and arguably more important.
Research from Oxford University suggests that women's lower rates of AI adoption are not primarily explained by a lack of skills or access. They appear to be strongly influenced by how women assess the societal consequences of artificial intelligence. Concerns about mental health impacts, privacy risks, environmental cost, and labour displacement account for a meaningful share of the variation in adoption among women — in some cases outweighing digital literacy and education as predictors.
Researchers describe this pattern as "other-oriented concern."
This distinction matters. Organisations that treat women's hesitation primarily as a confidence gap or a skills deficit may be misreading the signal. What looks like reluctance often reflects something closer to discernment — a considered evaluation of consequences that extends beyond immediate organisational benefit. The leaders asking harder questions about bias, accountability, and downstream harm are not necessarily blocking progress. In many cases, they are identifying the conditions under which progress becomes sustainable.
The adoption gap, in this reading, is less a skills issue than a values divide. And it points toward a kind of leadership that AI, and the organisations deploying it, may need more of.
Five Ways Organisations Tend to Make It Worse
When organisations confront poor AI performance, they tend to respond in recognisable ways. Most of them feel reasonable in the moment. Few of them address the underlying problem.
They accelerate — launching more pilots, moving faster, operating on the assumption that speed is what's missing. They invest in technical upskilling, as though the gap were primarily a coding problem. They hire more technical specialists, deepening reliance on the very dynamic that may have created the leadership vacuum in the first place. They appoint a Chief AI Officer and assume that concentrated responsibility will produce distributed clarity. Or they bring in consultants — which can genuinely help, but typically only when leaders already have enough orientation to know what they are asking for.
The pattern across these responses is consistent: activity increases, but strategic clarity does not. Without stronger decision frameworks at the leadership level, implementation gaps tend to persist regardless of how much resource is deployed.
Four Patterns That Keep Capable Leaders Stuck
Beyond organisational responses, there are four recurring patterns that slow progress at the individual leadership level — not because the leaders involved are incapable, but because the structural conditions make these patterns difficult to avoid.
Strategic theatre. AI initiatives get approved to demonstrate innovation rather than to solve defined problems. Pilots are launched without measurable outcomes or clear ownership — generating activity without accountability.
Unaddressed societal concerns. Legitimate questions about bias, privacy, labour impact, and environmental cost are treated as secondary considerations — something to return to once value has been demonstrated. When that sequencing becomes the norm, adoption slows and internal trust erodes.
The disconnect between decision-makers and users. Tools are frequently chosen based on executive expectations rather than frontline experience. The people expected to integrate AI into daily work often have had little input into how those tools were selected, which goes some way toward explaining the perception gap the WSJ data surfaces.
Isolation. Senior leaders frequently describe feeling alone when navigating AI decisions — among the few voices asking strategic questions in rooms where technical enthusiasm dominates, with limited access to spaces where they can pressure-test their thinking in plain language.
What the Research Suggests About Leaders Who Do This Well
Analysis of successful AI adoption points to four recurring capabilities at the leadership level. None of them are primarily technical.
Strategic evaluation and restraint. Effective leaders assess proposals against defined organisational priorities, operational readiness, and measurable outcomes — and are willing to decline initiatives that don't meet those criteria. The research suggests selectivity matters as much as openness to experimentation.
Responsible guardrails. Governance, accountability, and ethical considerations are built into decisions from the outset, rather than addressed retrospectively. When concerns about bias or societal risk are treated as strategic inputs rather than friction, the evidence suggests trust strengthens and adoption improves.
Decision authority without technical mastery. Effective AI leaders tend not to position themselves as technical authorities. Instead, they establish clarity around ownership, escalation pathways, and risk thresholds — exercising authority through structured inquiry and boundary-setting rather than through technical depth.
Peer calibration. Leaders who navigate AI decisions effectively rarely do so entirely alone. Benchmarking their thinking against informed external perspectives, surfacing blind spots, and building shared experience with peers navigating similar complexity appear to reduce both isolation and the likelihood of poor decisions.
The Shift Worth Making
None of this suggests that technical capability is irrelevant. It matters enormously. But research increasingly points to leadership architecture as a significant determining factor in whether AI initiatives generate lasting value — shaped by how decisions are made, who owns them, what governance looks like from the start, and whether leaders have the frameworks and peer support to navigate complexity with confidence.
For many senior leaders, the shift required is less about learning more about AI and more about applying the strategic judgment, risk assessment, and governance instincts they have already developed — with the specific decision frameworks that make those capabilities legible in an AI context.
That is what closing the leadership gap looks like in practice. Not a different relationship to technology, but a more deliberate approach to the leadership decisions that shape whether technology delivers.
The AI Leadership Gap: How to Convert Concern Into Confidence is our new special report examining the structural patterns behind AI underperformance — and what it takes to lead differently.
If you want to assess where you stand across the four leadership capabilities, take the Responsible AI Leadership Assessment for a personalised analysis in under five minutes.
And if you're ready to develop those capabilities alongside an international cohort of senior peers, make sure to check out our Executive AI Intensive.