Rebuilding belief in AI with accountable adoption
AI isn’t stumbling because of bad algorithms. It’s stumbling because people don’t trust it.
At the very moment businesses are betting big on AI to drive growth, confidence in the technology is eroding. Our research shows 71% of organizations still hesitate to trust autonomous agents in enterprise environments.
For a tool hailed as the next productivity engine, that’s a trust crisis hiding in plain sight.
Vice President of Analytics and AI at Capgemini Invent.
The frontline trust gap
For AI to deliver real, scalable ROI, it can’t sit alone in an innovation lab. It needs to be woven into the everyday decisions and workflows that keep a business thriving. But the people doing that everyday work are, ironically, some of the least convinced.
Harvard Business Review recently found that employee usage of employer-provided AI tools dropped by 15% between February and July this year. When AI feels opaque or untested, workers tend to avoid it. Worse, they turn to their own "shadow" AI tools instead.
Capgemini research shows that 63% of software professionals currently using generative AI are doing so with unauthorized tools or in a non-governed manner. This introduces security threats, compliance gaps, and inconsistent results, and ultimately slows down successful adoption.
When trust erodes at the frontline, organizations can’t move beyond the experiment phase, no matter how advanced their models may be.
Overconfidence without safeguards
The trust problem cuts both ways. There’s also too much trust, or rather, trust in the wrong places. IDC reports that around a third of UK firms say they “completely trust” AI.
Yet many of those same organizations haven’t put the basic guardrails in place: governance, data controls, risk frameworks, or ethical oversight. In other words, they trust technology more than they trust their own infrastructure.
It’s a risky imbalance. Organizations that overestimate their AI maturity race ahead with experimentation without bias mitigation or compliance planning. And in a regulatory climate shaped by frameworks like the EU AI Act, the cost of misjudgment can be severe: fines of up to 7% of global annual turnover for the most serious violations.
Fixing the trust problem at its source
Traditional AI has always needed governance. But generative AI, with its creativity, unpredictability, and hallucination risk, demands a more intentional approach than ever. Building trust requires a holistic approach that blends governance, culture, training, and intentional human-AI collaboration.
Governance can’t be treated as an afterthought, bolted on once the rollout is complete. It should shape design from day one. Organizations need to establish clear frameworks for model lifecycle management, data provenance, risk assessment, explainability, human oversight, and ongoing monitoring and quality assurance.
While that might seem like a long list, robust governance should be seen as a competitive advantage, not a hindrance. Done right, it accelerates innovation by making scaling safe, predictable, and trustworthy.
Build AI that reflects human values
We can’t expect people to trust what they don’t understand, and they shouldn’t have to.
Trust comes from clarity. It grows when employees understand how AI works, why it recommends certain outputs, and how it aligns with organizational values. This is why governance needs to go beyond technical considerations. Human ethics should be woven into the AI stack as tightly as performance metrics.
When people recognize their own principles (like fairness and transparency) reflected in AI behavior, adoption becomes a natural step rather than a leap of faith.
Empower employees with skills and confidence
AI is most effective when people know how to use it. Comprehensive training, new role definitions (such as AI supervisors and human-in-the-loop specialists), and a skills-based approach help employees feel empowered rather than displaced.
To achieve this, human-AI collaboration must be intentionally designed. Decision-making structures, escalation paths, and task handoffs are all intricacies that need mapping out.
While autonomous agents can drive processes end-to-end, humans remain accountable for providing direction, maintaining guardrails, and ensuring successful outcomes.
From pilots to enterprise ROI
The path to AI ROI is through scale, and scale only happens when the foundations are strong.
Today, those foundations are often lacking. Many organizations are stuck in pilot mode, running isolated experiments without the data architecture, governance, or change management needed to scale.
Recent Microsoft research highlights that the AI leaders achieving 3x ROI over “laggards” are set apart by their coherent, organization-wide strategy.
Leaders need to accept that building such a strategy takes time. Trust isn’t built overnight. A phased roadmap aligned to your people’s values will always outperform rushing to deploy whatever model happens to be newest this month.
A future built on trust
AI won’t reach its enterprise potential through technical innovation alone. It needs cultural transformation, governance innovation, and a renewed commitment to building systems people actually want to work with.
Trust is the foundation of scalable ROI. Ignore it, and your AI strategy becomes a ticking time bomb. But when organizations invest in the right safeguards, skills, and values, AI becomes what it was always meant to be: a dependable partner in building the future.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro