By Arafat Kabir

DAVOS, SWITZERLAND - A person walks past a temporary AI stall along the main Promenade at the World Economic Forum in 2024. (Photo by Andy Barton/SOPA Images/LightRocket via Getty Images)
Last week, I argued that the MIT "GenAI Divide" report compelled us to rethink how we measure AI's impact in business. Beyond failure rates lies a more nuanced story of measurement blind spots. Now Gallup's latest surveys reveal another critical metric that demands our urgent attention: trust. Racing to join the AI gold rush is tempting, but without public trust, the gains will be fleeting.
According to the 2025 Bentley University-Gallup report, about a third of Americans (31%) now trust businesses "a lot" or "some" to use AI responsibly, a marked improvement from 21% in 2023. Meanwhile, 57% say AI does as much harm as good, up from 50%. Forty-one percent trust businesses "not much," and more than a quarter (28%) say "not at all." Almost three-quarters expect AI to shrink U.S. jobs in the next decade, a belief that has held steady across three years of polling.
That is not a groundswell of resistance. But it is not durable trust, either.
Are Businesses Measuring the Wrong Things—Again?
Much as my earlier MIT analysis argued for measuring the true impact of AI by capturing shadow adoption, micro-productivity gains, and the bottom-up transformation that official "failure rates" miss, the new challenge for business is similar. Businesses track pilots, press releases, and P&L statements, but rarely treat public sentiment, trust, or transparency as a KPI. Yet Gallup's polling, this year and last, shows those are exactly what the public demands.
Transparency is the runaway winner when Americans are asked how companies could ease their concerns about AI. It is a stronger driver of trust than education, more persuasive than regulation, and more urgent than vague promises. Nearly six in ten say businesses should be transparent about how they use AI: how and where decisions are made, who is impacted, what happens to jobs, and where human oversight begins.
The Trust Dividend: Not Just a PR Asset
Why should business leaders care? Because this is not just about keeping up appearances. It is about unlocking the “trust dividend,” the tangible business benefits that flow when customers and employees believe that AI is improving their experience, not just the bottom line. Trust smooths adoption curves, drives customer engagement, helps attract top talent, and increasingly, keeps businesses on the right side of regulation.
But trust, like productivity, is not an abstract virtue. It needs to be tracked, audited, and managed. Businesses that treat trust-building as a first-class business outcome (measuring trust scores, tracking transparency efforts, linking senior pay to public and workforce trust metrics) are the ones most likely to reap AI's sustained rewards.
Neutrality: A Window, Not a Resting Place
MIT and Gallup have uncovered parallel truths. The measurable gains from AI (revenue, cost savings, efficiency) tell only half the story. The deeper transformation is happening in the subtle shifts of daily work life. The rise in neutrality from 50% to 57% in just a year is hardly cause for corporate complacency. It represents a window: public judgment about AI's net value remains up for grabs. Businesses that act now to make their use of AI transparent, participatory, and demonstrably fair will capture the swing vote.
What Should Business Leaders Do Now?
Headlines about AI failure often obscure a richer reality of bottom-up innovation and quiet productivity lifts. Now, with Gallup’s pulse on the public, it is clear the next business challenge is not just to do AI right, but to be seen as doing it right. Businesses that win the trust game openly, consistently, and with tangible proof will be in a league of their own.