While it’s mission-critical, most organizations still haven’t implemented an AI monitoring and risk management platform. What’s the holdup?
As AI applications have proliferated at Principal Financial Group over the last few years, so has the need for a comprehensive AI governance strategy, and a set of tools to help monitor and enforce it.
“We’re leveraging over 100 active AI use cases, including natural language processing, machine learning, and generative AI models used for fraud detection, claims automation, investment research, retirement plan optimization, and contact center support,” says VP and chief data and analytics officer Rajesh Arora. Each, however, introduced risks, including compliance, bias, and ethics concerns, that required an AI governance strategy.
The investment management company first developed the ethical and responsible AI (ERAI) framework, which governs the full lifecycle of AI from intake and risk classification to model validation and ongoing monitoring. That framework mandates explainability, human oversight, and privacy controls for all of its AI applications. Then Principal deployed an AI governance platform, Credo AI, to inventory all AI applications and address risk assessment, data privacy, compliance tracking, and general alignment with AI regulation and standards. “We’re also piloting some governance workflows in ServiceNow,” he says.
The risks AI presents are very real, says Avivah Litan, VP and distinguished analyst at Gartner. “The main problems are data compromise, leaks, and inaccurate, unwanted outputs coming back, especially with generative AI, that lead to making the wrong decisions,” she says.
AI governance is absolutely mission critical to every business, adds Sinclair Schuller, responsible AI leader at EY. “Governance failures can lead to company failures,” he says.
Despite the need to address these issues, implementation isn’t as widespread as the urgency suggests.
A slow adoption curve
Gartner has identified AI governance platforms as the second-highest strategic trend for 2025, and predicts that organizations using these tools will experience 40% fewer AI-related ethical incidents. But the platforms aren’t widely used yet, and it’s not because the tools are immature. “CIOs don’t want to invest in the tools because they’re having a hard enough time finding ROI in these applications,” says Litan. Until now, security and risk management have been an afterthought.
For instance, Vikram Nafde, EVP and CIO at Webster Bank, hasn’t committed to a dedicated AI governance platform so far, even though the bank has deployed gen AI solutions for a wide variety of business processes, including document processing, handling unstructured data, and peer credit reviews. It’s also developed internal governance guidelines, created a formal AI use policy, and established an AI governance committee to provide oversight, strategic direction, and governance for the responsible design, implementation, and use of AI in the organization.
Today, Nafde says, “We rely on existing enterprise tools like Jira, SharePoint, and ServiceNow to manage components of AI governance, such as workflows, controls, and evidence tracking.” But he’s been evaluating AI governance platform options as well. “We would ideally like to have a single platform that provides comprehensive coverage across the full AI governance lifecycle, including integration with internal risk, legal, data, and security domains,” he says.
Agentic AI initiatives, where AI makes decisions autonomously, will drive broader adoption of AI governance platforms as the technology expands, says Litan. “Agentic is so unpredictable and can go off the rails so easily that it’ll have to be reined in with controls,” she says. Today, many companies use manual reviews and policies, but autonomous agents, when they take off over the next two years, will move so fast that companies won’t be able to control them with manual methods. “There’s a lot of hype but not a lot of adoption,” she adds. “It’ll take a couple of years to get down to the plateau of productivity” — Gartner-speak for mainstream adoption.
AI governance platforms can help CIOs monitor model performance, detect bias, enforce policies, and streamline compliance reviews, says Lisa Palmer, CEO and CAIO at Dr. Lisa AI, an AI business strategy consultancy. They can detect bias and fairness issues in models, provide model explainability (such as feature attribution and heatmaps), and monitor model performance, drift, and compliance in real time, she writes in her CIO Advisory Guide, 5 Strategic AI Governance Priorities Every CIO/CAIO Must Own.
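In practice, the monitoring Palmer describes reduces to statistical checks computed against live traffic. Here is a minimal Python sketch of two of them, a population stability index for input drift and a demographic parity gap for outcome fairness; the thresholds, synthetic data, and function names are illustrative assumptions, not any vendor’s API:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time distribution and live traffic.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 investigate."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty buckets to avoid log(0); live values outside the training
    # range simply fall out here, which a production check would handle explicitly.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups (0/1 arrays)."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

rng = np.random.default_rng(0)
train_scores = rng.normal(600, 50, 10_000)  # synthetic stand-in for training data
live_scores = rng.normal(620, 60, 2_000)    # live traffic that has shifted
print(f"PSI: {population_stability_index(train_scores, live_scores):.3f}")

approvals = rng.integers(0, 2, 2_000)       # synthetic model decisions
segment = rng.integers(0, 2, 2_000)         # synthetic protected attribute
print(f"Parity gap: {demographic_parity_gap(approvals, segment):.3f}")
```

A real platform wraps checks like these in scheduling, alerting, and an audit trail; the point is that the underlying math is simple enough to reason about before buying anything.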
“Tools like Fiddler, TruEra, and Credo AI can surface explainability gaps, track data lineage, and ensure models behave as expected in production,” she says. “What they can’t do is replace human judgment, define business value, or automatically align AI use cases with strategic priorities.”
Litan estimates there are 30 to 40 vendors in the AI governance platform market, but because of the low adoption rate, you won’t find many customer references, she says, which is one reason Gartner has yet to publish a Magic Quadrant naming market leaders and laggards.
But some have strengths in specific areas of AI governance, Litan adds. For example, Zenity is strong at monitoring Microsoft products such as 365 Copilot, Cranium excels at third-party risk management, Noma Security is good at infrastructure and runtime violations, and Holistic AI performs well at testing for bias.
AI governance tools can also help establish and execute policy around third-party AI consumption (think ChatGPT or Anthropic’s Claude), as well as the internal design and development of new AI assets. “These tools can describe the policy for use and help enforce the policy,” Schuller says.
Before assessing AI governance tools, CIOs need to take several steps, starting with building an inventory of AI applications and creating a policy framework. What problems does the tool need to solve, who owns governance outcomes, and what policies, workflows, and thresholds are in place or need to be built? “Without this clarity, even the best tools will underdeliver,” Palmer says. “CIOs should begin by identifying their use cases and assessing risk tiers. Early-stage organizations will benefit from MLOps platforms, while mature organizations need policy enforcement layers or bias audit automation.”
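That inventory-and-tiering step doesn’t require a platform to get started. Here is a hedged sketch of what a first pass might look like in Python, with invented fields and a deliberately simple tiering rule that any real risk framework would expand:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # e.g., internal productivity assistants
    MEDIUM = 2  # e.g., customer-facing content generation
    HIGH = 3    # e.g., credit, claims, or employment decisions

@dataclass
class AIUseCase:
    name: str
    owner: str                      # who owns the governance outcome
    model_type: str                 # NLP, ML, generative, agentic...
    uses_customer_data: bool
    makes_automated_decisions: bool
    tier: RiskTier = field(init=False)

    def __post_init__(self):
        # Deliberately simple tiering; a real framework weighs many more factors.
        if self.makes_automated_decisions:
            self.tier = RiskTier.HIGH
        elif self.uses_customer_data:
            self.tier = RiskTier.MEDIUM
        else:
            self.tier = RiskTier.LOW

inventory = [
    AIUseCase("fraud-detection", "risk-ops", "ML", True, True),
    AIUseCase("meeting-summarizer", "IT", "generative", False, False),
]
for uc in sorted(inventory, key=lambda u: u.tier.value, reverse=True):
    print(f"{uc.tier.name:<6} {uc.name} (owner: {uc.owner})")
```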
“First get organized,” adds Litan. “Define your policies for AI accountability. Discover all the AIs. Make sure you know what’s going on, who’s using what, and how risky it is. Then get your data in order. Make sure it’s properly permissioned and classified, and that it’s locked down.”
When evaluating tool options, CIOs should look for features like model explainability, bias detection, policy automation and rule-based compliance triggers, real-time model performance monitoring, auditability and documentation for regulatory scrutiny, and integration into existing model development lifecycles, Palmer says.
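Of those features, rule-based compliance triggers are the easiest to picture concretely. A small illustrative sketch, with hypothetical metric names and thresholds rather than any product’s rule syntax:

```python
# Each rule: a name, a predicate over a metrics snapshot, and the action to take.
RULES = [
    ("psi_drift", lambda m: m["psi"] > 0.25, "page the model owner; review input drift"),
    ("parity_gap", lambda m: m["parity_gap"] > 0.10, "open a bias review ticket"),
    ("p99_latency", lambda m: m["p99_ms"] > 500, "alert the platform on-call"),
]

def evaluate(metrics: dict) -> list:
    """Return the actions triggered by the current metrics snapshot."""
    return [f"{name}: {action}" for name, breached, action in RULES if breached(metrics)]

snapshot = {"psi": 0.31, "parity_gap": 0.04, "p99_ms": 420}
for finding in evaluate(snapshot):
    print("TRIGGERED ->", finding)
```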
Have a set of selection criteria you can walk vendors through at the start, and have an idea of what the future state of your governance model will look like, adds Schuller. “If you can’t capture that in the platform you’re looking at, you should rule it out.” He also says to look for platforms with a feature that lets you define a governance policy all projects must abide by, and then create sub-policies that inherit from it.
Nafde agrees. “That feature would be very powerful, especially when managing governance at scale across multiple business lines or domains,” he says. “The ability to enforce baseline policy with contextual tailoring is key to organizational alignment without slowing down innovation.”
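In code terms, the inheritance pattern Schuller and Nafde describe might look something like the following sketch, where the policy keys and the tighten-but-never-loosen guardrail are hypothetical illustrations, not any platform’s actual schema:

```python
# Enterprise baseline every project inherits; keys are hypothetical examples.
BASELINE = {
    "human_review_required": True,
    "pii_allowed_in_prompts": False,
    "max_autonomy": "suggest",  # suggest < act_with_approval < act
}

def derive(parent: dict, overrides: dict) -> dict:
    """A sub-policy starts from every parent control, then applies its overrides."""
    child = {**parent, **overrides}
    # Guardrail: a sub-policy may tighten the PII control but never relax it.
    if parent["pii_allowed_in_prompts"] is False:
        child["pii_allowed_in_prompts"] = False
    # In practice, each derived policy would still pass through a human
    # approval milestone before taking effect.
    return child

wealth_mgmt = derive(BASELINE, {"max_autonomy": "act_with_approval"})
contact_center = derive(BASELINE, {"pii_allowed_in_prompts": True})  # reverted by the guardrail
print(wealth_mgmt["max_autonomy"], contact_center["pii_allowed_in_prompts"])
```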
But you still need humans to approve those policies. “You should have milestone checks where you can approve those policies or not,” says Schuller. “Ultimately, people still need to do the governing.”
Palmer says key evaluation criteria should include the depth of integration, usability across different roles, and platform adaptability as models and regulatory obligations evolve. Cross-functional access to the governance tool is especially important, since AI governance has legal, compliance, and business stakeholders.
Usability, customization, and scalability were key for Principal, as was ensuring the tool could evolve alongside its governance framework, Arora says. He also looked for strong functionality, performance, and TCO.
On the downside, though, his evaluations showed that governance tools often struggle with the complexity of organization-specific AI applications where subjective judgment is required, so training and operationalizing the tools can be time-intensive. Seamless integration with existing systems is also rarely straightforward, and many tools fall short in addressing foundational data issues such as data quality, accuracy, and completeness, he says.
What to do moving forward
While having an AI governance platform is desirable, don’t rush into buying tools, Nafde says. “Define your governance framework and processes first. Understand your AI footprint and associated risks, and let that guide your tool selection.”
And don’t be surprised if vendors are willing to negotiate on price. “It’s such a new field that vendors will cut you a deal, but it’s not the cost of buying the tool,” Litan says. “It’s about the cost in terms of time and resources and staff. Companies are stretched thin, so it’s not even clear who should manage it.”
While AI governance tools can help with monitoring, CIOs still need to define what acceptable risk means for the business, align AI initiatives with strategic business outcomes, and establish an enterprise-wide governance strategy, Palmer says. “These platforms don’t define governance strategy for you,” she adds, “and most don’t address external threats such as AI-enabled public influence campaigns, coordinated mass complaints, or reputational manipulation. That’s a blind spot CIOs can’t ignore.”
Once you’re up and running, Schuller cautions against becoming too restrictive with policies. “AI is a creative engine,” he says. “You want to constrain it, but not so much that you can’t get creativity.”
There’s only so much AI governance platforms can do, adds Arora. He thinks a lack of mature and clearly defined responsible AI and security policies at the industry level could eventually hinder the effectiveness of AI governance tools. “Without this foundation, governance tools will struggle to work at their full potential,” he says. “My advice is to treat AI governance as a business capability, not just a compliance requirement. Choose tools that are flexible enough to adapt to your organization’s structure, but robust enough to enforce consistent standards.”
The AI space is changing rapidly, so once CIOs have a platform in place, regular reviews are critical. “I’d recommend a very tight loop, maybe monthly or quarterly, to make sure you don’t need to modify your policies,” says Schuller.