Stanford AI professor offers some cautionary advice for deploying the technology

By Bill Siwicki | May 20, 2025, 10:41 AM

So many in healthcare today have such high hopes for artificial intelligence to do everything from defeating clinician burnout to advancing medical research. The challenge with AI in healthcare is not excitement – there's plenty of that. 

Rather, the key challenge for IT leaders is clarity – seeing AI clearly for what it can actually accomplish. Where is AI actually delivering ROI? What's safe to deploy now? And how do you manage risk, governance and long-term value without getting swept up in the hype?

Dr. Justin Norden is a Stanford professor and the cofounder and CEO of Qualified Health, which sells generative AI infrastructure for healthcare: the technology, training and support hospitals and health systems need to get started with generative AI and scale it safely across their organizations.

Emphasis on the word 'caution'

Norden advises IT leaders at healthcare provider organizations to weigh ROI, safety, risk, governance and long-term value when approaching AI systems and tools. And he puts emphasis on the word "caution."

"We're now nearly two-and-a-half years into ChatGPT's release, and while the buzz around generative AI in healthcare is louder than ever, it's time to ask, 'Where's the real ROI?'" Norden said. "Despite all the excitement, even the most widely adopted use case – ambient documentation – hasn't delivered consistent financial returns across provider groups. Some physicians love it, but adoption is still limited and uneven.

"Meanwhile, dramatic headlines about AI outperforming doctors on diagnostic tasks grab attention, but these miss the point: These clinical use cases are not what will define AI's impact near term in healthcare," he continued.

The real value of AI today is in healthcare operations, he contended.

"That's where we're starting to see ROI," he said. "AI now can unlock insights from unstructured data – the bulk of what healthcare produces. Tasks like quality reporting, improving revenue cycle workflows and simplifying patient outreach may not sound flashy, but they're essential and time-consuming.

"AI is finally capable of automating what's been buried in PDFs, faxes and clinical notes," he continued. "These behind-the-scenes improvements may look small individually, but together they represent significant, scalable impact."

Ideas from the people closest to the work

Many in health IT and healthcare overall still are searching for a single "killer app" to change everything. However, real transformation will come from hundreds, or thousands, of small, practical use cases embedded into everyday work – and the best ideas won't come from the top down but from the people closest to the work, Norden said.

"Doctors and nurses already are using AI – just unofficially, on personal devices or through workarounds," he noted. "That tells us two things: there's demand, and there's risk. The path forward is clear: We need to bring AI above the table. Make it secure, HIPAA-compliant and accessible so we can turn this quiet revolution into lasting, system-wide progress."

On another front, when it comes to safely deploying AI in healthcare today, Norden said it is essential to begin by acknowledging what is not safe, because that's where many organizations are still exposed, whether they know it or not. One of the most pressing concerns, and one more common than many realize, is staff using personal AI accounts to process sensitive patient information.

"Talk to leaders across the country, and you'll hear everything from, 'We know it's happening, but we're looking the other way' to 'We'll deal with it if it becomes a problem,'" he said. "Some even operate under a quiet 'don't ask, don't tell' policy. But none of these are viable long-term strategies. We've seen this before with Meta's pixel and Google ad tracking, where once privacy violations came to light, lawsuits followed. The same is likely with AI. The legal and reputational risks are too big to ignore.

"Another area that demands caution is public-facing AI chat tools," he continued. "While the demos can be impressive, these systems are vulnerable to 'jailbreaking.' We've seen AI tools exploited to produce inappropriate, harmful or even dangerous content, often completely bypassing the systems' intended safeguards. In a clinical setting, that could result in anything from misinformation to data leaks or even harmful patient interactions."

Watch out for the open internet

The risk grows exponentially when these models are connected to the open internet, he added.

"Bad actors can plant malicious content online designed to influence AI behavior, creating serious cybersecurity threats," he said. "At best, this leads to a PR headache. At worst, it can result in data breaches or ransomware attacks that bring entire systems to a halt.

"The safer path forward starts with internal deployment of AI in secure HIPAA-compliant environments with human-in-the-loop systems," he stated. "This makes sure data is being used safely, and humans are still signing off on actions being taken by AI systems. Early AI applications should focus on operational areas like streamlining admin tasks, improving workflows and reducing friction – areas that offer ROI without introducing clinical risk. The goal isn't to avoid AI – it's to use it wisely, building value and trust with every step."

Norden also offers cautionary advice when it comes to managing AI's risk, governance and long-term value without getting swept up in the hype, calling this one of the biggest challenges in healthcare today.

"There's a natural hesitation, and rightfully so, given how much is still unknown," he said. "That's why many health systems are stuck in cautious mode – launching pilots, dabbling in internal side projects and experimenting without a clear path forward.

"The shift from 'This looks promising' to 'This is safe and scalable' starts with clear leadership and direction on what tools to use, and how we should measure success before we start," he continued. "What's difficult is [that] without clear direction for our workforce now, people are turning to outside public tools and under-the-table use. We must start with HIPAA-compliant options for where our workforce should access these tools."

Much more than safety

But safety alone isn't enough.

"Too often, safer tools are also clunkier or less helpful, which pushes people right back to public options," he explained. "We need to make internal tools both secure and genuinely more valuable. That means embedding AI into real workflows and enriching it with internal data, so it's not just compliant, but also indispensable.

"As usage expands across the organization, governance must scale, too," he continued. "That includes tracking usage, auditing interactions and educating users – not to police them, but to guide safe, responsible use. If someone tries to use AI for high-risk tasks like medication dosing, we need systems in place to catch and correct that behavior early."

Ultimately, long-term value comes from building a repeatable, scalable process, he added.

"That means structured pilots, performance thresholds and infrastructure that helps governance teams track and grow what works," he said. "With strong tools, smart policies and clear priorities from leadership, we can move past experimentation and into sustainable, system-wide transformation."

Avoiding common missteps

So, how can hospitals and health systems avoid common missteps that stall progress? In a variety of ways, Norden said.

"Right now, when we talk to healthcare leaders, we see most AI strategies falling into one of four buckets – waiting for the EHR vendor to roll something out, banning tools like ChatGPT outright, buying a point system like ambient documentation, or trying to build everything in-house," Norden observed. "All of these approaches have some logic behind them, but on their own, they often miss the bigger picture.

"What's really needed is a clear, shared vision across the organization that AI is coming, it's going to change the way healthcare operates, and we need to start preparing for that future now," he continued. "Without that buy-in, teams end up working in silos, unsure of where to focus, and progress gets stuck."

Another common pitfall is trying to do too much at once.

"We've all seen systems chasing dozens of pilots with different vendors, spreading their time and resources thin," Norden said. "The result is not enough traction in any one place to make a meaningful impact. What works better is picking a few high-priority areas where AI can make a clear, immediate difference and investing in those with real support and leadership backing.

"It's about making fewer, smarter bets and giving those teams the tools, data and clarity they need to succeed," he added. "That focused approach builds momentum and makes it easier to scale what's working."

Don't forget people

Finally, Norden said one cannot talk about avoiding missteps without talking about people.

"Most of our staff already are using AI tools in their personal lives, and increasingly, they're bringing them into work," he noted. "If we ignore that or try to shut it down, we're missing a huge opportunity. What we need to do is lean into it by giving them safe, secure tools to experiment with and teaching them how to use AI effectively and responsibly.

"Education and training can't be a one-off; it needs to be an ongoing part of how we support our teams," he concluded. "The future of AI in healthcare isn't just about the technology – it's about empowering our people to use it well. When leadership brings everyone along, that's when real transformation happens."

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

WATCH NOW: Grabbing the Chief AI Officer brass ring – and working with top brass

Summary

AI in healthcare faces challenges related to clarity, safety, governance and long-term value. Dr. Justin Norden, Stanford professor and CEO of Qualified Health, emphasizes the need for caution when approaching AI systems, focusing on identifying ROI, risks and practical use cases surfaced by frontline staff. He notes that although the buzz around generative AI in healthcare remains loud, real financial returns have yet to materialize consistently. Norden argues the technology's real value lies in small, practical use cases embedded in operational workflows, and recommends deploying AI through secure, compliant approaches to reduce risk and build long-term value. He also stresses staff education and training as a key factor in applying AI successfully.
