
No, I don’t want an AI scribe to write my pulmonologist’s note

2025-04-15 08:33:42

By Aliaa Barakat

“Yes,” I said reluctantly to the urgent care provider who entered the exam room to see me that January morning, “I am OK with that.” She asked for consent to record our conversation “for the purposes of producing a draft of the encounter documentation.”

The evening before, I wasn’t feeling well and reached out to my pulmonologist. After a deliberative conversation, he suggested I visit urgent care for respiratory viral testing. I was concerned that my underlying interstitial lung disease (ILD) might flare up.

While waiting by myself in the urgent care exam room for the results of the viral panel, I reflected on the consent process, which took place at a time when I was feeling too ill to inquire about the ambient listening technology that was about to record everything I would say. I later learned that the urgent care center interpreted my single consent as ongoing permission to record all future visits.

Then I thought about how I would respond if my pulmonologist’s clinical notes were drafted by an AI scribe. I have been his patient for 20 months, since transitioning my long-term ILD care to his health system to consolidate my specialty care. Though we had been colleagues for years, we needed to build a new relationship as patient and physician, a relationship that would be grounded in earned and maintained trust.

There is an ongoing race to build artificial intelligence to rival or exceed the human capacity for seeing, thinking, and feeling. While I share the accuracy, ethical, and safety worries that critics raise about the adoption of unregulated AI tools in medicine, as a patient I am also concerned about the erosion of the humane, therapeutic experiences of medicine with the intrusion of these tools on the patient-physician relationship. 

My pulmonologist’s notes are much more than a summary of our privileged clinical encounters. Each of his notes is an important and carefully crafted document for my care planning and for coordination with other providers. Equally important, the notes are a communication to me, his patient.

As I read his notes, I can feel his acumen and experience as a practitioner of medicine — his interest and understanding, his concern and compassion, his discernment and responsiveness. I don’t think an algorithm can re-create those specifically human experiences.

While I consider myself a good historian of my nuanced and evolving medical trajectory, I am awed by his narration of my history. His ability to ask with genuine interest, to listen for words and gestures, and to distill my story note after note — and in a way that is most pertinent to both diagnostics and therapy — is one of the finest examples of scientific and medical writing I have seen in my career as a scientist.

The winter after entering his practice, he and I had our first experience together responding to an acute flare-up of my ILD and managing a long recovery. The following summer, I developed an adverse reaction to a medication that I was trialing with another provider, necessitating a short hospitalization. Following the hospitalization, I struggled with prolonged recovery from Covid-19.

Consequently, and in a deviation from my pulmonary standard of care, I wanted to continue a pause in immunosuppressive treatment. My pulmonologist and I engaged in shared decision-making, an active, complex relational process that is anchored in a kind of knowledge rooted in experience, not opinions.

To balance my preference and his clinical guidance, we decided to move up my pulmonary function tests (PFTs) originally scheduled for two months later. In his note, he described my preference as “completely reasonable” and then added, “If her lung function is stable to improved compared to her [prior] PFTs, then we will feel more comfortable about this decision to continue to hold [the medication].”

On average, our encounters have lasted 75 minutes each, including the time he spends preparing for them and writing his post-visit note. The only time he looks away from me in the exam room is when we look together at my imaging studies on his computer, or at the end of the visit to briefly place orders or requests for follow-up visits or tests.

My pulmonologist once said to me that he treats each of his patients as if they were a member of his own family. If “the secret of quality is love,” then one metric of quality is time. 

Providers are pushed to spend less time caring for each patient as health systems move to reduce costs and increase revenue under the technological principle of maximal efficiency and output. But medicine was never intended to be another industrial complex.

When the viral panel results came back, I thanked my urgent care provider and politely declined to stay for more testing. That day neither she nor I had the appropriate understanding of the technology involved in recording our visit and drafting the note.

Yet I continued to think about the AI scribe I had consented to — in particular, whether the technology works as intended and by whose standards.

Pilots eliciting patients’ experience with AI scribes have so far been very small, and the impact of AI scribing on clinical decision-making and outcomes is still unclear. Carefully designed studies are needed. Moreover, patients are often not properly informed that their health care data may be analyzed, repurposed, or housed by third parties. As such, their consent for these tools to be used in their care is not always truly informed.

A year ago, a STAT review of generative AI’s adoption found that the “tension between technology’s move-fast ethos and medicine’s traditional insistence on evidence — is deepening every day.” Technical feasibility is the foundation for neither evidence-based practice nor medical ethics.

Later that day, I read my first medical note co-produced by the AI scribe. While the AI’s contribution was not labeled, its mechanical phrasing and scattered inaccuracies made it apparent. In contrast, part of the “Assessment and plan” section stood out as clearly human. The note could not offer much of a plan since I had decided to leave urgent care without further testing, but my kind provider captured her efforts: “She was counseled more than once that I would prefer to do a chest X-ray and prefers to decline at this point in time.  She states that she will follow-up with her pulmonologist.”

With symptoms still ongoing two days later, my pulmonologist ordered a chest X-ray, which I was able to walk in for that evening. By the next morning, Saturday, he had reviewed the images in the system and compared them with prior ones. He messaged me through the patient portal that “although we are still waiting on an official read” by radiology, to his eye, my chest X-ray was “very reassuring.” I was still feeling ill, but I was well by his assessment: my seeing, thinking, feeling human doctor.

Aliaa Barakat, Ph.D., is a mathematician, and a medical and social researcher. She founded and co-leads the ILD Collaborative, a patient-physician collaborative care and research community.
