The current era of artificial intelligence (AI) began in late 2022, when OpenAI released the first public version of ChatGPT, a chatbot powered by a large language model (LLM). Since then, AI capabilities have accelerated enormously.
Neurologists, however, have been part of the AI trend since long before ChatGPT. In medicine, AI includes any use of advanced computing to support the overall delivery of health care, such as by analyzing complex medical data, offering insights, and simplifying administrative tasks.
Neurologists are problem-solvers by nature, and AI is simply another tool for tackling challenges in patient care. Most AI, especially the kind used in clinical practice, relies on machine learning algorithms, which can sift through large amounts of data, detect patterns, and offer conclusions. This type of AI can even help interpret test results, such as by detecting hemorrhages or measuring lesions on brain imaging.
For many routine uses, though, AI has not yet reached a point where neurologists can depend on it to guide their decisions. At best, it can play a role similar to that of a search engine: helpful, but with risks of misinformation.
These tools differ from generative AI platforms such as ChatGPT, which are built on LLMs. Generative AI often is less predictable and reliable but can hold fluent conversations. Because it can generate text (language) from prompts, this technology has become far more accessible to the average person. It also has a wide range of uses in medicine, including summarizing and explaining medical information, helping fill out paperwork, and brainstorming ways to educate patients.
Some doctors use AI to help craft replies to patient messages because LLMs can quickly respond to common patient inquiries. For physicians already managing a heavy volume of messages, being able to edit a draft instead of starting a reply from scratch can be a big help.
That said, there are important caveats. AI responses often lack patient-specific details and can lean too heavily on what the patient wrote, making them fluent but generic. AI also can be subject to bias and "hallucinations," in which LLMs produce inaccurate or fabricated information. That means a human must review and approve messages before sending them.
Another way some neurologists use AI is by having it listen in on appointments with patients, who are informed this is happening, and generate notes. Evidence shows these tools save physicians time and improve efficiency, and for some practices they have even increased the number of patients clinicians can see. But the biggest benefit is that they free physicians to focus entirely on the patient: making eye contact, speaking naturally, and explaining things more thoroughly while the AI listens and helps with the paperwork.
Because these notes are based on the conversation in the room, some traditional parts of the exam, such as the neurologist's findings, may be lost. Some doctors have started narrating the exam out loud, which many patients appreciate because they get to hear the observations. AI tools also can stumble over slurred speech, uncommon names, and rare terms, and they may try to interpret what was said rather than summarize it accurately. Nuance and accuracy may be lost, so neurologists still need to review and verify the generated note.
When neurologists do use AI, a significant consideration is protecting patient privacy and confidentiality. The AI tools doctors use fall into two main groups. The first, which includes tools that help with patient messages and paperwork, can be used for patient care and comes with strict business agreements, such as HIPAA business associate agreements, that protect the use of patient data. These agreements specify who owns the data, where it is stored, and how it will be deleted. The second group, which includes general-purpose platforms such as ChatGPT and Microsoft Copilot, does not have these types of agreements. Doctors should never enter patient-specific details into them.
AI use in neurology will continue to evolve rapidly over the next 10 years, but the core responsibility for making neurologic decisions will remain with neurologists. AI can help them gather information and automate tasks, but the nuanced clinical judgments—especially in a specialty as complex as neurology—will still depend on human expertise.
Dr. Wu is director of clinical research informatics for Stanley Manne Children’s Research Institute at the Ann & Robert H. Lurie Children’s Hospital of Chicago and a member of the Brain & Life editorial board.