Researchers, thought leaders, and policy makers attended a daylong program in North Carolina’s Research Triangle Park on Sept. 17, 2025, to develop research agendas and explore philosophical questions on the nature of humanity with the rise of artificial intelligences.
What does it mean to be human in the age of artificial intelligence? Is it a unique use of language? Is it the demonstration of empathy? Is it the ability to form communities?
How can artificial intelligence help humans better understand their own special capabilities and natural rights? For that matter, what legal rights should be bestowed on highly advanced systems that can reason and, perhaps in the near future, may become self-aware?
These questions and many more were posed during “The Human Edge: Our Future with Artificial Intelligences,” a daylong summit in North Carolina’s Research Triangle Park co-hosted by RTI International and Elon University. More than 600 people registered to attend the conference on Sept. 17, 2025, either in person or via Zoom.
Participants explored relationships between AI and modern approaches to education, human agency, creativity, and well-being. In addition, attendees worked toward a shared research agenda during breakout sessions meant to support responsible development and use of AI technologies.
A roundtable of higher education leaders from top universities across the state also presented on the AI initiatives and research taking place on their respective campuses.
Elon University President Connie Book urged attendees in her welcoming remarks to confront fundamental questions about humanity’s place in a world increasingly shaped by artificial intelligence.
Book traced Elon’s leadership in technology research through its long-running Imagining the Internet Center, the predecessor to the university’s Imagining the Digital Future Center. She also pointed to Elon University’s leadership in developing a set of core principles to guide development of artificial intelligence policies and practices at colleges and universities.
More than 140 higher education organizations, administrators, researchers and faculty members from 48 countries collaborated on a statement of those principles, which was released Oct. 9, 2023, at the 18th annual United Nations Internet Governance Forum in Kyoto, Japan.
Book cited the success of “The Student Guide to Artificial Intelligence,” an Elon University publication authored in partnership with the American Association of Colleges and Universities that has since been adopted by approximately 4,000 colleges, universities, schools and organizations globally.
“All institutions must seriously address the coevolution of humans and digital systems,” she said, calling the conference a chance to “foster forward thinking and take significant action for building a better future together.”
In his own welcoming remarks, RTI International President and CEO Tim Gabel encouraged attendees to consider the promise and responsibilities of employing emerging AI technologies.
“Today is about possibility,” Gabel said. “It’s about gathering as professionals, as leaders, as people to think about how we integrate artificial intelligence into our lives, how it shapes our work, how it shapes our communities, and how it shapes our future.”
Gabel noted his pride in hosting the summit in partnership with Elon University and outlined some of RTI’s efforts to use artificial intelligences responsibly. Projects include tools for public health communication, a new AI system for RTI researchers, and a “digital twin” of the U.S. population to model disease spread and test solutions.
“The promise lies not just in the technology,” Gabel said, “but in how we, as humans, choose to use it.”
James Boyle, the William Neal Reynolds Professor of Law at Duke University and author of “The Line: AI and the Future of Personhood,” suggested in one of two keynote addresses that participants rethink legal and moral boundaries as artificial intelligences advance, arguing that machines with humanlike capacities will force society to confront what it means to be a person.
Boyle, who attended via Zoom and addressed attendees on large screens that flanked both sides of the stage, said the debate over AI goes beyond familiar concerns about bias, jobs and copyright. He urged a deeper look at the “line that we draw between subject and object, between persons and things,” and at how that line has shifted in past moral struggles over race, sex and life itself.
Boyle told his audience that language – long deemed the human hallmark by philosophers from Aristotle to Turing – no longer settles the question of personhood or humanity. Modern systems “have so much language,” Boyle said, and linguistic ability complicates assumptions that syntax implies sentience.
While Boyle said that “ChatGPT is … not in any way conscious right now,” he argued that the rapid pace of development makes eventual change plausible. His remarks outlined three themes:
AI will prompt scientific, philosophical and spiritual reflection about consciousness and human exceptionalism.
AI will force reconsideration of legal personhood — not only for biological beings but for entities such as corporations that already hold rights for pragmatic reasons.
Encounters with machine intelligence can be a mirror: they may expose ethical shortcomings, or spur critical reflection on what entitles beings to moral consideration.

Boyle closed on a note of guarded wonder, saying that while risks are real, the possibility of meeting another intelligent entity should also inspire reflection – and, perhaps, humility.
Erich Huang, head of clinical informatics at Verily, Alphabet’s life sciences subsidiary, and chief science and innovation officer for Onduo/Verily, shared insights on the latest trends in AI and their impact on healthcare innovation and human well-being.
A surgeon trained at Duke University Hospital, he framed the second of two keynote addresses around a trauma case to underscore the limits of today’s AI tools.
Huang described stabilizing a 58-year-old crash victim, placing chest tubes and rushing her to surgery while consoling her physician husband — moments that no model or robot can yet replicate. “Algorithms don’t pledge any oaths,” he said, invoking the promises physicians make under the Hippocratic oath. “Medicine is a real-life enterprise, and there are still real-life things that have to happen.”
Huang argued that large language models excel at identification and synthesis but do little to build the culture, incentives and workflows needed to change clinician and patient behavior. He warned that electronic health record data and billing codes often reflect reimbursement priorities rather than pathophysiology, risking “garbage in, garbage out.” Aligning payment with outcomes, he said, would create better data and a stronger foundation for trustworthy models.
Huang shared how he has invited technologists to complete “clinical rotations” to see care at the bedside and understand unwritten practices that rarely appear in charts but drive safer outcomes.
While calling himself an optimist about machine learning — citing his early research modeling cancer signaling pathways — he pushed back on hype, noting that autonomous vehicles and other highly touted systems have been adopted more slowly than promised.
“We shouldn’t be using AI as a way to paint over fundamental underlying problems,” he said. Instead, the field should intentionally produce higher-quality clinical data, rigorously test models for specific tasks and embed them in team-based workflows in which humans still call consults, coordinate services and deliver hard news. The goal, he said, is not artifice but “real intelligence” that helps patients get better.
Lee Rainie, director of Elon University’s Imagining the Digital Future Center, delivered plenary remarks that summarized his center’s recent public opinion surveys of expert and American attitudes about the impact of artificial intelligences on key human capacities and traits.
Rainie described how both experts and the public voiced concern that AI could erode key aspects of human identity over the next decade. Of a dozen traits that were posited in the survey, ranging from empathy to decision-making, “experts thought nine would turn out more negatively than positively,” Rainie said.
Only creativity, curiosity and problem-solving drew optimism.
Those with higher levels of education are more pessimistic than those with lower levels, Rainie said. That finding, he added, “absolutely reverses the valence” of typical adoption patterns, where more educated groups are usually early enthusiasts.
“There’s this palpable, universal sense that the moment we are in is a pivotal moment,” Rainie said. “We’re sharing the space now, in some respects, with other intelligences.”
During audience questions, one participant compared today’s changes to past industrial revolutions. Rainie replied that AI differs because “this is the first time we’ve faced a tool that looks like it has cognitive capacities.”
“The Human Edge: Our Future with Artificial Intelligences” was made possible by support from Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences. It was organized by the Imagining the Digital Future Center at Elon University (with Lee Rainie), and RTI International’s Fellows Program (with Brian Southwell) and University Collaboration Office (with Katie Bowler Young).