
Yale confronts AI with new classes, varied policies and an ethics focus

2025-05-17 05:02:58

By Jake Robbins and Isobel McClure

In four years of rapid technological innovation, Yale’s investments in research, ethical questions and education about AI mark steps toward a future where human values meet machine learning.

Staff Reporters


When the class of 2025 first arrived in New Haven in the fall of 2021, ChatGPT did not exist, artificial intelligence was still largely theoretical and Yale had yet to address AI in university policy. Four years later, Yale responded to the rapidly developing technology with a $150 million commitment to build infrastructure, recruit talent and reimagine an Ivy League education in the age of intelligent machines.

Administrators announced the investment in August 2024 as part of a push to position Yale as a national leader in AI. The initiative includes expanding the school’s computational infrastructure, hiring faculty across disciplines and accelerating interdisciplinary research with AI. It marks Yale’s most significant foray into a technological frontier where institutions like MIT and Stanford are considered leaders.

“I see it as a competition between higher education and industry. Our question is, what role do universities play in this new AI world?” Provost Scott Strobel told the News in March. “It’s not to make money. It is to think about what this means in terms of ethical consequences, ethical impacts and what AI makes possible in research, advancement and discovery.”

A few years ago, access to computing power — especially graphics processing units, or GPUs — was limited, often reserved for select departments. Since then, Yale has worked to expand access to high-performance computing across all departments.

Yale is recruiting not only engineers and computer scientists, but also legal scholars, ethicists and public health experts. The approach stems from a central recommendation of the University’s 2024 Task Force on Artificial Intelligence: that Yale build the capacity not only to develop artificial intelligence, but also to question and guide it in line with safe practices.

“The provost had a committee of faculty and administrators that reviewed and discussed all kinds of questions related to AI, like how Yale should approach these questions and what faculty and administrators and students needed,” Laurie Paul, a professor of philosophy and cognitive science and a member of the task force, told the News last August.

Faculty and students alike have argued that AI development must be paired with rigorous ethical reflection. One of the earliest and most consistent advocates for that integration has been the Yale AI Policy Initiative, a student group founded to evaluate the technology’s legal, political and social impacts.

In the past four years at Yale, AI evolved from a niche topic to a prominent academic theme. Few graduating seniors likely came to Yale to study AI, but now few can ignore the subject entirely — whether in computer science lectures, political science seminars or creative writing workshops.

Meanwhile, as the Poorvu Center for Teaching and Learning guides AI’s integration into some classrooms, concerns over academic integrity violations have persisted.

“We realized that this technology could have a transformative effect on education and that we needed to pay close attention and educate ourselves in order to support faculty,” Jennifer Frederick, the executive director of the Poorvu Center, wrote to the News in November. She noted that broad faculty interest in AI began to surface in 2022 — the year ChatGPT became available.

The Poorvu Center’s website provides guidance on AI usage for students, including a section on the ethical concerns of AI use. The center has a separate page for faculty, listing benefits and limitations of AI, as well as available AI tools. In the past semester, the center opened applications for instructors to receive so-called AI Course Revision Pilot Grants to integrate AI into their teaching.

“The majority of my friends use ChatGPT and Claude on a daily basis, often to assist with their assignments in a range of subjects. I have also been shocked by the low level of concern about being flagged for AI use,” Sophia David, a co-president of the Yale AI Policy Initiative, wrote to the News, referring to two AI tools. “My personal opinion is that AI is not going anywhere, and it is unrealistic to expect students not to learn the ins and outs of these tools.”

The conversation about AI has shifted beyond understanding it to scoping out its best uses. New and reimagined courses emerged across departments: “Cultural AI: Machine Vision;” “Art and Design in 2023;” “Creative Artificial Intelligence for Visual Computing;” “AI Aesthetics;” “AI Policy: A Comparative View;” “AI, Medicine and Society” and “Neuro-AI.”

Besides courses, various departments across the university have worked to create programs to inform students about AI’s role in education and their responsibilities with the technology. For example, the Schmidt Program on Artificial Intelligence, Emerging Technologies and National Power launched at the Jackson School of Global Affairs in 2021. The program is supported by a $15.3 million donation from former Google Chief Executive Officer Eric Schmidt and his wife Wendy Schmidt.

As curricula and classrooms evolved, challenges emerged. Nowhere was that more evident than in the computer science department, where the rise of generative AI tools like ChatGPT sparked urgent questions about academic integrity.

In March, dozens of students in an upper-level computer science class were flagged for the potential use of AI. Instead of enforcing a uniform rule, the department deferred to individual instructors to set their own AI policies — a choice that created a patchwork of expectations and left many students confused about what was permitted. Some faculty allowed limited use of AI tools, while others banned them entirely.

Students in the implicated course were given ten days to admit to using AI on a problem set or face referral to the Executive Committee for possible disciplinary action. 

ChatGPT is a product of OpenAI, which is headquartered in San Francisco.

JAKE ROBBINS

ISOBEL MCCLURE

Isobel McClure is a beat reporter under the University Desk and serves as head copy editor for the News. Originally from New York City, Isobel is a first-year in Pauli Murray College.
