
Harvard’s Generative AI Policy Is Inequitable | Opinion | The Harvard Crimson

2025-09-03 07:00:00 (English original)

Clicking through the Canvas pages for my courses, I’m met with an all-too-similar statement time and time again: The use of AI is not permitted in this course and will result in a failing grade.

No matter the myriad policies, guidelines, and fearmongering, Harvard students will still use generative AI. This isn’t to sound defeatist. AI use doesn’t have to be a bad thing. It could actually improve the way students choose to learn.

But in order for AI to benefit every student regardless of their race, gender, or socioeconomic status, it must first be allowed in every Harvard classroom.

A study from the Harvard Kennedy School revealed that men are 7 percent more likely than women to use generative AI at home, a gap that widens to 9 percent in the workplace. Whether this stems from male confidence in not getting caught, a greater willingness to take risks, or some other factor, I have witnessed the disparity firsthand among my male peers.

In the same vein, the study also concluded that higher-income, younger, and more educated workers use generative AI more often. I have noticed classmates who attended high schools that taught “AI literacy” and encouraged the use of ChatGPT for academics use these tools more frequently — and also more effectively — in their Harvard classes. Rather than ask the model to write their entire essay for them, they will ask for alternate phrasings of a cliche, critiques of their own words, or a list of counterarguments to test their own logic.

Under the current policy, 30 percent of Harvard students worry their peers use generative AI to gain unfair advantages in class, per a survey commissioned by the Harvard Undergraduate Association. But for my friends who are female or people of color, I’ve observed that this worry isn’t enough to convince them to use it themselves. They tend to avoid the tool as much as they can — even if an instructor permits it.

Like calculators, spell check, and the internet, generative AI has already permeated classrooms and will eventually become a standard, accepted tool in the workplace. That means every student should learn to harness its power properly. The fact that educators are still navigating appropriate uses of AI does not change the reality that ChatGPT is already free and accessible to every Harvard student.

Indeed, Harvard’s AI Sandbox provides a safe, data-protected environment for students to use this tool for their education. Yet despite the Sandbox being available to every undergraduate, official University policy leaves AI regulation to the discretion of each individual educator — which is problematic, to say the least.

According to one survey from 2023, 54 percent of college students agree that AI use on assignments or exams is academically dishonest, but 56 percent have still used AI on assignments or exams. And Harvard actively discourages teachers from using generative AI detectors, as they are generally unreliable and can falsely implicate innocent students.

In courses where AI use is prohibited, students who violate their instructor’s guidelines likely get more sleep, feel less stress, and finish assignments with greater ease. They are rewarded for breaking the rules. And they tend not to get caught.

Until tools exist that can accurately hold students accountable, Harvard cannot continue to rely on good-faith adherence to its AI policies.

Recent research found that 67 percent of college students think AI use is “essential” in the modern world. Yet only a third report receiving AI training from their institutions. Beyond Harvard, it is worth identifying which students are being left behind in this latest wave of innovation.

Recently, new College Dean David J. Deming told the incoming Class of 2029 in his convocation address that their Harvard education would equip them to be leaders in an AI-driven world. Under the current AI policy, that is not true for everyone. The College must prepare all its students for their futures, regardless of race, gender, socioeconomic status, or concentration.

AI might transform education. If it does, let’s make sure that every student stands to benefit.

Salma O. Siddiqui ’28, a Crimson Editorial editor, lives in Pforzheimer House.

Summary

Harvard students are increasingly using generative AI despite course-level prohibitions motivated by concerns over academic dishonesty and unfair advantage. A study shows that male, higher-income, and more educated individuals tend to use AI more frequently than their peers. The author argues that AI should be allowed in every classroom so all students can benefit equally as the technology becomes a standard workplace tool. Harvard’s current policy leaves AI regulation to individual educators, which the author sees as problematic, and without accurate detection tools the University cannot rely on good-faith compliance. The article concludes that preparing all students for an AI-driven world requires inclusive AI policies.
