By Joseph Gordon-Levitt
A long time ago, kings owned all the land, while serfs worked that land without owning anything. Back then, if a serf had said, “Hey, I think this little plot of land where I built my house and farm my crops should belong to me,” he would have been laughed at.
“Oh yeah, how’s that going to work?” the king would have asked. “Is every one of you going to own your own little plot of land? Will you little people be able to buy and sell land to one another? How are you going to keep track of who owns what? Obviously, none of this is doable.”
In today’s increasingly digital world, data is becoming as valuable as land. And the lords of Silicon Valley don’t want us owning our data any more than the old kings wanted serfs owning their land.
Last week at the questionably titled “Winning the AI Race Summit” in Washington, D.C., President Donald J. Trump was talking about whether big tech companies should have to share the wealth with all the people whose skill, talent and labor contribute to the value of their extremely lucrative AI products.
“You just can’t do it,” said Mr. Trump, “because it’s not doable.”
I consider myself an extremely lucky artist. I’ve gotten to be a part of some incredible creative projects, but what I actually feel luckiest about is the people I’ve gotten to collaborate with. Making things together with my fellow passionate artists — whether “professional” or “unestablished” and whether “above the line” or “below” — is truly one of the great joys in my life. So you might assume I’d hate the very idea of using technology to do creative things that in the past could only be done “manually” by humans. But this isn’t the case. I don’t have a problem with AI as a technology; I think some of the new creative tools are inspiring. However, I believe we all have an urgent problem with today’s big AI companies’ unethical business practices.
The truth is that today’s GenAI couldn’t generate anything at all without its “training data” — the writing, photos, videos and other human-made things whose digital 1s and 0s get algorithmically crunched up and spit out as new. For more than half a decade now, AI companies have been scraping up massive amounts of this content without asking permission and without offering compensation to the people whose creations are so indispensable to this new technology.
Silicon Valley’s justification for what I believe is a clear case of theft — which Mr. Trump echoed — is that a Large Language Model (LLM) is no different from a person who, for example, reads a book and takes inspiration from it. But this comparison is not only inaccurate, it’s dystopian and anti-human. These tech products are not people. And our laws should not be protecting their algorithmic data-crunching the way we protect human ingenuity and hard work.
Enter Republican Sen. Josh Hawley and Democratic Sen. Richard Blumenthal (to thunderous applause), who also introduced the AI Accountability and Personal Data Protection Act just last week. This new legislation would bar AI companies from training on copyrighted works, and would allow people to sue over the use of their personal data or copyrighted works without consent. In stark contrast to Mr. Trump’s Silicon Valley bootlicking summit, these two lawmakers from both sides of the aisle are standing up for working Americans against the giants of the tech industry. We should all hope their bill passes.
There are also glimmers of hope coming from the judiciary. In contrast to Mr. Trump’s comments, the White House’s official AI Action Plan doesn’t address the question of training data and intellectual property, and administration officials have said it should be left up to the courts. A few weeks ago, Mark Zuckerberg’s Meta declared victory on the issue when a federal court ruled against a group of authors who had sued for violation of their copyright. But in fact, the judge in that case said the authors probably lost only because their lawyers made the wrong argument about the legal framework of fair use.
In his ruling, Judge Vince Chhabria wrote: “No matter how transformative LLM training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.” So, if I were Zuck, I wouldn’t be celebrating too hard yet. There are plenty more lawsuits against AI companies still pending — including a recent first from major Hollywood studios — and I can only imagine the next set of plaintiffs will heed Judge Chhabria’s advice to focus on market harm.
But what if none of this works? What if AI companies are simply allowed to continue this unethical practice? Of course no one can predict the future, but it stands to reason that this would eventually spell the end of every other commercial content business. Film and television, for sure. Professional journalism, as well. The new and vibrant creator economy of today’s YouTubers, podcasters and newsletter writers, all gone. I’m not saying people won’t make stuff anymore; I’m just saying they won’t be able to earn a living from what they’ve made. Because as long as an AI company can copy all of our content into its model at no cost and spit out quasi-new content for close to no cost, there’s no logical business case for paying human creators anymore.
Don’t get me wrong — I do believe it’s possible that this new technology could propel a great leap forward in human creativity. But only if there’s a system in place that rewards people for their novel creative work as it’s incorporated into the AI models. Without such a system, and with no economic incentive for people to be creative, our media landscape and public square will become absolutely devoid of anything but algorithmically regurgitated slop optimized for attention maximization and ad revenue.
As concerned as an artist like me may be with the future of art and creativity, this issue actually reaches far beyond the media industry. It’s also about ordinary people’s everyday struggle just to make ends meet.
We creators might be some of the first to feel the threat, but anyone who does their work on a computer is in the same crosshairs: people who work in marketing, or logistics, or finance, or design, just to name a few. And while white-collar jobs will be impacted earlier, blue-collar jobs will follow soon enough, especially as autonomous vehicles and robotics come into further use. Employment as a plumber is considered safe for now, but perhaps not for our kids’ generation. And how will an autonomous plumber-bot know how to do its job? The AI powering it would be trained on data that came from millions of human plumbers doing their jobs. Wouldn’t those humans deserve some compensation? Not if Silicon Valley gets its way. The decisions we make today really could commit us to a future where any valuable work done by any human being will become fair game for a tech company to hoover up into its AI model and monetize, while that human being gets nothing.
People feel this coming. In fact, in a recent poll, 77 percent of Americans said they’d rather get AI right than get it first. Of course, this sentiment is bad for business, so Big Tech responds by sounding the national security alarm. Mr. Trump echoed this common Silicon Valley refrain last week, warning that American AI companies must be allowed to continue to steal everyone’s data or else we’ll lose to China. Why do you think the summit was titled “Winning the AI Race”? Who’s going to argue with a matter of national security?
But let’s be real. These AI businesses have no loyalty to the American people. Their only obligation is to their shareholders. Plus, if our national security would really be compromised if AI companies had to compensate people for their data, then theoretically, shouldn’t the government be willing to make up the difference? I was just corresponding with a D.C. lawyer about this, and he brought up the “Takings Clause” of our Fifth Amendment: “… nor shall private property be taken for public use, without just compensation.” To me, it still seems like the tech companies should pony up, not the government. But I think it goes to show that all this urgency to “beat China” is not really a matter of national security. It’s just competitive businessmen wanting to beat their competitors. Cash, as one great American poet said, rules everything around me.
I didn’t vote for President Trump, but I think most of the people who did vote for him genuinely believed he would stand up against a powerful establishment and fight for working Americans. But there is no establishment more powerful in the world today than the handful of gigantic businesses building and selling AI, and none posing a greater threat to the American people’s widespread prosperity. If Mr. Trump really wanted to fight for working Americans, he would join Senators Hawley and Blumenthal in building out policy to protect the public good over Silicon Valley’s bottom line. It’s what he was elected to do. And it is, in fact, doable.
Joseph Gordon-Levitt is an actor, filmmaker, and founder of the online community HITRECORD. He recently started publishing “Joe’s Journal” on Substack and is set to direct an upcoming thriller about AI for Rian Johnson and Ram Bergman’s T-Street.