The views expressed below are those of the author and do not necessarily reflect those of the Carr-Ryan Center for Human Rights or Harvard Kennedy School. These perspectives have been presented to encourage debate on important public policy challenges.
America’s AI Action Plan
The further development of Artificial Intelligence (AI) is safely left to Big Tech, climate change is not a problem the current generation should address, and neither AI nor climate change is a human rights concern. If you believe all this, the new American AI Action Plan along with executive orders (EOs) on Promoting the Export of the American AI Technology Stack, Accelerating Federal Permitting of Data Center Infrastructure, and Preventing Woke AI in the Federal Government will be good news. Equally welcome will be the administration's concurrent efforts to undermine the legal basis on which the Environmental Protection Agency (EPA) combats climate change, as well as its efforts to force the European Union to acquire exorbitant amounts of American fossil fuels.
But suppose you do believe there should be appropriate governmental and indeed societal oversight over what the private sector does with AI, that this generation does have a responsibility to combat climate change, and that doing all this is a human rights matter. Then you might well think of these developments as more evidence that this administration takes enormous risks with human flourishing, including the flourishing of Americans. You will probably also think this administration cares neither about future generations, nor about partnerships around the world, nor even about bringing along much of the country.
The U.S. treats countries in its orbit as customers at best and client states at worst.
Instead, what you might well see is how this administration imagines the emerging world order. My purpose is to argue that this vision should concern us: a world in which the U.S. and China are major competitors, but the U.S. has a decisive edge; the U.S. treats countries in its orbit as customers at best and client states at worst; AI becomes critical to the American economy—both domestically and for exports—while its large-scale deployment advances what Shoshana Zuboff has aptly termed Surveillance Capitalism, a version of capitalism driven by collecting and mining data; the energy needed to drive the American AI Empire largely comes from fossil fuels, whose sale to other countries is a major source of revenue; and behind all this is something that has become an ever-present feature of how this administration operates—gaslighting as a way of making those who do not comply fall in line, here by accusing them of building AI that fails to be “ideologically neutral.” But as the EO on “woke AI” reveals, “ideologically neutral” is AI that reflects the worldview of this administration. Human rights do not appear in this brave new world.
You might also recall the prominent appearance of Tech bosses at Trump’s inauguration—some months before one such boss’s lavish wedding in Venice threw a spotlight on the excessive wealth differences that prevail in our ailing world. Whatever investments Big Tech has made to curry favor, they are paying off: Big Tech gets deregulation, export support, and lenient rules around training data. The U.S. already hosts about 75% of global compute capacity, followed by China’s 15%. (Also see here.) Part of the Action Plan is to make others choose between America’s AI Empire and committing the “unforced error” of joining its rivals. (For a broadly-based appeal to create a People’s Action Plan on AI instead, see here; for reactions by the Council on Foreign Relations, see here.)
The Three Pillars
The Action Plan comes with three pillars. The first is to remove regulation that slows down AI development. This part is mostly about federal regulation—and in line with JD Vance’s repudiation of “AI safety” in Paris in February—but also includes regulation by states. An earlier version of the taxation bill the President signed in early July (previously known as the One Big Beautiful Bill) included a moratorium on AI regulation by states, but that provision did not reach the finish line. Yet the concern that states should not undermine the administration’s permissive approach to AI regulation reappears in the Action Plan, which seeks to bar states with “burdensome AI regulations” from federal AI-related funding. What counts as burdensome remains undefined. But presumably it is regulation that emphasizes safety.
Climate change will not cease to be real because the U.S. fails to integrate it into risk assessments.
Also, the National Institute of Standards and Technology (NIST) is asked to revise its AI Risk Management Framework, “to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” How this administration sees the connection between AI and climate is a theme throughout this commentary, but we should note immediately that climate change will not cease to be real because the U.S. fails to integrate it into risk assessments. Diversity, Equity, and Inclusion (DEI) is discussed below, but we should never cease to be startled that this administration lumps it together with climate change. “Misinformation” has been a target at least since Vance’s infamous speech at the Munich Security Conference in February. What is curious here is that the Action Plan does recognize the dangers of synthetic media for the legal system. Yet the administration never explains how one could make sense of this point without acknowledging that “misinformation” is not a “Soviet-era word,” as Vance pronounced, but a real problem for contemporary America and beyond.
That the instructions to NIST are listed in a section entitled “Ensure that Frontier AI Protects Free Speech and American Values” confirms how gaslighting has become the norm for this administration: it accuses others of doing what it is itself doing—and those others must then go to the trouble of pointing this out, running the risk of drowning in a cacophony. (Also see here. That this language treats free speech as something separate from American values barely registers amid these larger issues.)
This is a plan that provides content for the alliance between Government and Big Tech, not for American citizens.
We must be clear-minded: this deregulation occurs under particular circumstances. We are dealing with an executive branch eager to consolidate presidential power; the recent taxation bill allocates enormous funds to building up security forces, especially in immigration and customs; and one net result of the Musk-led efforts to increase governmental efficiency is considerable centralization of government-held data, at the expense of safety. It is under these circumstances that the Action Plan promotes AI applications for law enforcement and public safety, including predictive analytics, crime prevention, and secure evidence handling. And it is under these circumstances that the plan highlights the importance of securing AI infrastructure and developing resilient AI systems, particularly for safety-critical and homeland security applications. That the plan does not provide much guidance on how AI use might affect individuals as either citizens or consumers is worth noting. This is a plan that provides content for the alliance between Government and Big Tech, not for American citizens.
The second pillar is to develop AI infrastructure by investing in the power grid, increasing the number of data centers, and advancing national semiconductor manufacturing. This is where the connection to energy policy is important: the taxation bill phases out tax credits for wind, solar, and electric cars. It also makes it cheaper to drill and mine for fossil fuels on federal lands. (On how all this is a disaster for the climate, see here.)
The U.S. is looking for customers, not partners.
The third pillar is to get other countries to buy America’s “full AI technology stack”—hardware, models, software, applications, standards. As Chatham House has already noted, as far as AI is concerned, the U.S. is looking for customers, not partners. That much was also visible in the trade “deal” with the EU. That Trump’s arrangement with Ursula von der Leyen includes an EU commitment to buy American fossil fuels in enormous quantities already sets the stage for future friction between the parties. The EU itself does not buy fuels, and purchasing them at this level makes no sense in light of EU climate commitments.
As economist Moritz Schularick pointed out on ARD German news on July 28, the style in which this arrangement was made and the uneven conditions were reminiscent of trade deals from the colonial period. Back then, a militarily superior power would impose terms on the weaker party. The term “imperial condescension” is appropriate for this approach. While the EU is on a par economically with the U.S., Europe needs American protection against Russian aggression. The EU is keenly aware of that. All this seems to be part of Trump’s restructuring of transatlantic relations. In the future the price for protection against Putin might well be acquisition of American AI—and fossil fuels, a major setback for any efforts to keep climate change bearable.
Also, to be more competitive with China—which is already routinely deploying open-source AI models—American companies are now urged to make their models open-source as well. This is presumably meant to help make American AI the global standard. This part of the plan came as a surprise to many observers.
Preventing “Woke AI”—and the Damage Done Thereby
But it is not only energy and trade policy that merge with AI policy in the Action Plan: the Action Plan itself and especially the accompanying EO on “woke AI” usher in some of this administration’s other priorities—the systematic dismantling of agendas to give more protection to society’s most vulnerable. As a briefing by a law firm noted, this EO “will almost certainly be met with legal challenges.” But that is not my concern here. I am interested in the human rights dimensions of this plan and in its place in the larger vision for world order.
Going back to an EO by John F. Kennedy, we used to think of many efforts to make sure American society would hold its promise for everyone as “affirmative action.” “Affirmative action” quickly came to stand by itself as an expression. But when introduced it was affirmative (presumably as opposed to, say, lackluster) action “to ensure that applicants are employed, and employees are treated [fairly] during employment, without regard to their race, creed, color, or national origin.” More recently, these efforts came under the heading of DEI. Diversity—to make sure variegated origins and cultural backgrounds would be properly considered and not be a source of disadvantage for anyone. Equity—to make sure that, across social, economic, and political contexts, participants would be treated as equals. And inclusion—to signal that the goal was to build a society not characterized by domination. (All this, by the way, is in the spirit of the values in the American Declaration of Independence, whose 250th anniversary we celebrate next year—see here and here.)
This administration’s approach over the last several months seems to have been to recast any such efforts as somewhere between an illegitimate social agenda pursued by extremists (Marxists, socialists, left-wing radicals, lunatics—people out of touch with America) and outright civil rights violations investigated by the government. This by itself must make us pause. The point of the civil rights movement—which gave rise to that affirmative action and its successors—was to make sure rights of American citizens were indeed for all citizens. Such efforts are now depicted as violations, as trespasses against certain people’s rights.
This reversal now finds its way into AI policy. According to the EO on “woke AI,” we want “unbiased” and thus “trustworthy” AI. That sounds laudable—anyone who has supported the civil rights movement could applaud. What makes AI unbiased, according to this EO, is, first, that it is truth-seeking, prioritizing “historical accuracy, scientific inquiry, and objectivity” while acknowledging “uncertainty where reliable information is incomplete or contradictory.” Second, what is needed is “ideological neutrality”: AI should “not manipulate responses in favor of ideological dogmas such as DEI.” As the EO explains:
In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
There is a lot to unpack but let us note the troubling irony, which is the same irony with which civil rights considerations have been turned against advocates of civil rights for everyone. We are supposed to believe that when we strive for diversity, equity, and inclusion—along with presenting the inconvenient truth that American history has often ignored these principles—we end up with a whole slew of distortions. Meanwhile, if we just lie back, accept the status quo, and stick to a historical narrative that sweeps those failures under the rug, we find truth, objectivity, and accuracy.
In other words, history spun by dominant groups (in which white, heterosexual men have tended to do well) is supposedly the real thing. Everything else is at best noise in the background and at worst a malicious effort to belittle or erase the achievements of people who happen to be in the dominant group. That is what the gaslighting is here: it is the other side that is accused of doing certain things (here, distorting history, attacking rights) that in fact the accuser (the second Trump administration) is doing.
A recent piece in TechPolicy.Press reveals what this EO’s understanding of “ideological neutrality” implies. First, hiring algorithms that ignore structural bias have filtered out qualified candidates from underrepresented backgrounds, as seen in Amazon’s now-shelved recruitment AI. This EO seeks to deprive us of tools to address such situations.
Second, public-sector AI tools deployed in policing, health access, and education might well encode past discrimination as "neutral" once DEI-related considerations must be removed from evaluation metrics. We do have tools to reduce bias effectively while preserving or improving accuracy; a minimal sketch of one such audit appears after the third point below. (For the ACLU’s take, see here.)
Or, third, consider how climate-informed AI models are critical for accurate predictions in agriculture, disaster response, and defense. Political pressure to strip climate-change considerations from federal frameworks like NIST’s AI Risk Management Framework might well degrade their precision and strategic value. There are no longer any serious scientific doubts that climate change is real. That the U.S. ignores basic facts means future Americans must suffer and go to enormous lengths to make adjustments that could be made today at much smaller cost. The increased prevalence of fires like the one that ravaged LA in January, and of floods like the one that ravaged parts of Texas in July, is a known consequence of climate change.
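To make the second point concrete, here is a minimal sketch, in plain Python with invented numbers, of the kind of evaluation metric at stake: an audit of how often a model’s decisions favor each demographic group. Everything in it (the data, the group labels, the function names) is hypothetical and for illustration only, not drawn from any deployed system.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive decisions per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups;
    near zero means candidates advance at similar rates across groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit of hiring decisions: 1 = advance candidate, 0 = reject.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5 -> substantial skew
```

Stripping such DEI-related metrics from evaluation, as the EO’s reading of “ideological neutrality” seems to demand, does not remove the skew; it only removes our ability to see it.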
Why Is the Gaslighting Here?
One might ask—why are we getting this EO on “woke AI” at all? Within a mindset that puts a certain understanding of American interests first rather than embracing global challenges as global challenges, it certainly makes sense to combine AI policy with energy and trade policy—but why this? It seems the best answer is that this administration habitually proceeds by projecting a sense of national crisis. The nation is overrun by illegal immigrants—except it is not. American campuses are drowning in systematic antisemitism—except they are not. Other countries, especially in Europe, are disregarding the rights of their citizens, hanging on to Soviet-era concepts, and undermining their democracy, and so the U.S. has to make sure this does not happen here—except these countries do nothing of the sort.
In some of these examples of national crisis the accusations are accompanied by outright gaslighting. Accusations of spreading ideological distortions are a case in point. Appeals of this sort are used to construct a moral high ground on which supporters are gathered and from which opponents can be condemned—sometimes “just” with words, sometimes with prosecutions. This EO sets the stage for doing all of this in the AI domain. And of course, it helps the government enshrine its preferred view of the world in large language models.
Human Rights and AI: From Apathy to Advocacy to Abdication
Part of America’s complicated history with human rights is that it has often found alternative vocabulary for matters the human rights movement casts as human rights concerns. The values of diversity, equity, and inclusion are a case in point. In the rapid development of AI and digital technologies more broadly over the last 15-20 years we can distinguish three periods in which human rights have entered the technology world in different ways.
Until roughly the mid-2010s, the relationship between technology development and human rights was one of apathy. Zuckerberg’s slogan “Move Fast and Break Things” characterized this period. Human rights language or its equivalents appeared at best on the margins.
But then many people got anxious about the rapid development of digital technologies, and about how the future of humanity would thereby be handed over to a private sector driven by the desire to move fast and break things. In the mid-2010s, a different attitude—advocacy for human rights in technology development—took shape in at least three ways. I should warn the reader that a lot of detailed references are coming up. The point is to demonstrate that the advocacy has indeed been intense.
First, a slew of academic books appeared that reached broader audiences and drew attention to the dark side of digital technologies. Let me mention a few. Cathy O’Neil’s Weapons of Math Destruction explained how across many sectors algorithms were deployed to make the rich richer at the expense of the poor. Ruha Benjamin’s Race After Technology explored how new technologies reflect and reproduce existing inequities but are promoted and perceived as objective and progressive. Safiya Noble’s Algorithms of Oppression investigated how search engines perpetuate inequalities and reinforce racism. Virginia Eubanks’ Automating Inequality explained how automated systems (algorithms, risk models, databases, etc.) are used by governments and agencies to make decisions about public services (welfare, housing, child protection, etc.) and in that process often perpetuate inequality, bias, and discrimination. Meredith Broussard’s Artificial Unintelligence warned of how computers often misunderstand the world. Solon Barocas and Andrew Selbst alerted us to the many ways in which data mining might lead to biased results.
Also, Shoshana Zuboff’s Age of Surveillance Capitalism explored how the current form of capitalism is driven by the large-scale collection and mining of data in ways that are so encompassing that she speaks of the commodification of people’s whole lives. Moreover, in a widely debated publication in ProPublica, Julia Angwin and co-authors argued that algorithms used in criminal justice perpetuate bias against Black Americans. Joy Buolamwini founded the Algorithmic Justice League to draw attention to how AI was doing harm, particularly, though not exclusively, in the domain of racial prejudice. Timnit Gebru too has drawn attention to these issues.
Moreover, Finn Brunton and Helen Nissenbaum’s Obfuscation sought to teach users to push back, software developers to keep user data safe, and policy makers to gather data without misusing it. Kate Crawford’s Atlas of AI offered a material and political perspective on what it takes to make AI and how it centralizes power. Not all these publications make human rights central. But their concerns are readily captured in such terms. An article in the New York Times from March 2023 had the title “AI is Being Built by People Who Think It Might Destroy Us”—capturing the voices of major figures in the field for whom exactly that was true. Brave new world indeed.
Second, a flurry of declarations on principles of AI safety appeared that often gave a primary role to human rights, explicitly or implicitly. The Future of Life Institute, founded by Max Tegmark, Jaan Tallinn, and others, published the Asilomar AI Principles. The Center for AI Safety, under the leadership of Dan Hendrycks, Oliver Zhang, Josue Estrada, and others, published its Statement on AI Risk, signed by major researchers and industry leaders in the AI domain such as Geoffrey Hinton, Demis Hassabis, Sam Altman, and Dario Amodei. The Partnership on AI brought together over 90 companies and non-profits to explore best practices around safe AI.
To continue this list, the Organization for Economic Cooperation and Development (OECD) published a set of values-based principles to guide the future of AI development. AI for Good was established by the International Telecommunication Union (ITU), the UN’s leading agency for digital technologies; it has organized several summits to explore how to deploy AI for global problem-solving. The European Commission published Ethics Guidelines for Trustworthy AI. The AI Now Institute published on algorithmic accountability and transparency, among many other topics. The World Economic Forum created the AI Governance Alliance to promote the development and deployment of AI systems that are transparent, accountable, and aligned with human and societal values. And the University of Montreal started a civil society initiative that led to the Montreal Declaration on Responsible AI. Many others could be cited.
Third, the EU took the lead on devising AI regulation that includes human rights commitments. The EU AI Act is the world’s first comprehensive law to regulate AI, aiming to create a framework for trustworthy, safe, and ethical AI use across the EU, with an eye on setting global norms. The act came into force in 2024 and formulates a risk-based approach to AI regulation.
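To indicate schematically what “risk-based” means here, the sketch below encodes the Act’s four risk tiers as a simple lookup. The tier names reflect the Act’s structure; the example systems and the code itself are only an illustrative simplification of a long legal text, not legal advice.

```python
# Schematic sketch of the EU AI Act's risk-based approach (illustrative only).
# Tier names follow the Act; the example systems are commonly cited instances,
# mapped here by hand rather than by any official classification tool.

RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g., social scoring by public authorities)",
    "high":         "heavy obligations: risk management, human oversight, documentation",
    "limited":      "transparency duties (e.g., telling users they face a chatbot)",
    "minimal":      "no new obligations (e.g., spam filters, AI in video games)",
}

def obligations(tier: str) -> str:
    """Return the regulatory consequence attached to a risk tier."""
    return RISK_TIERS[tier]

# A hypothetical classification: a CV-screening tool counts as high-risk.
print(obligations("high"))
```

The point of this structure is that obligations scale with the potential for harm, rather than approving or banning AI wholesale.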
All of these efforts became even more intense after the release of GPT-4 in March 2023. At the AI for Good Summit in the summer of 2023, Stuart Russell—a leading computer scientist—presented the release of GPT-4 as a wake-up moment. He had long argued that humanity’s attitude towards AI was as if aliens had sent an email announcing their impending arrival and humanity had responded with an out-of-office message. For Russell, reactions to this momentous release meant humanity was back in the office. Indeed, much activity resulted around the world, including in the Biden White House (see, e.g., here and here).
When Biden acted, it was after years of intense advocacy around more cautious development of AI guided by human rights principles or similar approaches. And it was when technological breakthroughs made clear to many people that humanity needed time to adjust to this rapid change. AI might transform human life permanently, and lead to events whose importance surpasses anything that ever happened in history. It would make sense to take all this slowly—and a lot of people have recognized this since roughly 2016.
But then the election of 2024 happened. And now we have the new Action Plan. As far as human rights in the AI domain are concerned, given the importance of the U.S. there, we are now making the transition from advocacy to abdication—unless considerable political will can be mustered to push back on these developments.
Brave New World
The aforementioned piece in TechPolicy.Press reminds us that, sometimes, model alignment does go too far. When users asked Google’s Gemini to generate images of historical figures, the system returned visual outputs so diverse that they erased historical accuracy. This matter was referenced in the EO on woke AI.
But while this should not happen, nothing like it is an argument for the government to take charge the way it does with this EO. The model can be retrained: “the decision to launch Gemini despite this flaw reflects Google’s rush to market, not some underlying ideological program.” Banning concepts like misinformation, DEI, or climate change altogether, by contrast, builds distortion into the system as a permanent feature, not a fixable bug.
Presumably the intention of the Action Plan is not to fix bugs but to contribute to a brave new world of this administration’s liking. That is a world in which the American private sector designs AI in close interaction with an ever more powerful executive branch; the needed infrastructure is largely powered by fossil fuels; climate change is set aside as a problem for future generations, if it is acknowledged at all; America no longer has allies, only customers or client states; and a sense of national crisis and plenty of gaslighting drive the political dynamics in the U.S. and beyond.
The Action Plan does not imagine a role for human rights or for any kind of safety in this brave new world.
After about a decade of human rights advocacy in the tech domain—with mixed results, admittedly—we now seem to be entering a period of abdication. The Action Plan does not imagine a role for human rights or for any kind of safety in this brave new world. This is a world Big Tech will like, and that central figures in this administration will like. It is also a world many Americans will not welcome. Many of them took to the streets for the “No Kings” demonstrations in June. We must hope they will keep doing so. And as a recent op-ed by conservative writer Ross Douthat pointed out: “Whatever your ideals or fears, whatever your beliefs about the good society, the battles you care about will be won or lost in the United States.” In no domain is this more accurate than in tech policy and the ways in which it does or does not pay attention to human rights.
Mathias Risse, Faculty Director, Carr-Ryan Center for Human Rights