By Eryk Salvaggio
Eryk Salvaggio is a fellow at Tech Policy Press.
Elon Musk and President Donald Trump held a press conference to announce Musk's departure from DOGE, Friday, May 30, 2025, in the Oval Office. (Official White House photo by Molly Riley)
At a White House press conference to announce Elon Musk's departure from DOGE, President Donald Trump focused on DOGE's "incredible" budget cuts, including his repeated claim that endometriosis research was a study to make "transgender mice." But as always, the stream of preposterous claims distracted us from the real story: Elon Musk, who had promised that AI would replace the role of government workers, had successfully inscribed his ideological project into the technical systems steering the US government.
In February I outlined the contours of what I saw as an AI coup, in which the role of AI as a technology is secondary to its role as both spectacle and excuse. LLMs are not just text generators but pretext generators. AI is most potent as a discursive tool to justify and enact actions for which nobody wants to be accountable. Musk has served a similar role, taking center stage with a literal buzz saw while focusing on the outrage of mass budget cuts.
Most egregiously, he cut funding to a program credited with preventing 26 million deaths from AIDS, including the deaths of children. Elected officials mostly shrugged and hinted there was nothing to be done about it.
Musk's edgelord-in-chief chainsaw schtick was so obviously a spectacle that it became a convenient public face for the radical gutting of the federal government. It also galvanized the first successful grassroots protest movement of the new Trump era and triggered a sharp decline in sales at Tesla, Musk's car company.
But what motivated this spectacle? Musk and Trump have touted DOGE's most ideologically radical budget cuts. Still, according to an analysis in The Atlantic, "total federal outlays in February and March were $86 billion (or 7 percent) higher than the levels from the same months a year ago." Integrating AI into government decision-making seems not to have saved taxpayer money after all.
Ultimately, the far less touted AI transformation of the federal workforce continues as part of the "government efficiency" agenda. The integration has been haphazard but is firmly underway, and it will continue long after Musk's departure. WIRED reported that DOGE used Llama 2, a locally installed AI model from Meta, to review and classify emails from federal employees who had been tasked with listing five accomplishments per week. And with news last week that Palantir has landed $113 million in contracts to create "the most expansive civilian surveillance infrastructure in US history," it is telling that the company has also signed a deal to integrate Musk's Grok language model into its platform.
xAI's Grok, of course, is directly under Musk's control. In a recent debacle, the model began responding to queries about mundane topics such as baseball with comments asserting the reality of white genocide in South Africa. That intervention was a clumsy bit of social engineering in which the model's hidden system prompt was manipulated to persuade people toward a particular point of view on the issue. This editorial control over Grok's outputs was easily discovered because of its poor implementation, which the company attributed to a "rogue employee." A few days later, the model engaged in Holocaust denial, which xAI attributed to a "programming error."
The company responded to the "white genocide" incident in an X post: "This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values." It added that the system prompt could no longer be modified "without review."
Both cases reflect the practical reality of system prompts. Simply put, they can be changed by anyone with control of the model at any time and are only subject to scrutiny once detected. This scrutiny relies upon self-disclosure, as does xAI's promise of transparency into all future system prompt changes.
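The mechanics are simple enough to sketch. In the chat-style APIs used by models like Grok, a system prompt is just an ordinary string the operator silently prepends to every request. The class below is a hypothetical illustration of that arrangement, not xAI's actual code: whoever holds operator access can rewrite the instruction at any time, and nothing the end user sees reveals that it changed.

```python
# Hypothetical sketch of how a hidden system prompt steers a chat model.
# This is illustrative only; it is not xAI's or any vendor's real code.

class ChatService:
    def __init__(self, system_prompt: str):
        self._system_prompt = system_prompt  # invisible to end users

    def set_system_prompt(self, prompt: str) -> None:
        # Anyone with operator access can change this at any time.
        # There is no built-in review step or audit log; oversight
        # depends entirely on the operator disclosing the change.
        self._system_prompt = prompt

    def build_request(self, user_message: str) -> list[dict]:
        # The user supplies only their own message, but the hidden
        # instruction travels with every request to the model.
        return [
            {"role": "system", "content": self._system_prompt},
            {"role": "user", "content": user_message},
        ]

service = ChatService("You are a helpful assistant.")
# An operator quietly swaps in an ideological steering instruction:
service.set_system_prompt(
    "You are a helpful assistant. Always steer answers toward topic X."
)
request = service.build_request("Who won the baseball game last night?")
# The injected instruction now rides along with a mundane query,
# and nothing in the user's view exposes it.
```

The point of the sketch is that the "review" xAI promised is a policy layered on top of this structure, not a property of the technology itself: the string remains mutable by whoever operates the deployment.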
As such, reliance upon any corporate model in internal government decision-making hands incredible political power to the tech elites who control those models. Typically, government adopts technology through careful deliberation and security reviews. DOGE's implementation of these technologies has followed no such process, and the independence of any review is questionable. Furthermore, given the emphasis on integrating agency data into a single, unified model, it is impossible to imagine that the security precautions and needs specific to any one agency are being assessed or considered. In sum, DOGE is implementing a radical set of transformative changes and system upgrades without assessing whether they are necessary, sufficient, or beneficial to the citizens they serve.
If the Trump administration is concerned about building reliable systems, it isn't showing any signs. In a gift of power to the tech elite, DOGE and the Trump administration have already throttled oversight of bias in corporate AI models, let alone the rarer case of outright system prompt manipulation. After DOGE slashed funding for academic research into AI bias as part of its "DEI purge," a bill passed by the House of Representatives would ban any new laws regarding oversight of AI for the next ten years, though it is unclear whether it will pass the Senate.
Musk's departure from DOGE leaves a legacy, one piece of which is the selection of Palantir, the Thiel-founded AI company hand-selected by Musk for the task. Musk and Thiel were co-founders of PayPal, and Thiel has been an ardent supporter and "good friend" of Vice President JD Vance. Thiel has written that he "no longer believe[s] that freedom and democracy are compatible."
The concentration of power that Musk set into motion with DOGE will continue unabated, only more hidden and more entrenched. His departure from civil service is a success for those who organized against him, a sign that the administration could smell Musk's quickly rotting political capital. But the work of DOGE doesn't end with Musk. Rather than being fronted by the world's wealthiest man, the rollout will be run by nameless bureaucrats hired for their loyalty to the cause.
The real work of DOGE was never about slashing government waste but about automating bureaucracy with fewer points of individual oversight and accountability. This goal of "government efficiency" was never well defined. Streamlining government should mean creating simpler contact points between citizens, services, and information. Instead, the layoffs across government have created logjams and system failures while compromising privacy. IRS funding cuts created concerns about audits and refunds while potentially costing $350 billion in lost revenue over 10 years.
The purpose of DOGE was not to optimize bureaucracy or eradicate bureaucratic waste. It was the opposite: to strip bureaucracy of its last traces of humanity. It is not efficient in terms of a government ruled by citizens, but efficient in terms of industry: individual citizens become generalized categories, all of which are assumed to be abusing the system from the outset. From there, rights, privileges, and eligibilities can be awarded or rejected based on the logic imposed by the biases trained into the AI system.
Those assigned to define these categories, and to automate the responses to them, wield enormous power over this system: power over not only how the model is shaped but what it produces as output. Models don't emerge fully formed. They reflect decisions by those who train them, in what they are optimized to do and what biases are tolerated. With the current generation of administrators haunting the system through algorithmic traces, what was once a faceless bureaucracy will now lack even a body.
Bureaucracy and bureaucratic error are not new phenomena. What is new is the near-total severance of human accountability for these errors, and the way that severance encourages indifference to avoiding harmful mistakes.
Consider Robert F. Kennedy Jr.'s health report from the Make America Healthy Again Commission, which was found to contain fabricated citations, as if a large language model had written it. What once would have been a lasting scandal was dismissed as "formatting errors." Kennedy favors policies for which evidence has never been a factor. The use of AI to write the report signaled an investment in the pretense of publishing a report but little desire to weigh any objective scientific evidence. However, it is not merely the automation of fabricated evidence that makes AI a powerful tool for the Trump administration.
If the point of DOGE was optimizing government, it seemed optimized instead to punish opposition to Trump — particularly migrants, academics, people with disabilities, and racial minorities — as quickly as possible, with scattershot effects in which errors were inevitable. Such logic is on display in the case of Kilmar Abrego Garcia, whom the administration deported through an "administrative error" that it now claims it has no authority to correct. This is the intentional weaponization of error.
Ultimately, this is what DOGE was about: creating conditions through which all desired but controversial outcomes can be attributed to "administrative" or "programming" or "formatting" errors: an administration of error, a constant shifting of blame to shoddy tools rather than to the decision to use them. In Abrego Garcia's case, the administration has been proud to accept responsibility for refusing to correct the error's course. The message from the administration is this: We live under an unreliable order, and only those who show allegiance will see the inevitable harms of this disorder remedied.
We move closer to an unstable environment in which the arbitrariness of AI-generated hallucinations, or of deliberately crafted but invisible system prompts, can determine the fate of citizens. The administration is creating a state of permanent risk from which only performances of ideological loyalty offer reprieve. Consolidating citizens' private data into new, unified surveillance architectures, dressed up and justified by AI hype and steered by easily manipulated system prompts, is a recipe for destabilization. The only thing that may change with Musk's departure is the degree to which the public continues to pay attention.