Perspective

Trump Administration Official Says Quiet Part Out Loud on AI-in-Government Plans

Jordan Ascher / Feb 5, 2026

The headquarters building of the US Department of Transportation (DOT) in Washington, DC in June 2022. Shutterstock

Last week, ProPublica reported that the United States Department of Transportation is planning to use Google Gemini, a large language model, to draft federal transportation regulations. Writing a federal regulation is frequently a long and intensive process. Agency officials apparently believe they can outsource “80% to 90%” of that work, usually done by legal and policy experts, to artificial intelligence, “revolutioniz[ing] the way we draft rulemakings.” As the agency’s general counsel put it, “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.” DOT plans to be the “point of the spear” of a broader federal effort to use LLMs to speed rulemaking. That is consistent with reporting last summer that the erstwhile US DOGE Service hoped to use AI to facilitate the rescission of half of all federal regulations in a matter of months.

Clearly, the Trump administration is all-in on regulation-by-AI. Others are not so sure: although AI has the potential to greatly aid the work of federal regulators, agencies that over-rely on LLMs to do their work for them open themselves to legal and policy risk. What’s most remarkable about the DOT’s plans is that agency leaders seem to have little interest in mitigating or avoiding those risks. Instead, they apparently welcome them as an acceptable price to pay for speed and volume. “We don’t need a perfect rule on XYZ. We don’t even need a very good rule on XYZ,” the agency’s general counsel apparently boasted. “We want good enough. We’re flooding the zone.” The administration, at least behind closed doors, seems to have dropped the pretense that good governance is the goal.

There is a reason why federal rulemaking requires careful work. The default rules for how an agency may issue binding regulations are laid out in the Administrative Procedure Act (APA), a statute that is having a moment in the sun. On the front end, an agency must generally begin by issuing proposals explaining the action it intends to take. It must then field public comments (which can number in the hundreds of thousands or millions) and issue a final rule containing a detailed legal and policy justification. Final rules are often hundreds of pages long. On the back end, rules can be subject to rigorous judicial review. Litigants might ask courts to assess whether a rule comports with the Constitution and is authorized by congressional statute. The APA also directs courts to set aside rules that are “arbitrary” and “capricious.” Arbitrary-and-capricious review, though “deferential” in theory, can be rigorous in practice. Courts often flyspeck rules to confirm they are “reasonable and reasonably explained”—that is, free of factual or logical errors.

It is easy to see why some might be eager to incorporate LLMs into this process. Agencies need to generate cogent, detailed documents justifying their rules. And LLMs can produce vast quantities of high-verisimilitude text, even on technical matters, at the push of a button. Indeed, administrations of both parties have been incorporating AI tools into their work for years. Those efforts have intensified at the White House’s prodding, but not all efforts to use AI in rulemaking are as dramatic as the DOT and DOGE proposals.

The DOT proposal goes well beyond using LLMs to support agency decisionmaking; the models are reportedly set to do the deciding. ProPublica’s reporting is replete with evidence that administration officials want LLMs to play the leading role in making policy. Humans, in their view, will do little more than monitor “AI-to-AI interactions.” Agency staff fear their jobs in this paradigm would merely be “to proofread this machine product.” A plan proponent went so far as to deride detailed rule justifications as “word salad.” This conception of administrative decisionmaking raises serious legal and policy concerns.

For one thing, LLMs tend to produce outputs containing errors. There is the persistent problem of “hallucination,” which DOT officials seem largely to have dismissed out of hand. In a similar vein, LLMs are prone to acting sycophantically—reinforcing the explicit or implicit premises of the prompts they are given—and replicating biases or mistakes contained in the information on which they are trained. They may also struggle to accurately process long, complex documents.

This is, of course, a policy problem. LLMs, if left to their own devices, might produce policies that contain errors, lack a basis in evidence, or are downright misguided and inapt. All federal regulations are high stakes, but overreliance on AI to generate rules designed to prevent plane crashes and pipeline explosions could well put lives at risk. What’s more, it is hard to imagine that even the most advanced LLM can replicate and apply the expertise (much less wisdom) of agency policymakers or facilitate the consensus necessary to plan and implement an agency’s policy agenda.

The risk of LLM error is also a legal problem. A rule containing factual and legal mistakes may be invalidated in court. The Supreme Court has explained that a rule is arbitrary and capricious under the APA where an agency “relie[s] on factors which Congress has not intended it to consider, entirely fail[s] to consider an important aspect of the problem, or offer[s] an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise.” AI-generated rules may be particularly likely to contain errors like these, making overreliance on LLMs counterproductive to the agency’s policy goals (whatever they may be).

DOT’s plan also implicates a more fundamental legal question: what role must humans play in rulemaking? Federal law imposes a number of duties on agencies issuing regulations. They must consider the relevant factors and articulate a satisfactory explanation for an action, including by “provid[ing] a full analytical defense” of any model used to inform a rule. They must consider and respond to public comments. And, as a matter of basic due process, they may not fully prejudge important substantive matters. There are reasonable—albeit untested—arguments that an agency cannot satisfy these obligations by simply rubber-stamping (or merely proofreading) the output of an LLM. An agency must instead independently confirm that the reasoning contained in a rule is comprehensive, cogent, and reliable, especially when a tool prone to error generates the first draft. That could well require levels of human oversight and judgment similar to those in ordinary rulemakings—a reflection of courts’ longstanding “insistence that ultimate responsibility for the policy decision remains with the agency rather than the computer.”

Given all this, it was surprising to read that DOT’s general counsel does not care if LLM rules are “perfect” or even “very good.” They need only be “good enough” for “flooding the zone.” If he means the agency is willing to accept rules with inaccurate and internally inconsistent factual and legal analyses, that betrays a misunderstanding of the standards by which courts review agencies’ work. It also shows a disturbing lack of seriousness about producing strong and prudent rules that safeguard our transportation system.

It’s also saying the quiet part out loud. To the extent the architects of DOT’s plan are admitting that LLMs are, at present, principally useful for churning out “word salad” rather than generating strong regulations, that suggests that even AI’s strongest proponents do not yet see the technology as fit for regulating. Far from being able to efficiently generate effective rulemaking documents, LLMs are, in the administration’s view, best suited to muddying the waters. That concession reinforces why LLMs should, at least for now, play at most a supporting role in rulemaking.

The ProPublica report is a warning to advocates and litigators that it is time to prepare for the “flood.” Advocates might watch the Federal Register for new DOT proposals and final rules, combing through them for the kinds of errors in reasoning that courts typically classify as arbitrary and capricious. Moreover, arguments that an agency unlawfully rubber-stamped an AI-generated proposal do not depend on the substance of particular rules. Finally, advocates can push—both in court and in comments on proposed rules—for DOT to disclose whether it has used AI in a particular matter. ProPublica’s explosive reporting constitutes powerful evidence of why such disclosure is warranted—and, more generally, why minimally supervised reliance on LLMs to write federal regulations is misguided and unlawful.

Authors

Jordan Ascher
Jordan Ascher is Policy Counsel at Governing for Impact. He writes on a range of legal and governance topics, including the application of administrative law to federal agencies' use of artificial intelligence. He was previously a litigator in private practice and a law clerk to two federal circuit ...
