Perspective

The Anthropic Pentagon Standoff and the Limits of Corporate Ethics

Sharon Strover / Mar 12, 2026

The Pentagon, headquarters of the United States Department of Defense, in Arlington County, Virginia. Shutterstock

I recently joined a spin-off session from an annual Ethical AI Symposium at UT-Austin to discuss the Anthropic-Pentagon dispute with two other professors. The discussion centered on Anthropic’s refusal, under the Trump administration, to allow its AI models to be used for mass domestic surveillance or autonomous weapons – what the Department of Defense (or Department of War) terms “lethal autonomous weapon systems.”

The DoD argued that the only restriction should be that AI be used for “lawful purposes.” Anthropic rejected that framing, and when the new “Department of War” maintained its conditions, the contract opportunity migrated to OpenAI (whose lead robotics researcher has since quit the company over that contract). On its face, the episode looks like a principled corporate stand against government overreach. But the deeper dynamics resist a simple hero-versus-villain narrative in which the company emerges as the principled actor.

First, there is no real hero here. Government-versus-corporation skirmishes are a recurring feature of a country that prizes corporate liberalism. The United States has a long history of cooperation between the defense establishment and private technology firms, and the lines between them are blurrier than any single contract dispute would suggest. Anthropic’s chatbot Claude has been used widely by the Defense Department and is integrated with products from Palantir, a data analytics company deeply invested in surveillance systems. Anthropic had also placed other tools with the Department of Defense, including some used in the current Iranian conflict.

More broadly, when we look at the history of communication companies in conflict with the government or regulatory agencies over issues like content regulation or industry structure, they are negotiating the terms under which companies can operate and what role the government might have in smoothing the path toward commercial gain. It is not an accident that the US was singular in establishing a wholly commercial broadcast system while other countries opted for State-run entities. (Tom Streeter’s book “Selling the Air” is a masterful illustration of this dynamic in broadcasting.) US antitrust law has also tended to allow significant consolidation within these markets.

When we ask whether one company is “better” or “worse” than another, we lose sight of the broader matter of values. The real lesson is the importance of democratic structures that can chart a durable ethical path reflecting societal values, rather than structures that amplify transitory political leverage or capital gains. Congressional guardrails for AI have been proposed with little progress. A former Secretary of the Air Force recommended legislating rules specifying acceptable uses. But regulation alone may invite future transgressions unless it is anchored in genuine democratic accountability.

Second, Anthropic’s stand against AI being used for mass domestic surveillance sounds good, but the United States already has a deep and largely normalized surveillance infrastructure. J. Edgar Hoover’s decades-long cottage industry of unwarranted surveillance, harassment and the Counterintelligence Program (COINTELPRO) eventually prompted governmental reforms, yet the underlying impulse never disappeared; it simply migrated into new technologies. Indeed, Beverly Gage’s analysis of Hoover in her history “G-Man: J. Edgar Hoover and the Making of the American Century” argues that his program largely enjoyed public assent.

Today, surveillance extends beyond the federal government and is embedded in routine commercial and public-sector transactions. Federal statistics from the 2020 Bureau of Justice Statistics show that body-worn cameras are used in all police departments in cities of more than 1 million people, and license plate readers (ALPRs) are deployed universally in cities of that size. As of 2020, 46% of those cities were also using facial recognition technology.

Sarah Brayne’s research in “Predict and Surveil” documented that predictive policing tools were used in 38% of all police departments, according to a national survey. The book cited a prediction that a majority of police departments would be using data to forecast crime by 2020, which proved too conservative. Data from the Bureau of Justice Statistics show that in cities with more than 1 million residents, about 77% use predictive policing tools and 92% use social network analysis in investigations. Only when communities have fewer than 250,000 residents does the use of predictive policing fall below 50%.

Federal agencies using data-based tools compound the picture. Immigration and Customs Enforcement (ICE) relies on products from Clearview AI and apps like Mobile Fortify and Mobile Identify for facial recognition, while the Department of Homeland Security uses Webloc to identify and track phones. Border Patrol agents often work with local law enforcement, leveraging local police capabilities to investigate people in ways normally off-limits to federal agents. My colleague Emily Tucker at Georgetown University’s Center on Privacy and Technology captures the cumulative effect when she observes, “All data are police data.” The risk of mission creep is acute when every level of law enforcement has access to advanced analytical tools originally designed for military or intelligence contexts.

It is also worth noting that surveillance is highly cultural, shaped by a society’s ideas about accountability and trust. Countries like South Korea and China deploy extensive camera networks within governance frameworks that rest on different assumptions about state authority. In the United States, the challenge is that surveillance technologies are expanding rapidly while the democratic norms meant to constrain them have not kept pace. A 2023 study from the Pew Research Center reports that 72% of US adults say they have “very little or no” understanding of the laws and regulations in place to protect their data privacy.

Third, we need to pay careful attention to the language framing political and social programs that justify intrusive surveillance. “Safety” and “emergency” are powerful, loaded words that can create a slippery slope toward reduced civil liberties. When Senator Russ Feingold cast the lone Senate dissenting vote against the Patriot Act in 2001, he argued the legislation gave too much power to the executive branch, sidestepped checks and balances, and eroded privacy rights, all in the name of responding to an emergency. In a 2021 essay in The Nation reflecting on two decades of the law’s consequences, Feingold asked how the country could rein in the power the executive branch had accumulated.

The Bush administration’s President’s Surveillance Program, which included warrantless wiretapping of communications of people believed to be connected to al-Qaeda, first revealed by The New York Times in 2005 and detailed further in Edward Snowden’s later disclosures, illustrates how emergency rhetoric can translate into unchecked authority. Historically, domestic antiterrorism powers have been disproportionately wielded against Black and Brown communities and political progressives, from the Red Scare of 1919–20, through Hoover’s FBI, to the targeting of Muslim communities during the War on Terror, as Feingold noted in a 2021 article in the Wall Street Journal.

I was thinking of the Minneapolis public in the streets earlier this year when I encountered philosopher Bonnie Honig’s observation in “Emergency Politics”:

“When we treat sovereignty as if it is top down and yet governable by norms we affirm, we help marginalize rather than empower important alternatives, such as forms of popular sovereignty in which action in concert rather than institutional governance is the mark of democratic power and legitimacy.”

Honig’s insight points to what is ultimately at stake in the Anthropic dispute and in the broader drift toward pervasive, AI-enabled surveillance. The question is not merely which institution—government or corporation—should hold determining power, but whether democratic publics can reclaim the authority to set the terms.

Promising alternatives are emerging. Bruce Schneier and Nathan Sanders have advocated for public-interest AI frameworks in their book “Rewiring Democracy.” The Mozilla Foundation and the Open Future initiative are building civic-minded infrastructure. Countries like Canada and Switzerland are experimenting with public AI models designed to serve the common good rather than private returns. The path forward lies not in trusting any single actor, whether state or corporate, but in building the democratic structures robust enough to govern the technologies that increasingly shape our lives.

Authors

Sharon Strover
Dr. Strover is the Philip G. Warner Regents Professor in Communication, former Chair of the Radio-TV-Film Department at the University of Texas, and now Professor in the School of Journalism and Media where she teaches communication technology and policy courses and co-directs the Technology and Inf...
