AI Firms Can Limit Military Surveillance of Americans. What About Everyone Else?
Kristina Irion / Mar 11, 2026
Illustration of a button on a keyboard that reads "OpenAI." Rokas Tenys/Shutterstock
In recent days, a public dispute has laid bare tensions between AI companies and the US military over who decides how AI is used. The US Department of Defense cancelled its contract for Anthropic's AI model Claude and instead struck a deal with OpenAI. The break came after Anthropic CEO Dario Amodei refused to give the US military free rein over Claude's use, in particular its use for domestic surveillance and its integration with lethal autonomous weapons. OpenAI subsequently stepped in to provide an alternative system, but within days the company was scrambling to clarify that its contract would include safeguards limiting how its AI could be used, particularly in relation to surveillance of US citizens. Should this be a concern to the rest of us?
Clearly, AI merchants have not been debating moral scruples about their AI being used for mass surveillance, per se. Even Anthropic's reservation has been confined to AI-enabled domestic surveillance in the US. Besides, contractual safeguards will likely prove ineffective at preventing AI models from being used for mass surveillance. Only brakes hardwired into the AI model itself would offer a reliable deterrent; however, this would be bad for business. It is misplaced to hope that self-regulation by the very industry incurring massive debt to keep training ever larger AI models will restrain the US Defense Department.
The framing around limiting the US military's use of generative AI for domestic surveillance should unsettle the rest of the world. AI-enabled surveillance is no longer science fiction. Consider the news about the Israeli military's reliance on the Microsoft Azure platform to store and analyze millions of mobile phone calls from Palestinians in Gaza and the West Bank. The capability of generative AI exceeds that of any previous surveillance technology, and its scalability allows for truly large-scale intelligence operations. This raises the question of how the US Defense Department will decide against which foreign countries and populations to use its new AI-enhanced surveillance capability.
And here comes the déjà vu. In 2009, well before the ascent of generative AI, I published a viewpoint about international communications surveillance and the consequential distinction between citizens and non-citizens in US constitutional law. The Fourth Amendment to the US Constitution affords protection against government snooping on domestic communications. (Still, the late whistleblower Mark Klein, an AT&T technician, released evidence that the National Security Agency (NSA) and AT&T conspired to carry out mass domestic wiretapping.)
International communications surveillance, by contrast, is not limited by US constitutional law. In fact, US legal authorities, such as Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order (EO) 12333, authorize the interception of communications of non-US persons abroad. Back in 2009, I argued that if US law specifically allows international surveillance, then we can assume that US intelligence services also use that authority. This was amply confirmed in 2013 by the whistleblower Edward Snowden, a former NSA contractor, who revealed the PRISM program, the large-scale collection of user data from US internet companies.
Where constitutional protection is limited to domestic surveillance, the rest of the world becomes a legitimate target of AI-enabled mass surveillance. Neither national rules nor international law reins in the US military from doing so. The Trump administration has revoked the previous government's policy on AI safety and trustworthiness. The US is not a party to binding and enforceable international law commitments that would limit AI-enabled mass surveillance. The human right to privacy, enshrined in the non-binding Universal Declaration of Human Rights and in the International Covenant on Civil and Political Rights (ICCPR), has been sidelined by successive US administrations. Besides, most international agreements feature a national security exception, which creates a safe harbor for the military's use of AI.
Countries seem woefully unprepared to respond to AI-enabled mass surveillance by another state. Although the European Union’s General Data Protection Regulation (GDPR) governs cross-border data flows, the EU-US Transatlantic Data Privacy Framework fully enables commercial data flows to the US. Though Biden’s Executive Order “On Enhancing Safeguards For United States Signals Intelligence Activities” still stands, it is unclear how much protection it can afford under the current administration. It could well be that the Transatlantic Data Privacy Framework has already turned into a façade that the US intelligence community no longer abides by.
It would also be completely inadequate for states to negotiate similar agreements with AI merchants, as this would replicate the fault lines of the US constitutional approach. What the world needs are binding multilateral rules that mandate state parties to respect the rights to privacy and confidentiality of digital communications in their use of AI. The ubiquitous national security exception in international law should also be tethered to situations of national defense and exclude wars of aggression. In its talks with the US, for instance, the EU should insist on safeguards for EU citizens' data reciprocal to those afforded to US citizens. Moreover, the EU should recognize that for the US, national security is tethered to economic security, which has implications for the scope of largely self-judging national security exceptions.
Ultimately, it will be humans (and not some evil AI) who decide whether AI will be tasked with international mass surveillance. For now, the US Defense Department appears too preoccupied with fighting one battle after another to unleash AI-enabled mass surveillance on the rest of the world. But let's not discount what scaling in the context of AI really implies. What if the US military takes a similarly sweeping approach to AI-enabled surveillance as the NSA (previously) did with international signals intelligence? Whether encryption of digital communications holds up against AI is also debatable. At a time when the rules-based international legal order is disintegrating, the international community urgently needs multilateral guardrails to rule out AI-powered mass surveillance.