AI and the Dangerous Fiction of ‘All Lawful Use’
Dunstan Allison-Hope, Iain Levine / Apr 16, 2026
A woman stands behind a vehicle destroyed in a March 9 strike on a residential area of Tehran, Iran, photographed on April 9, 2026. (Photo by Morteza Nikoubazl/NurPhoto via AP)
Much has transpired since Anthropic and the United States Department of Defense went public in February with their dispute over whether the US government should be able to require AI companies to permit “all lawful use” of their technologies by the government. While the “all lawful use” framing may seem reasonable at first glance, its adoption as a universal principle for government use of AI risks widespread violations of international human rights law (IHRL) and international humanitarian law (IHL).
Anthropic and the DOD failed to reach an agreement over two particular questions: mass surveillance of US citizens and the development of lethal autonomous weapons operating without human control. The government designated Anthropic a “supply chain risk,” which the company is challenging in court.
In recent weeks, as the international community and human rights activists have reacted to the US and Israeli offensive against Iran and Lebanon, even more serious concerns have been raised about AI-supported human decision-making and its role in failures to protect civilians as defined by the laws of war.
Last month, the dispute expanded beyond military applications to US domestic uses of AI as well. As first reported by the Financial Times, the US General Services Administration drafted rules for civilian AI contracts that would also apply the “all lawful use” standard. These draft rules state that the supplier of AI systems must grant the government “an irrevocable, royalty-free, non-exclusive license to use the AI System…for any lawful Government purpose.”
This question—of whether AI companies should be required to permit “all lawful use” of their technology by governments—has consequences far beyond the dispute initiated by the current US administration. It addresses a core dilemma in the sale of AI technology to governments worldwide, not just in the US.
On its face, it might seem rational to conclude that “all lawful use” is an appropriate standard for private companies when making sales to the government, whether for military purposes or civilian ones. Governments have legal obligations to protect the rights of their citizens and exist to serve the public interest, a higher and more virtuous goal than delivering returns to shareholders. Surely, it should be legal standards enacted by governments that decide how products are used in the public interest, not shareholders' interests or the preferences of investors and billionaire tech founders.
But the idea that AI companies should be required to allow “all lawful use” of their technology falls short for at least three reasons.
First, national laws may not align with international human rights standards. The absence of a federal privacy law in the US and the flaws of Section 702 of the Foreign Intelligence Surveillance Act—which permits the collection of large volumes of US citizen data through warrantless surveillance of non-Americans outside the US—demonstrate how US laws can and do conflict with IHRL.
Second, governments can send clear signals that they do not intend to follow their own laws and take actions that make compliance with the law in practice much less likely. Many recent statements from the US President and Secretary of Defense show clear disregard for both national and international law, even as the dismantling of the Civilian Protection Center of Excellence and similar teams has greatly increased the risk of civilian casualties during armed conflict. Or consider the case of the Russian government, which has violated the laws of war in Ukraine on multiple occasions.
Third, the “all lawful use” standard does not scale globally. Requiring companies to permit “all lawful use” in the US weakens their ability to resist government demands in other countries where human rights and humanitarian law violations might be even more likely. While much of the global attention on the dangers and human rights risks of AI technologies in conflict is currently focused on the US and Israel given current events in Iran and Lebanon, the issue is broader, and there are many situations in which national laws are incompatible with human rights standards or where governments proudly flout international norms in pursuit of a nationalistic and autocratic agenda.
AI technologies are playing an ever more critical role in both military operations (as we have witnessed in Gaza, Ukraine, and Iran) and federal law enforcement. During the recent immigration crackdown in Minnesota, federal agents used AI and other digital technologies to track both undocumented migrants and protestors. The need to adopt clearly defined international human rights standards as the consistent and universal reference point for integrity is increasingly imperative.
Human rights serve as a universal standard for all peoples and nations, while IHRL and IHL outline the obligations of governments to act in ways that promote human rights and to refrain from acts that violate them. This offers a more durable, consistent, scalable, robust, and internationally recognized standard for guiding company decision-making than important but ultimately more malleable and contested concepts of ethics, democratic values, safety, and responsibility.
The question of how companies should embed respect for human rights into their decision-making has evolved significantly over the last two decades. The United Nations Guiding Principles on Business and Human Rights (UNGPs) apply to companies across all industries and state that they should “seek ways to honor the principles of internationally recognized human rights when faced with conflicting requirements.” When the local context makes it impossible to fully meet this responsibility, companies are expected to respect the principles of internationally recognized human rights to the greatest extent possible under the circumstances and show their efforts in this regard.
The UNGPs establish normative standards that can and do guide our thinking during times like this. They offer a more principled approach for companies—and a more desirable one for society overall—than the “all lawful use” standard. They recognize the reality we are becoming all too familiar with: that governments, and therefore their laws and enforcement, do not always serve the public interest.
It is right for governments to establish policies, laws, and regulations that constrain company action in the service of protecting and respecting human rights, but the reverse is also true. When governments fail to protect the rights of their citizens, companies have a responsibility, under international human rights standards, to respect the rights of those they impact, including by undertaking due diligence to assess whether the use of their products may result in adverse human rights impacts. Human rights due diligence must adapt to the pace of AI development, which will require shifting toward methods that are integrated directly into product design, development, and deployment. Where a risk of harm is reasonably foreseeable, companies should take action to avoid, prevent, or mitigate it.
However, it is also important to acknowledge that the role and responsibility of AI companies extends beyond a go/no-go decision on sales to governments, and beyond a collection of contractual commitments to responsible use as expressed in acceptable use policies (AUPs) and terms of service. It includes training, setting limits on what governments (and other users) can and cannot do with their products, and restricting products’ capacity to enable certain harms.
If governments and the UN are to ensure that AI systems do not harm civilians, their design, development, and deployment must accord with human rights norms, and human rights due diligence must be built into every stage of the process. “All lawful use” is too imprecise and too weak a standard—and too easily exploited to justify harmful or rights-infringing applications of AI in both military and civilian contexts.