Perspective

States are the Stewards of the People’s Trust in AI

Trooper Sanders / Apr 6, 2026

Northwest view up to the pediment, rotunda, and dome of the California State Capitol in Sacramento. (Radomianin / Wikimedia Commons)

Artificial intelligence provides the means to create a new era of growth, innovation, and opportunity, but it faces major roadblocks: trust in AI is shaky and America’s approach to AI politics is broken.

A recent Fox News poll found that 66 percent of registered voters were concerned about AI. Pew Research Center polling found that only one in ten Americans feel they have a great deal of control over whether AI is used in their lives despite more than six in ten Americans wanting more control.

They aren’t wrong. The frontiers of AI capabilities are advancing rapidly, and AI’s role in our lives grows by the day. AI-powered entertainment and education products are increasingly part of our children's lives. AI agents are being tested in the office and on the factory floor. We encounter AI in banking, insurance, health care, and public safety. But, as discussed in detail in the 2026 International AI Safety Report, AI’s promise is tempered by immediate harms, especially to children, as well as near-term challenges to safety and social cohesion, and potential existential threats to humanity.

How we balance AI’s risks and rewards, and build defenses against threats while forging pathways to seize opportunities, should be worked out in the public square and in the marketplace. Unfortunately, constructive debate and spirited competition are being crowded out by unproductive assertions that government regulation is anathema to innovation and by attempts to shove states out of the AI policy arena. The commercial success of AI will not happen without earning the public’s trust. And the public will not trust AI until it has assurances that AI is safe and that those who develop and deploy it can be held to account. The best hope for progress on these matters lies with the states.

The role of states

States bear much of the responsibility for ensuring that the industries, innovations, products, and services touching our lives are safe and suitable for us. From early household chemicals to building codes to the electrical grid, state governments have been the first to build the standards and processes that ensure the benefits of innovation are widely distributed and the risks are contained. The Constitution vests states with broad powers to make policy in matters of public health and safety, making states the primary regulator of public health and health care, education, professional licensing, land use, insurance, and crime. In addition, states exercise considerable authority alongside the federal government in occupational safety, environmental protection, financial services, consumer protection, and food safety.

Practically speaking, from the food we eat and the bars and restaurants Americans trust, to the hospitals, child care centers, and nursing homes that care for us and our loved ones, to the workplaces where we earn a living and the banks and stores we rely on, state regulation and oversight quietly shape and safeguard everyday life. And when trouble brews—whether in sudden emergencies or through slowly growing concerns—it is to state and local officials that we turn to first for guidance, reassurance, action, and accountability.

Today, frontier AI companies are promising that their models will have a seismic impact across these and many other domains. They face considerable pressure to turn cutting-edge research and massive capital investment into products and services that generate sales and revenue for their companies and returns for investors. This creates tension between pursuing profits and investing in safety. That tension will play out in choices made across the AI stack but will be felt most acutely by the people and communities closest to the application layer.

Because AI adoption will occur in highly regulated industries and sectors, states have clear authority to steer how the technology is developed and deployed—and even more soft power to influence what the enterprise customers of frontier AI companies demand of their models. Seeking AI revenue in industrial automation? Twenty-two states are the primary regulators of occupational health and safety. Building AI revenue in health care? States administer Medicaid, which finances roughly one in five dollars of national health spending. In search of AI market share in K-12 schooling? State departments of education, along with local school boards and officials, are gatekeepers to potential customers.

Federal policy may affect the scope of these state powers, but it will not change the fundamentals. From AI development and deployment to data center infrastructure and energy generation and distribution, US states will always be necessary partners in AI governance. Indeed, states have the opportunity to extend what they do best across regulation, procurement, market-based incentives, and public participation to shape and support AI’s responsible evolution.

Whole-of-state influence

State governments can meet these responsibilities by building a nimble and adaptable whole-of-state approach to AI governance. This involves a blend of:

  • New and existing laws and related regulations;
  • Norms that, while not legally enforceable, wield considerable influence on behavior; and
  • Technical standards, both binding and voluntary.

Legislatures in progressive, moderate, and conservative states alike have taken up bills, and some have been enacted. New AI model transparency laws enacted in 2025 will concentrate the minds of AI labs and spur states to act between the flash of a troubling disclosure and the bang of real-world harm.

But even the best-crafted laws affecting AI face the challenge of timing against AI’s progress. First, the pacing problem: the law evolves and is implemented slowly while technology evolves fast and can quickly strip new regulations of relevance. Second, conclusive evidence that AI is causing harm may emerge too slowly for regulation to limit the effects of near and present dangers.

That’s why we must also tap non-legislative tools. Commercial interests pay close attention to agency guidance, agency heads’ strategic plans, and signals from public speeches, statements, and private correspondence. Governors can use their platform—through executive orders, recognition of good practice, and state of the state addresses—to elevate the political significance of AI safety. California Governor Gavin Newsom, for example, recently issued an executive order enabling the state to better assess AI companies seeking to do business with the state on the quality of their efforts to tackle exploitation, the distribution of illegal content, bias, and violations of civil rights and free speech.

State attorneys general, insurance commissioners, and other agency leaders can also tap their oversight, enforcement, and convening powers to focus the minds of industry on the public’s concerns.

Existing regulatory regimes covering industries deploying AI can be leveraged to influence AI product safety and model governance. Each state’s considerable spending and investments are also powerful levers for influencing commercial practice. And interstate compacts and coordinated efforts between governors that aggregate the influence of many states can create streamlined, de facto national policy.

In many AI domains, states share regulatory power with the federal government. We should strive for high-performing AI federalism and a productive partnership between the two levels of government. Sound bipartisan federal AI legislation is a start, but the relationship should also include federal flexibility for states, such as through waivers, and AI safety data sharing with states to spur policy and business practice innovation.

To be clear, building and implementing a whole-of-state approach to AI policy is an opportunity to complement, not replace, the role of state legislatures and the hard power of law. A blend of hard and soft power creates more opportunities for states to be effective and efficient stewards of the public interest by using the right tool—be it a policy hammer, chisel, scalpel, or none at all, as the situation requires.

The whole-of-state approach to AI also enables policymakers to adapt rapidly as we transition from current generative AI and early agents to increasingly powerful, and potentially self-improving, AI systems and artificial general intelligence. This is especially important given the dynamic nature of AI’s quirks, blind spots, and vulnerabilities, such as hallucinations, scheming, and potential loss of human control. Commercial actors that prove unmoved or untrustworthy after one policy intervention create proof points for a heavier hand.

But the goal should be neither a heavy hand nor a light touch. It should be an all-hands approach, with federal and state governments, business, and the general public bringing their best to the table to ensure we get the best from AI.

Authors

Trooper Sanders
Trooper Sanders is president of the State AI Safety Roundtable, a nonprofit supporting civil society, government leaders, and allies in advancing AI safety at the state level. Trooper has worked on AI for nearly a decade and has a career spanning business, government, and the nonprofit sector.
