Perspective

How the AI Framework Breaks Trump's Promise to Kids, Artists and Communities

Brad Carson / Apr 3, 2026

Brad Carson is president of Americans for Responsible Innovation (ARI) and a former congressman representing Oklahoma.

President Donald Trump arrives for the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., on July 23. (Official White House Photo by Joyce N. Boghosian)

In December, the Trump administration released an executive order focused on preempting state artificial intelligence laws that featured an important caveat: any such prohibition would “ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.” The pledge was a political concession to protect these groups after two previous attempts to override state AI laws foundered in Congress following severe backlash from children’s safety advocates, artists and creators, and state lawmakers.

But a promise on paper is only as strong as the policy behind it. With the March release of the Trump administration’s national AI policy framework, the public has the first opportunity to examine whether its pledge to protect kids, artists, and local communities holds water.

How does it measure up? Not well. Let’s take a closer look.

Eliminating protections for children

Nowhere are these shortcomings more apparent than in the framework’s perfunctory approach to protecting children online. The gaps fall into three clear categories: the absence of meaningful federal protections, the failure to preserve state laws that safeguard kids and a flawed understanding of how to protect children online in the AI era.

Over the past four years, children’s advocacy groups pushed for passage of the Kids Online Safety Act (KOSA), the legal heart of which is the “duty of care.” This provision would require Big Tech companies to take “reasonable measures” to mitigate harms to users on their platforms, such as sexual exploitation, the spread of self-harm content and addictive platform features. If platforms fail to take such actions, they may be held liable for user harms.

The duty of care is so critical that when KOSA finally received a markup in the House earlier this year, dozens of advocacy groups opposed that iteration of the legislation after the duty of care was stripped from the bill.

In its section on protecting children online, the framework not only omits an endorsement of the duty of care, it seemingly opposes any congressional action to advance the measure, stating: “Congress should avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation.”

It’s striking that a section framed around protecting kids instead focuses on which protections for kids Congress should not pass.

In addition to failing to offer meaningful federal protections, the framework would also fail current and future generations by tying the hands of states that are trying to respond to rapidly evolving harms. Across the country, lawmakers are grappling with issues ranging from dangerous chatbot interactions to the spread of harmful and manipulative content. Rather than supporting those efforts, the framework recommends Congress exempt only “generally applicable laws protecting children” from preemption, meaning laws specifically addressing AI harms to children would be preempted.

It shouldn’t come as a surprise that the administration would attempt to overrule these laws in its AI policy framework. Already this year, the White House has reportedly flexed its political muscle to resist kids' AI safety proposals in states from Utah to Florida. A legislative framework that outright bans such proposals is the next logical step.

This highlights a fundamental misunderstanding of what it will take to protect young people online in the AI era. By banning any laws protecting children outside of “generally applicable” statutes, the framework effectively bans any laws that regulate the development of AI models pre-deployment. It would allow states to continue to ban users from sharing child sexual abuse material (CSAM), but it would prohibit states from examining models before their deployment to ensure their training data did not include CSAM. It could allow states to require parents to monitor their children’s tech use, but barring unlikely congressional action it would effectively prohibit states from requiring that companies test that technology to ensure it is safe pre-release.

That’s the legal equivalent of allowing families to sue an airline after their loved one’s plane has crashed, but prohibiting the testing of planes before they take off. It’s a legal structure that not only fails to prevent harm, it ensures harm takes place before companies are incentivized to innovate safely.

Taken together, the framework’s lack of meaningful federal protections, its ban on state legislation protecting kids and its failure to acknowledge the importance of pre-deployment safeguards aren’t just a broken promise to exempt kids’ safety from AI preemption; they are a recipe for ensuring that children face new harms in the AI era.

Failing artists and creators

The framework also fails to protect artists and creators. It makes clear that copyrighted works should be broadly available for training AI systems, while leaving questions of consent and compensation to the courts. In theory, that might sound like a neutral position. In practice, it places the burden on creators to navigate a complex and uncertain legal landscape where, historically, outcomes have been inconsistent and often unfavorable.

We have already seen time and time again how this plays out: when new technologies emerge without clear rules, creators are often the last to benefit and the first to be displaced. In the early days of the internet, for example, writers and journalists saw their work widely distributed with little if any compensation.

By leaving protections for creators to the courts, the framework sets up a judicial battle with well-funded tech companies on one side and poorly compensated creators on the other. With limitless cash and massive financial incentives to access and train on copyrighted works, we can be sure that AI companies will test and exploit the legal system to access copyrighted artistry.

Leaving local communities vulnerable

Local communities are facing a different set of challenges. Across the nation, residents are raising concerns about rising electricity prices while utilities and grid operators brace for surging demand; some projections see data centers accounting for almost half of the growth in electricity demand by 2030. Rather than confronting this reality, the framework doubles down on the administration’s “Ratepayer Protection Pledge,” in which companies made non-binding commitments to pay for the cost to power their data centers. In doing so, it seeks to write into law policies the White House claims will protect households from higher costs.

But that assurance does not hold up. As energy experts have noted, the AI data center boom is driving increased demand for transmission lines, fuel, land and other grid infrastructure, costs that together are pushing electricity prices higher. Allowing companies to build or procure their own power addresses only a fraction of that demand and does little to offset the broader system costs that will ultimately be passed on to ratepayers. Even if fully implemented, the pledge would leave major cost pressures untouched and households still on the hook.

An abdication of responsibility

In the end, the only promise this AI framework truly delivers is preemption. It would tie the hands of lawmakers for decades, freezing in place a system that prevents federal oversight while AI technology continues to advance at a historic pace.

At a moment when the risks of AI are becoming clearer by the day, from harms to children to displacement of creators to rising costs for communities, the Trump administration’s framework falls far short of a serious effort to protect Americans. It is an abdication of responsibility, giving Big Tech a free pass to continue causing harm with no accountability.

Congress should reject this framework and do what this proposal fails to do: establish clear, enforceable guardrails that protect Americans, preserve the ability of states to act and ensure that innovation serves the public interest, not just the companies driving it.
