April 2026 US Tech Policy Roundup
Rachel Lau, Shirley Frame, Ben Lennett / May 1, 2026

Rachel Lau and Shirley Frame work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

House Energy and Commerce Committee Chair Brett Guthrie (R-Ky.) presides over a markup on Capitol Hill on March 5, 2026. (Francis Chung/POLITICO via AP Images)
April’s US tech policy landscape was shaped by a renewed push in Congress to establish a national data privacy framework and to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA). House Republicans introduced two complementary data privacy bills aimed at creating a single federal standard to govern how companies collect, share and sell Americans’ personal data. The proposals, backed by key Republican committee leaders, were framed as a unified approach to replace the growing patchwork of state laws, drawing strong support from industry groups while prompting criticism from lawmakers, regulators and civil society organizations concerned about federal preemption and limits on private enforcement.
Alongside this, the executive branch advanced a series of AI-related initiatives and policy discussions that highlighted the administration’s priorities around national security, procurement and technological leadership. The Trump administration’s fiscal year 2027 budget proposed increasing AI-focused defense spending and cutting funding for agencies such as the National Science Foundation (NSF) and the Cybersecurity and Infrastructure Security Agency (CISA). Federal agencies also moved forward with new AI procurement and governance measures, including a draft General Services Administration (GSA) contract clause that would impose requirements on contractors related to data ownership, licensing and the use of “American AI systems.” These efforts unfolded alongside continued debate in Congress and the courts on surveillance and data collection practices.
Read on to learn more about April developments in US tech policy.
House Republicans’ privacy plan draws industry praise, criticism from state lawmakers and civil society
Summary
In April, House Republicans introduced a pair of data privacy bills designed to establish a single national standard governing how companies collect, share and sell Americans’ personal data. Rep. John Joyce (R-PA) and eight Republican co-sponsors introduced the Securing and Establishing Consumer Uniform Rights and Enforcement over Data Act (SECURE Data Act, H.R. 8413) to establish a national framework for consumer protections. Rep. Bill Huizenga (R-MI) and three colleagues introduced the Guidelines for Use, Access, and Responsible Disclosure of Financial Data Act (GUARD Financial Data Act, H.R. 8398) to address data protection within financial institutions.
The bills are intended as complementary legislation, jointly backed by House Energy and Commerce Chair Brett Guthrie (R-KY) and House Financial Services Chair French Hill (R-AR). The SECURE Data Act includes several provisions that consumer advocates and Democrats have sought for years, including data minimization requirements, a data broker registry, and consumer rights to access and delete personal information. However, it limits enforcement to government officials, including the Federal Trade Commission and state attorneys general, and does not provide a private right of action, preventing individuals from suing companies directly. The GUARD Financial Data Act includes similar provisions specifically for financial institutions with the added requirement that institutions obtain affirmative opt-in consent before disclosing sensitive personal information. Crucially, both bills would preempt existing state data privacy laws, replacing them with a single federal standard.
The bills received broad support from industry groups. So far, at least twenty states have enacted comprehensive data privacy laws, and business groups have argued the resulting patchwork of regulations creates compliance burdens that particularly disadvantage small companies. In a statement provided to Politico, major industry groups, including the US Chamber of Commerce, TechNet, NetChoice, the Chamber of Progress, the National Retail Federation, and the Interactive Advertising Bureau, expressed support for the legislation, arguing that a single national framework would “give businesses the certainty needed to innovate, protect data, and drive growth.”
Consumer advocates and civil society organizations pushed back, arguing the legislation would strip away stronger protections Americans already have under state law and give companies broad discretion over data practices. Eric Null, director of the Privacy & Data Project at the Center for Democracy & Technology (CDT), wrote in Tech Policy Press that the bill was modeled on the most industry-friendly state laws and that its data minimization provision is effectively toothless — limiting data collection only to purposes a company discloses in its privacy policy, which is a standard already required under existing law. Alejandra Montoya-Boyer, vice president of The Leadership Conference’s Center for Civil Rights and Technology, warned that despite surface-level civil rights provisions, the bill’s preemption of stronger state laws would block states from preventing data-driven discrimination, including through algorithms used in hiring and tenant screening.
State legislators and privacy regulators were also critical. The California Privacy Protection Agency (CPPA) opposed the bill over the preemption provisions, with Executive Director Tom Kemp calling it “not a real step forward for privacy” that would significantly reduce protections for tens of millions of Americans. Vermont state Rep. Monique Priestley argued that Congress should be building on the work states have already done rather than constraining it, calling on lawmakers not to “undermine stronger state protections.”
Congressional Democrats also expressed opposition to a national standard that is weaker than current state laws. House Energy and Commerce Committee ranking member Frank Pallone (D-NJ) said Republicans had “lost the plot on efforts to pass a strong national privacy bill” and criticized the legislation as subordinating consumer protection to the interests of Big Tech. Despite the cold reception, Rep. Joyce expressed optimism at an industry fireside chat that Democrats would come around, calling on industry leaders to engage with lawmakers on the legislation.
What We’re Reading
- Eric Null, “Congress's New Privacy Bill Is Built on Empty Promises,” Tech Policy Press.
- Justin Hendrix, “Unpacking the SECURE Data Act,” Tech Policy Press.
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the White House, agencies, Congress, civil society, industry, and courts.
In the White House:
- Congress passed a 45-day extension of Section 702 of FISA, the second short-term extension lawmakers have passed this month after failing to reach broader consensus on reforms. Earlier in the month, the House passed a three-year reauthorization of Section 702 by a vote of 235-191. The bill did not include a warrant requirement for law enforcement agencies searching Americans' data collected through the program, a long-sought reform backed by a bipartisan coalition of privacy proponents in both chambers. It included modest civil liberties provisions and, to win over conservative holdouts, House Republican leaders attached an unrelated provision banning the Federal Reserve from issuing a central bank digital currency. The Senate countered with the 45-day extension, giving lawmakers another six weeks to reach consensus after an initial 30-day extension earlier in the month.
- President Trump’s fiscal year 2027 budget repurposed $1.2 billion from Biden-era unobligated infrastructure funds to create a new Office of Artificial Intelligence and Quantum within the Department of Energy (DOE) to support AI supercomputers and to support DOE’s “Genesis Mission” to accelerate AI-driven research. The budget also proposed steep cuts to the NSF and CISA, while raising the defense topline to $1.5 trillion (a 42 percent increase) and prioritizing AI within defense systems.
In the agencies:
- The GSA released a draft contract clause imposing broad new AI procurement requirements on federal contractors, including an irrevocable license for “any lawful Government purpose,” full government data ownership, mandatory use of “American AI Systems,” and requirements for “neutral, nonpartisan” outputs. The Information Technology Industry Council called the American AI mandate “overly broad, ill-defined, and unlikely to be technically feasible.”
- US Immigration and Customs Enforcement (ICE) Acting Director Todd Lyons confirmed in a letter to Democratic congressmembers that the agency purchased and deployed Paragon Solutions’ Graphite spyware for counterterrorism and drug trafficking investigations. ICE signed the original $2 million contract in 2024; the Biden administration paused it for compliance review under a 2023 commercial spyware executive order, and the Trump administration reactivated the contract in fall 2025. The letter is ICE's first public acknowledgement of active deployment. Lawmakers and civil liberties groups, including Rep. Summer Lee (D-PA) and the Electronic Frontier Foundation, raised concerns about potential domestic surveillance and misuse.
- The National Institute of Standards and Technology (NIST) announced a narrowing of its priorities for the National Vulnerability Database (NVD). NIST will focus on vulnerabilities listed in CISA’s exploited vulnerabilities catalog, critical software, and software in use by the federal government. Other vulnerabilities will remain listed but will no longer receive full analysis, potentially increasing reliance on private-sector and third-party analysis for vulnerability data.
- The Office of Personnel Management published a brief notice outlining plans to collect detailed health data from federal employees enrolled in government health programs. In response, 10 House Democrats and 16 Senate Democrats sent two separate letters to the agency, urging a halt to the plans, citing Health Insurance Portability and Accountability Act of 1996 (HIPAA) and security concerns. The letters warned the data could be used to target employees who have accessed contraceptives, PrEP, abortion care, gender-affirming care, or IVF, while also suggesting that data centralization “would significantly heighten the risk of misuse, unauthorized disclosure, or exploitation by bad actors.”
In Congress:
- House Select Committee on China Chairman John Moolenaar (R-MI) and House Homeland Security Chairman Andrew Garbarino (R-NY) launched a joint investigation into national security and cybersecurity risks posed by the adoption of Chinese-developed AI models by US companies. The chairmen sent letters to Airbnb and Anysphere (maker of the AI coding tool Cursor) requesting details on their use of Chinese AI, their rationale for those choices, and their communications with Chinese model providers. The probe followed an Office of Science and Technology Policy (OSTP) memo published earlier in the month criticizing Chinese AI distillation operations, in which Chinese AI companies use US frontier models to train their own models, allowing them to undercut American AI companies.
In industry:
- Bloomberg reported that OpenAI, Anthropic and Alphabet’s Google are sharing information through the Frontier Model Forum to detect and counter Chinese competitors’ attempts to train their models on frontier US AI models’ outputs. The effort began after Anthropic disclosed in February that it had identified three industrial-scale distillation campaigns by China-based AI labs seeking to replicate Claude’s capabilities.
- Major AI companies intensified their campaigns and policy engagement to influence the direction of AI governance. Anthropic filed with the Federal Election Commission to form AnthroPAC, an employee-funded PAC that will donate to candidates in both parties, joining similar entities operated by Google, Microsoft, Amazon, and Meta. OpenAI published a blueprint for a new “social contract” that included proposals for a Public Wealth Fund seeded by AI companies, grid infrastructure investments to support AI power demand, and new oversight bodies to address risks, among other proposals. Google endorsed over a dozen bipartisan bills aimed at assessing AI’s economic impact, equipping workers with AI skills and encouraging adoption, including the Economy of the Future Commission Act (S. 4046), the AI Workforce Training Act (H.R. 7576) and the AI Workforce PREPARE Act (S. 3339).
- Anthropic briefed senior Trump administration officials on its new frontier model, Claude Mythos Preview, which the company has not publicly released because it autonomously discovered thousands of previously unknown vulnerabilities in every major operating system and web browser. The briefing came amid ongoing legal disputes over the Department of Defense's (DOD) designation of the company as a “supply-chain risk,” with Anthropic co-founder Jack Clark stating that “the government has to know about this stuff.” Rather than releasing the model publicly, Anthropic launched a restricted access program, Project Glasswing, for select partners including AWS, Apple, Google and Microsoft. Concurrently, OpenAI released GPT-5.4-Cyber, a model with reportedly similar defensive capabilities, to a limited group of vetted partners.
- Google amended its $200 million contract signed last year with the DOD to allow its Gemini AI models to operate on classified networks for “any lawful governmental purpose.” The deal followed similar classified AI agreements the Pentagon reached in March with OpenAI and xAI, as Defense Secretary Pete Hegseth pushed to broadly integrate AI across the military. Ahead of the amendment announcement, more than 600 employees in Google’s AI and Cloud divisions signed a letter urging CEO Sundar Pichai to reject the use of the models on classified workloads, warning it could enable lethal autonomous weapons and mass surveillance that Google cannot monitor.
In the courts:
- The Supreme Court heard arguments in Chatrie v. United States, examining whether law enforcement's use of geofence warrants violates the Fourth Amendment’s prohibition on unreasonable searches. Geofence warrants compel companies to surface data on every device in a specific geographic area during a set time frame, allowing law enforcement to collect and examine data that may or may not be related to a crime without first establishing probable cause. The case arose from a 2019 Virginia credit union robbery in which investigators used such a warrant to identify the defendant. The nine justices appeared divided over whether and how warrants should be required for such searches. A final decision is expected in June.
- The US Court of Appeals for the DC Circuit denied Anthropic's request to temporarily block the DOD's supply chain risk designation, allowing the ban to remain in effect while the case proceeds. The decision conflicted with a parallel March federal ruling in California that barred enforcement of the designation and the directive to halt federal use of Anthropic systems, which the Trump administration has since appealed. The GSA restored Anthropic to federal procurement platforms in compliance with the California injunction, leaving Anthropic excluded from DOD contracts but able to work with other federal agencies.
- The NAACP filed a federal lawsuit against xAI and its subsidiary MZX Tech, alleging the company violated the Clean Air Act by operating 27 unpermitted methane gas turbines to power its Colossus 2 data center, used to run the chatbot Grok. The lawsuit argues that the turbines sit near majority-Black neighborhoods and could make the facility the largest industrial source of smog-forming nitrogen oxides in the Memphis metropolitan area.
- The Massachusetts Supreme Judicial Court ruled that Meta must face a lawsuit by Attorney General Andrea Joy Campbell alleging the company designed Instagram to addict children—the first ruling by a state high court on whether Section 230 of the Communications Decency Act shields platforms from design-based claims. The unanimous court held that Meta did not show it was “entitled to the protection” provided by Section 230 since the suit challenges how Meta built the platform rather than content posted by users.
- A federal magistrate judge stayed enforcement of Colorado’s SB 24-205 after the Department of Justice (DOJ), xAI, and the Colorado attorney general filed a joint motion to suspend deadlines and hearings in the case, given the possibility that state legislators will amend the law before it takes effect. The stay followed the DOJ’s motion earlier in the week to join xAI’s lawsuit seeking to block the law, the first time the DOJ has joined a case challenging a state AI law.
- American Oversight filed suit against five federal agencies — the Centers for Disease Control and Prevention (CDC), Department of Homeland Security (DHS), ICE, Internal Revenue Service (IRS), and Social Security Administration (SSA) — seeking records on their use of data analytics tools developed by Palantir, arguing that the agencies failed to fully respond to Freedom of Information Act (FOIA) requests. Public records revealed the IRS alone has paid Palantir $130 million since 2018 for financial crimes investigations.
Legislation Updates
The following bills made progress across the Senate and House in April:
- No Fentanyl on Social Media Act – S. 3618. Introduced by Sen. Jon Husted (R-OH), the bill was ordered to be reported with amendments favorably by the Senate Committee on Commerce, Science, and Transportation.
- Stop the Scroll Act – S. 1885. Introduced by Sen. Katie Britt (R-AL), the bill was ordered to be reported with an amendment in the nature of a substitute favorably by the Senate Committee on Commerce, Science, and Transportation.
- Semiconductor Controls Effectiveness Act of 2026 – H.R. 8287. Introduced by Rep. Greg Stanton (D-AZ), the bill was ordered to be reported as amended by the House Armed Services Committee by a vote of 43-0.
- MATCH Act – H.R. 8170. Introduced by Rep. Michael Baumgartner (R-WA), the bill was ordered to be reported as amended by the Committee on Foreign Affairs by a vote of 36-8.
- Full AI Stack Export Promotion Act – H.R. 6996. Introduced by Rep. Randy Fine (R-FL), the bill was ordered to be reported by the Committee on Foreign Affairs by a vote of 37-7.
- Stop Stealing our Chips Act – H.R. 6322. Introduced by Rep. Thomas Kean (R-NJ), the bill was ordered to be reported as amended by the Committee on Foreign Affairs by a vote of 43-1.
- STRIDE Act – H.R. 6058. Introduced by Rep. Bill Huizenga (R-MI), the bill was ordered to be reported as amended by the Committee on Foreign Affairs by a vote of 44-0.
The following bills were introduced in both the Senate and House in April:
- The PRICE Act – S. 4401 / H.R. 8510. Introduced by Sen. Ben Ray Lujan (D-NM) and Rep. Daniel Goldman (D-NY), the bill would “require delivery apps to show consumers the total running ‘all-in’ price of their order as items are added to their carts. It also requires that the apps include, prior to checkout, a clear breakdown that prominently displays the ongoing total purchase amount, along with an explanation of each fee.”
- Economy of the Future Commission Act of 2026 – S. 4046 / H.R. 8345. Introduced by Sen. Mark Warner (D-VA) and Rep. Jay Obernolte (R-CA), the bill would “establish a bipartisan, bicameral commission to study how AI is transforming the American economy and develop consensus-driven policy recommendations for Congress. The Commission will evaluate workforce development, education systems, federal AI adoption, and strategies to strengthen U.S. competitiveness in emerging technologies.”
The following bills were introduced in the Senate or the House in April:
- Multilateral Alignment of Technology Controls on Hardware (MATCH) Act – S. 4281. Introduced by Sen. Pete Ricketts (R-NE), the bill would “provide for export restrictions on certain semiconductor manufacturing equipment and components therefor.”
- Creating Resources for Every American to Experiment with Artificial Intelligence Act (CREATE AI Act) – S. 4441. Introduced by Sen. Todd Young (R-IN), the bill would “establish the National Artificial Intelligence Research Resource (NAIRR), a shared national research infrastructure to connect American researchers and educators to data, software, and tools necessary to advance AI research and development (R&D) and develop AI skills for the U.S. workforce.”
- Literacy in Future Technologies (LIFT) Artificial Intelligence Act – S. 4414. Introduced by Sen. Adam Schiff (D-CA), the bill would “improve educational efforts related to artificial intelligence literacy at the elementary school and secondary school level, and for other purposes.”
- The American Leadership in AI Act – H.R. 8516. Introduced by Rep. Ted Lieu (D-CA), the bill would “strengthen U.S. leadership in AI by improving AI standards and evaluation, expanding research infrastructure and R&D, modernizing federal AI adoption and risk management, supporting workers and small businesses, addressing AI-enabled crimes, and expanding AI education and workforce opportunities.”
- AI Data Center Site Selection Transparency Act – H.R. 8488. Introduced by Rep. LaMonica McIver (D-NJ), the bill would “ensure communities are informed, before deals are finalized, of plans to build AI data centers in their neighborhoods, providing the opportunity for grassroots input.”
- Protecting Consumers From Deceptive AI Act – H.R. 8479. Introduced by Rep. Valerie Foushee (D-NC), the bill would “establish technical standards and guidelines for generative AI content and ensure that the use of this technology is disclosed when it is used to create or modify audio and visual content.”
- Surveillance Accountability Act – H.R. 8470. Introduced by Rep. Thomas Massie (R-KY), the bill would “require government-initiated searches be conducted with a warrant based on probable cause as required by the Fourth Amendment to the United States Constitution. In addition, the legislation creates a private cause of action that allows individuals whose Fourth Amendment protections are violated by government employees to sue for damages.”
- SECURE Data Act – H.R. 8413. Introduced by Rep. John Joyce (R-PA), the bill would “establish a national framework for consumer privacy rights and the protection of personal data.”
- GUARD Financial Data Act – H.R. 8398. Introduced by Rep. Bill Huizenga (R-MI), the bill would “make improvements to Title V of the Gramm-Leach-Bliley Act.”
- AI Children's Toy Safety Act – H.R. 8382. Introduced by Rep. Blake Moore (R-UT), the bill would “ban the manufacturing, importation, sale, or distribution of any children's toy or childcare article that incorporates an artificial intelligence chatbot in the United States.”
- SOUL Act of 2026 – H.R. 8323. Introduced by Rep. Andy Biggs (R-AZ), the bill would “amend title 17, United States Code, to establish sovereign ownership rights in unique likeness for U.S. citizens, to protect against unauthorized digital replications and abuses.”
- Parents Decide Act – H.R. 8250. Introduced by Rep. Josh Gottheimer (D-NJ), the bill would “establish age-verification requirements for providers of operating systems, which includes software that supports the basic functions of a computer, mobile device, or other general purpose computing device.”
- SCALE Act – H.R. 8306. Introduced by Rep. John Moolenaar (R-MI), the bill would require the Secretary of Commerce, in coordination with the Director of National Intelligence, to implement a process for establishing a rolling annual standard for the sale of certain integrated circuits to certain countries.
We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.