White House National AI Policy Framework preempting state regulations
TECH

40+ Signals

Strategic Overview

  • 01.
    The Trump administration released a four-page National Policy Framework for Artificial Intelligence on March 20, 2026, calling on Congress to establish uniform federal AI governance across seven pillars including innovation, child safety, intellectual property, and free speech.
  • 02.
    The framework's centerpiece recommendation urges Congress to preempt state AI laws that 'impose undue burdens,' replacing what the administration characterizes as 'fifty discordant' regulatory regimes with a single federal standard.
  • 03.
    The framework limits AI developer liability by advising Congress to prevent states from penalizing developers for third-party misuse of their models and to avoid setting 'open-ended liability' standards, while shifting primary child safety responsibility to parents rather than platforms.
  • 04.
    The proposal faces significant political headwinds: over 50 Republican members of Congress have signed a letter opposing federal AI preemption, and previous legislative attempts to include preemption language have twice failed in Congress.

Deep Analysis

Why This Matters

The White House AI framework represents the most consequential attempt to centralize artificial intelligence governance in the United States. At its core, the policy battle is about who gets to regulate AI: the federal government with a single, industry-friendly standard, or individual states that have been moving more aggressively to impose accountability requirements on AI developers. The framework's push for preemption would effectively nullify laws like California's SB 53 and New York's RAISE Act, which took effect just weeks ago on January 1, 2026.

The motivations behind the framework are layered. The tech industry, led by venture capital firms like Andreessen Horowitz, has lobbied intensely for a unified federal approach. Bloomberg's February 2026 investigation revealed a16z as the 'hidden hand' shaping AI policy, with the firm regularly receiving the first call from White House officials on AI matters. For the industry, a patchwork of 50 different state regulatory regimes represents not just compliance costs but existential uncertainty for AI business models that depend on training with copyrighted data and deploying models at scale. The administration frames the urgency in geopolitical terms: House leadership explicitly cited the need to 'beat China in the global AI race' as justification, making deregulation a matter of national competitiveness.

Yet the framework also reveals a deeper ideological commitment. By declining to create any new regulatory body and instead relying on existing agencies for sector-specific oversight, the administration signals a belief that AI does not require fundamentally new governance structures. This stands in sharp contrast to the EU's AI Act, which established dedicated institutional machinery for AI oversight.

How It Works

The framework is structured around seven pillars that together form a legislative blueprint for Congress. The preemption mechanism is the linchpin: the framework calls on Congress to pass federal legislation that would override state AI laws imposing 'undue burdens,' replacing them with a 'minimally burdensome national standard.' However, certain state powers are explicitly preserved, including enforcement of general laws to protect children and prevent fraud, data center zoning decisions, and state procurement of AI tools for law enforcement and education.

On developer liability, the framework takes a distinctly protective stance. It advises Congress to prevent states from 'penalizing AI developers for a third party's unlawful conduct involving their models' and to 'avoid setting ambiguous standards about permissible content, or open-ended liability.' This effectively creates a liability shield similar in philosophy to Section 230 of the Communications Decency Act, which protected internet platforms from liability for user-generated content. For child safety, the framework shifts responsibility to parents rather than platforms, proposing 'commercially reasonable, privacy protective, age assurance requirements' while mandating features to reduce exploitation and self-harm risks.

The intellectual property provisions are particularly notable. The administration takes the position that AI training on copyrighted material does not violate copyright laws, while simultaneously proposing enabling legislation for compensation negotiations between AI companies and rights holders. This attempts to thread the needle between the interests of AI developers who depend on large training datasets and content creators who argue their work is being used without consent. Additional provisions include streamlined permitting for data centers to generate on-site power, regulatory sandboxes for requesting federal rule exemptions, and expanded access to federal datasets for AI training.

By The Numbers

The data surrounding this framework illuminates both the scale of the policy challenge and the political obstacles ahead. According to a February 2026 YouGov poll, 63% of Americans believe AI will reduce job numbers, suggesting significant public anxiety about artificial intelligence that may not align with the framework's innovation-first approach. This public sentiment creates a challenging backdrop for legislators considering whether to support or oppose the framework.

On the political front, over 50 Republican members of Congress signed a March 2026 letter opposing federal AI preemption, a remarkable show of intraparty dissent. Federal AI preemption has already been attempted and failed twice: it was stripped from the GOP budget reconciliation bill and excluded from the defense policy bill. The framework itself runs just four pages across its seven pillars, a brevity that critics argue reflects a lack of regulatory specificity. The contrast between the document's slim length and the complexity of the AI governance challenge it attempts to address has been noted by legal analysts at major firms including Paul Hastings and Latham & Watkins, who assess significant legal and political hurdles ahead.

Social media engagement data provides another lens. Michael Kratsios's announcement thread on X garnered 2,300 likes and 860 retweets, while a Defense Now livestream on YouTube drew over 27,000 views, indicating substantial public interest. However, the most engaged-with content skewed toward government and industry voices rather than critical perspectives.

Impacts & What's Next

In the near term, the framework sets the stage for a major legislative battle in Congress. House Speaker Mike Johnson and Majority Leader Steve Scalise have pledged to work 'across the aisle' to enact the framework, but the opposition from over 50 Republican members makes passage far from certain. The framework's recommendations are non-binding, so without congressional action, state AI laws remain in full effect. Legal analysts expect protracted negotiations over the scope of preemption, particularly regarding which state laws qualify as imposing 'undue burdens.'

If enacted, the impacts would be sweeping. State laws like California's SB 53 and New York's RAISE Act, which mandate whistleblower protections and safety event reporting, could be overridden. The liability shield for AI developers would fundamentally reshape the legal landscape for AI harms, potentially making it significantly harder for individuals and communities affected by AI systems to seek redress. The shift of child safety responsibility to parents could weaken platform accountability requirements that child safety advocates have spent years establishing.

The regulatory sandbox provision could create a new dynamic where individual companies negotiate bespoke exemptions from federal rules, raising concerns about regulatory capture and uneven playing fields. Meanwhile, the intellectual property provisions will likely intensify ongoing legal battles over AI training data, as rights holders may view the administration's position as prejudging cases currently working through the courts. The broader global implications are also significant: a deregulatory US approach contrasts sharply with the EU's more prescriptive AI Act, potentially creating transatlantic friction and forum-shopping opportunities for AI companies.

The Bigger Picture

This framework crystallizes a fundamental tension in technology governance that has defined the past three decades: the choice between permissive federal standards that prioritize innovation and stronger state-level protections that prioritize accountability. The comparison to Section 230 is instructive and deliberate. Just as that 1996 law shielded internet platforms from liability for user content and enabled the rise of social media giants, this framework aims to create similar conditions for AI companies. The question is whether the analogy holds: the harms enabled by AI systems may prove qualitatively different from those of social media platforms.

The role of Andreessen Horowitz as a 'hidden hand' in shaping policy underscores a deeper structural dynamic. Venture capital firms that have invested billions in AI companies have a direct financial interest in minimizing regulatory friction. When the head of government affairs at a16z calls the framework 'a big step,' it reflects alignment between policy outcomes and investor interests. Meanwhile, the opposition from Americans for Responsible Innovation, backed by Anthropic, reveals that the AI industry itself is not monolithic on regulation. Some AI companies see thoughtful regulation as a competitive advantage and a way to build public trust.

The intraparty Republican opposition is perhaps the most telling signal. Over 50 GOP members opposing preemption suggests that federalism and state sovereignty concerns may ultimately prove more powerful than industry lobbying. If the framework fails legislatively, as previous preemption attempts have, the result will be an accelerating patchwork of state laws that the industry dreads. Either way, March 2026 marks a defining moment in the governance of artificial intelligence in the United States, one whose resolution will shape the trajectory of AI development, deployment, and accountability for years to come.

Historical Context

2025-07-23
The Trump administration released a sweeping AI Action Plan to boost US AI development by loosening regulations and expanding energy supply for data centers.
2025-12-11
President Trump signed an executive order directing development of a federal framework to preempt state AI laws and establishing a DOJ AI Litigation Task Force.
2026-01-01
New state AI laws took effect, including California's SB 53 and New York's RAISE Act, mandating whistleblower protections and safety event reporting for AI systems.
2026-02-10
Bloomberg reported Andreessen Horowitz as the 'hidden hand' steering AI policy in Washington, regularly serving as the first outside call on AI-related policy moves.
2026-03-20
The White House released the National Policy Framework for Artificial Intelligence, a four-page document with seven pillars urging Congress to pass federal AI legislation preempting state laws.

Power Map

Key Players
Subject

White House National AI Policy Framework preempting state regulations

David Sacks

White House Special Adviser for AI and Crypto who co-directed the framework's development. As the administration's AI Czar, Sacks serves as the primary interface between Silicon Valley and the White House on AI policy.

Andreessen Horowitz (a16z)

Major venture capital firm described as the 'hidden hand' steering Trump's AI policy. Bloomberg reported a16z is regularly the first outside call White House officials make on AI-related policy, making the firm a key force behind the deregulatory approach.

Michael Kratsios

White House OSTP Director who co-led the framework's development per Trump's December 2025 executive order. Publicly promoted the framework's seven pillars on social media, generating the highest-engagement policy thread on X.

Americans for Responsible Innovation (ARI)

Anthropic-backed advocacy group leading opposition to the framework, arguing it shields AI developers from accountability and strips states of their ability to protect consumers from AI harms.

Over 50 Republican members of Congress

Intraparty coalition of GOP lawmakers who signed a March 2026 letter opposing AI preemption, arguing that efforts to halt state AI legislation prevent accountability and erode federalism.

House Speaker Mike Johnson and Majority Leader Steve Scalise

Republican congressional leadership who pledged to work 'across the aisle to enact a national framework,' positioning themselves as the legislative vehicle for the administration's AI agenda.

Analysts

Called the framework 'exactly the kind of AI agenda Congress should have been pursuing all along: one that addresses legitimate public concerns without smothering innovation.'

Daniel Castro
Director, Center for Data Innovation

Warned the framework offers 'another chance for tech companies to launch harmful products with no accountability,' highlighting the tension between innovation-friendly policy and consumer protection.

Brad Carson
President, Americans for Responsible Innovation

Argued the Trump administration 'understands that a light-touch regulatory environment, not 50 different regulatory regimes, enabled the internet revolution,' drawing a direct analogy to Section 230-era internet governance.

Patrick Hedger
Director of Policy, NetChoice

Assessed significant legal and political challenges ahead for implementation, noting that while Congress has the constitutional authority to preempt state AI laws, it has thus far declined to do so in two separate legislative attempts.

Paul Hastings and Latham & Watkins (legal analysts)
Major law firms

Called the framework 'a big step' for AI policy, reflecting the venture capital industry's strong support for federal preemption of state-level AI regulations.

Collin McCune
Head of Government Affairs, Andreessen Horowitz

The Crowd

"Today, the @WhiteHouse released a commonsense National AI Policy Framework that ensures every American benefits from AI. As @POTUS has said — we need one federal AI policy, not a 50 state patchwork. This gets us there. Eager to work with Congress on this important legislation."

@mkratsios47

"White House unveils its first national AI framework, pushes Congress to act this year"

@PressSec47

"Here are the most pressing topics in AI policy the National Framework addresses: 1. Protecting Children and Empowering Parents: Many Americans are concerned about children interacting with AI. Congress should require age-assurance tools and ensure AI platforms give parents control."

@mkratsios47

Broadcast
White House unveils AI framework for Congress

LIVE: President Trump Signs Executive Order to Control ARTIFICIAL INTELLIGENCE Policy Nationwide

Scaling Laws: Rapid Response on the AI Preemption Executive Order