The White House has released its framework for national AI legislation, and the framing is masterful. The headline features are about protecting children from AI-generated harm and “boosting America’s AI industry.” Who could object to that? The answer, as always, is in the detail — and the detail reveals a document designed primarily to serve the interests of the companies building AI, not the public supposedly being protected.

The framework calls for “sharp limits on legal liability for developers” and seeks to pre-empt state laws that the administration says would “slow down the technology.” Read that again. The federal government is proposing to shield AI companies from lawsuits and override state-level regulation in a single document. This is not consumer protection. This is corporate protection with a child-safety sticker on the front.

The Liability Shield

The liability provisions are the most consequential element. Under the framework, AI developers would face significantly reduced exposure to lawsuits arising from the outputs of their systems. If an AI system generates harmful content, provides dangerous medical advice, or produces deepfakes used in fraud, its developer would enjoy protections that no other industry receives for its products.

Compare this to any other sector. Car manufacturers are liable for design defects. Pharmaceutical companies are liable for side effects they failed to disclose. Food producers are liable for contamination. But AI companies, under this framework, would operate in a regulatory environment where the harm caused by their products is largely their users’ problem, not theirs.

The State Pre-emption

The second major element is the pre-emption of state regulation. Several states — California, Colorado, Illinois, and others — have passed or are developing AI-specific legislation imposing disclosure requirements, bias auditing, and accountability measures. The White House framework would override these laws in favour of a lighter-touch federal standard.

This is a familiar playbook. When state-level regulation threatens corporate interests, the federal government steps in with weaker national standards that supersede the stronger local ones. It happened with banking regulation. It happened with telecommunications. And now it is happening with AI.

The Child Safety Fig Leaf

The child protection provisions are real but limited. The framework proposes age-verification requirements for AI-powered services targeting minors and restrictions on certain types of AI-generated content involving children. These are welcome measures. But they are also the least controversial elements of AI regulation — the kind of provisions that generate bipartisan support and favourable headlines while the real action happens elsewhere in the document.

The question Congress should be asking is simple: why does protecting children from AI require simultaneously shielding AI companies from legal liability? The two objectives are not only unrelated — they are contradictory. You do not incentivise safety by reducing the consequences of unsafe behaviour.

But then, this framework was never really about safety. It was about ensuring that the companies building the most powerful technology in human history can do so with minimal accountability. The child safety provisions are the wrapping paper. The liability shield is the gift.