xAI’s Challenge to California’s AI Training Data Transparency Law (AB2013)
Summary
- California’s Generative Artificial Intelligence Training Data Transparency Act (AB2013) requires developers of generative AI systems made available in California to publish high-level summaries of the data used to train those systems starting on January 1, 2026.
- xAI, the developer of Grok, has filed a federal lawsuit seeking to block the statute, arguing that it compels disclosure of trade secrets in violation of the Fifth Amendment and forces speech in violation of the First Amendment.
- Although AB2013 is modest on its face and has not yet been enforced, the lawsuit is worth following: the complaint previews constitutional arguments that could be raised against future state and federal AI transparency requirements.
- Notably, unlike xAI, other major AI developers, including OpenAI and Anthropic, haven’t sued and have already posted AB2013 disclosures on their websites.
What AB2013 Requires (and What It Does Not)
AB2013 applies broadly to developers who provide generative AI systems to Californians, whether offered for free or for compensation. Covered developers must post documentation on their website describing data used to train, test, validate, or fine-tune their models. The statute requires high-level disclosures, including:
- general sources and characteristics of training data,
- how datasets relate to the system’s intended purpose,
- approximate size of the data (expressed in ranges or estimates),
- whether the data includes copyrighted or licensed material,
- whether personal or aggregate consumer information is involved, and
- whether synthetic data are used.
These requirements apply to models released since January 1, 2022. The disclosures must also be updated for any new models or substantial modifications to existing models.
AB2013 includes exemptions for systems used solely for security and integrity, aircraft operation, or national security and defense purposes available only to federal entities. Critically, the statute does not specify how detailed the "high-level summary" it requires must be, and the Attorney General has not yet issued guidance or initiated enforcement. The statute includes no standalone enforcement provision. Enforcement would likely proceed through California's Unfair Competition Law, at the discretion of the Attorney General.
The Fifth Amendment Claim: Trade Secrets
The Fifth Amendment’s Takings Clause prohibits the government from taking private property without just compensation. xAI argues that information about its training datasets constitutes protected trade secrets and that AB2013 effects an unconstitutional taking by forcing public disclosure. The complaint advances both a per se takings theory and a regulatory takings theory, asserting interference with xAI’s reasonable investment-backed expectations.
The Supreme Court has recognized that trade secrets can be property for Takings Clause purposes. Whether a taking occurs turns on whether the trade secrets’ owner had a reasonable expectation of confidentiality based on the state of applicable laws and regulations at the time the information was developed. AB2013 applies retroactively to models released before the statute was enacted, which could strengthen a takings claim compared to a regime where transparency obligations were known in advance.
At the same time, xAI’s Fifth Amendment claim depends on whether AB2013 actually requires the disclosure of information that qualifies as a trade secret. Trade secret protection generally depends on whether the business or technical information at issue derives independent economic value from not being publicly known and is subject to reasonable efforts to maintain its secrecy. That inquiry is necessarily fact-specific, and it depends on what level of detail AB2013 ultimately requires developers to disclose.
That analysis is informed by how AB2013 is being implemented in practice. OpenAI and Anthropic have already posted AB2013 disclosures that appear to be high-level and general—OpenAI’s particularly so. If the California Attorney General takes the position, whether explicitly or implicitly, that those disclosures satisfy the statute, that would substantially weaken any claim that compliance necessarily requires revealing proprietary or economically valuable information. In that case, xAI would likely bear the burden of showing that its own disclosures would be materially different such that compliance would diminish the value of its trade secrets in a way not shared by its competitors.
These issues are not unique to AB2013. Other state and federal proposals, such as California’s SB53 and New York’s recently signed RAISE Act, also involve disclosure obligations that may call for sensitive commercial information. Unlike AB2013, which explicitly mandates public disclosure, other AI regulations may rely on disclosures directed to regulators rather than the public, combined with explicit limits on public release. For example, SB53 explicitly permits AI developers to redact trade secret information from public disclosures and excludes trade secret information from the government’s public reports based on submitted data. While those provisions don’t eliminate all trade secret concerns and may undercut some transparency objectives, they function as a safety valve that can also reduce exposure to trade secret takings claims.
Still, the underlying question—how to balance transparency objectives against trade secret protections—will keep coming up as state and federal AI laws and regulations continue to develop.
The First Amendment Claim: A Potentially Broader Challenge to Disclosure Mandates
The complaint also argues that AB2013 violates the First Amendment by compelling speech. Under the Supreme Court’s Zauderer doctrine, the government may generally require disclosure of “purely factual and non-controversial information” subject only to the deferential standard of rational basis review.
xAI first contends that AB2013 is a content-based regulation triggering heightened scrutiny, pointing to the statute’s exemptions. That argument appears weak: purpose-based exemptions for security and defense applications do not obviously constitute viewpoint or content discrimination. xAI also suggests that AB2013 was motivated, at least in part, by concerns about bias in AI systems and therefore implicates politically controversial issues. This theory draws on case law that treats certain mandated disclosures, such as those imposed on crisis pregnancy centers, as outside the category of “purely factual and non-controversial” speech. Notably, however, the statute itself does not require reporting on bias or anti-bias measures and instead focuses narrowly on the sources and technical characteristics of training data.
More broadly, xAI argues that the Supreme Court’s Zauderer doctrine should be narrowed so that it doesn’t apply to statutes like AB2013 at all. Specifically, xAI urges limiting that doctrine to disclosures aimed at preventing consumer deception in advertising, or, alternatively, speech that “proposes a commercial transaction.”
These arguments would, if accepted, call into question many proposed AI transparency requirements, including those in California’s SB 53 and New York’s recently signed RAISE Act. The same logic would extend beyond AI, potentially constraining disclosure requirements that are common across financial, environmental, and health and safety regulations. In fact, the Supreme Court recently declined to revisit the scope of disclosure doctrine in litigation over graphic cigarette warning requirements, leaving intact lower court decisions that upheld disclosure mandates on the ground that they were “purely factual and non-controversial,” while rejecting further limits on Zauderer.
Overall, xAI’s First Amendment theories rest on areas where First Amendment law is not fully settled—and appear aimed less at the district court than at the courts of appeals (or even, ultimately, the Supreme Court).
Bottom Line
xAI’s lawsuit raises constitutional arguments that are likely to recur as governments pursue AI transparency and oversight. That makes the case worth following regardless of the ultimate outcome. At the same time, xAI’s specific claims in this lawsuit face significant hurdles. The Fifth Amendment claim depends on whether AB2013 requires the disclosure of valuable trade secrets—but Anthropic and OpenAI have already published AB2013 disclosures without apparent difficulty. The First Amendment claim, meanwhile, seeks to narrow the government’s ability to mandate factual commercial disclosures. If accepted, xAI’s position would have implications well beyond this statute—potentially calling into question a range of recently enacted and proposed AI transparency regimes, as well as other regulations beyond AI. As such, even if AB2013 itself proves limited or short-lived, xAI’s lawsuit previews important legal issues that will shape future AI regulation.