Commentary | January 2026

xAI’s Trade Secrets Challenge and the Future of AI Transparency

Julius Hattingh

xAI is challenging a California state law that took effect at the beginning of this year, requiring xAI and other generative AI developers who provide services to Californians to publicly disclose certain high-level information about the data they use to train their AI models. 

According to its drafters, the law aims to increase transparency in AI companies’ training data practices, helping consumers and the broader public identify and mitigate potential risks and use cases associated with AI. Supporters of this law view it as an important step toward a more informed public. Detractors view it as innovation-stifling. Other developers, including Anthropic and OpenAI, have already released their training data summaries in compliance with the new law.

xAI challenges AB-2013 on the grounds that it would force it to disclose its proprietary trade secrets, thereby destroying their economic value, in violation of the Fifth Amendment Takings Clause. It also claims that the law constitutes compelled speech in violation of the First Amendment and is unconstitutionally vague because it does not provide sufficient detail on how to comply. In this note, I focus on the trade secrets claim.

At the core of this dispute lies a tension between the values of commercial secrecy and transparency. In other industries and contexts, this tension is a familiar one: a company develops commercially valuable information – a recipe, a special sauce, or a novel way to produce goods efficiently – that it wishes, for good reason, to keep secret from competitors; at the same time, consumers of that company’s goods or services wish to know, for good reason, the nature and risks of what they are consuming. Sometimes, there is an overlap between the secrets a company wishes to keep and the information the public wishes to know. When that happens, the law plays an important role in resolving that tension one way or the other. How it does so can be as much a political question as a legal one. 

This is what AB-2013 and xAI’s challenge to it is about. The AI industry is highly competitive, and companies have a legitimate interest in protecting any hard-won competitive edge that their secret methods provide. At the same time, the public has many unanswered questions about the nature of these services, which are increasingly embedded in their lives. There are weighty principles on either side. The outcome of this dispute could shape the legal treatment of these competing interests for years to come.

What does AB-2013 ask for?

Under section 3111(a) of AB-2013, developers must disclose a “high-level summary” of their training data, including:

  • The sources or owners of the datasets.
  • The purpose and methods of collecting, processing, and modifying datasets.
  • Descriptions of their data points, including what kinds of data are being used and the scale of the datasets. 
  • Whether personal information or aggregate consumer information is being used for training. 
  • Whether datasets include third-party intellectual property, such as copyrighted or patented materials. 
  • The date ranges of when the datasets were used.  

The law came into effect on January 1 of this year. It applies retroactively to the datasets of models released on or after January 1, 2022. 

There are a few bespoke exceptions for particular AI models, namely those used for security and integrity purposes, for the operation of aircraft, and for national security, military, or defense purposes. There is, however, no exception for information that constitutes a trade secret. 

What is a trade secret?

The crux of xAI’s position is that complying with AB-2013 would force it to reveal its trade secrets. 

Broadly, a trade secret is any information that a company has successfully kept secret from competitors, and that confers a competitive advantage because of its secrecy. In other words, it must both be a secret in fact and generate independent economic value as a result of that secrecy. Trade secrets receive protection under state and federal law, and since the US Supreme Court’s 1984 decision in Ruckelshaus v. Monsanto Co., they can constitute property protected by the Fifth Amendment’s Takings Clause. 

While in principle the definition of a trade secret is broad enough to encompass virtually any information that meets its criteria, it is easier to claim trade secrecy for specific ‘nuts and bolts’ information, such as particular manufacturing instructions or the specific recipe for a food product. This is because revealing those details directly enables competitors to replicate them. Conversely, claims for general and abstract information are harder to establish because they tend to give less away about a company’s internal strategies. This is relevant to AB-2013, since it requires only a “high-level summary” of the disclosure categories. 

Before applying this to xAI’s claim, it is important to note that regulations restricting the scope and protection of trade secrets are not necessarily unconstitutional. Constitutional doctrine balances trade secret protection against other interests, including the state’s inherent authority to regulate its marketplace by imposing conditions on companies that wish to participate in it. In some circumstances, disclosure of trade secrets may be one such condition.

Against this backdrop, xAI brings two trade secret challenges against AB-2013:

  • First, it claims that AB-2013 constitutes a “per se” taking, meaning that the government is outright appropriating its property without compensation. 
  • Second, it claims that AB-2013 constitutes a regulatory taking, meaning that the government is imposing unjustified conditions on its property that substantially undermine its economic value to xAI.

xAI’s first claim: AB-2013 is a per se taking

xAI’s per se takings challenge is its most aggressive and atypical. Traditionally, this type of claim applies to government actions that would assume control or possession of tangible property, for example, to build a road through a person’s land. A per se taking can also occur when regulations would totally prevent an owner from using their property.  

The court will need to consider, first, whether AB-2013 targets xAI’s proprietary trade secrets and, second, whether the law would appropriate or otherwise eviscerate xAI’s property interest in them. To my knowledge, no one has ever successfully argued a per se taking in the context of trade secrets, and there are good reasons to think xAI will not be the first.

(a) Does AB-2013 target xAI’s proprietary trade secrets?

xAI claims that through significant research and development, it has developed novel methods for using data to train its AI models, and that the secrecy of this information is paramount to its competitive advantage. It claims that its trade secret lies in the strategies and judgments xAI makes about which datasets to use and how to use them. To demonstrate the importance of secrecy, xAI cites various security protocols and confidentiality obligations it imposes internally to protect this information from getting out. 

Given that the information in question remains undisclosed, it is difficult to assess the value and status of the information that xAI is required to disclose. We can reasonably assume that xAI does indeed possess some genuinely valuable secrets about how to effectively and efficiently approach training data. Yet it is much less clear whether any such secrets are implicated by the high-level summary required by AB-2013. 

For example, suppose that xAI has developed a specific novel heuristic for curating and filtering datasets that allows it to achieve a particular capability more efficiently than publicly known methods. It could still disclose the more general fact that its datasets are curated and filtered, without jeopardizing the secrecy of that particular heuristic. Likewise, perhaps a specific method for allocating datasets between pre-training and post-training constitutes a trade secret. AB-2013 does not ask what the specific allocation method is. Indeed, if xAI’s disclosure were comparable in scope to those of OpenAI and Anthropic, it would be highly unusual for that degree of detail about a company to constitute a trade secret. 

Yet this is precisely what xAI must demonstrate. To constitute a per se taking, it is not enough that disclosure provides clues about underlying secrets or even that it partially reveals them. xAI must show a more direct connection between the disclosure categories and its trade secrets. 

(b) Does AB-2013 appropriate xAI’s proprietary trade secrets?

If the above analysis is correct, xAI will struggle at this second stage to show that disclosure would constitute a categorical appropriation or elimination of all economically beneficial value in the relevant property. If xAI lacks a discrete property interest in the disclosable information, it is hard to envision a court finding that AB-2013 would nevertheless indirectly appropriate some other property interest.

There are a few additional issues to mention. For one, unlike in a classic per se takings claim, here the claimed property would be extinguished by the law, rather than transferred to the control or possession of another entity. This is for the simple reason that, like ordinary secrets, a trade secret ceases to exist (ceases to be a secret) if it is publicly known. Since AB-2013 would destroy any trade secrecy in the disclosable information, the application of traditional takings analysis is a bit awkward.

Further, California can argue that AB-2013 is a conditional regulation: it requires disclosure only as a condition for developers operating in the California marketplace, and developers may choose whether to do so. This makes it seem less like an outright taking by the government and more like a quid pro quo that companies may choose to engage in voluntarily. 

However, this argument is considerably weaker with respect to AB-2013’s retroactive application to services provided since 2022, as companies affected by that clause cannot now choose to opt out. This raises a further question: whether these regulations were foreseeable, or whether xAI had a reasonable expectation that they would not be introduced. That question is central to the second claim advanced by xAI, and I will analyze it below. 

xAI’s second claim: AB-2013 is a regulatory taking

xAI’s second argument is more orthodox. xAI argues that, even if AB-2013 is not an outright appropriation of its trade secrets, it imposes regulations that so significantly interfere with them as to amount to a taking. This argument avoids some of the hurdles of the first: it does not require that AB-2013 completely eviscerate the claimed trade secret, and there are several precedents in which this argument has been successfully made. 

To determine the constitutionality of AB-2013, the court will balance the following factors established in Penn Central:

(a) The economic damage that xAI would suffer by complying with the law; 

(b) The character of the government action, including the public purpose that disclosure is intended to serve; and 

(c) Whether xAI had a reasonable investment-backed expectation that it would not be required to disclose this information at the time that it developed it. 

(a) Economic damage to xAI 

As noted above, the present information asymmetry makes it difficult to assess the harm disclosure would cause to xAI, and there are reasons to be skeptical that a high-level summary would meaningfully disadvantage xAI. Nevertheless, let’s assume that compliance would indeed destroy something valuable to xAI. In that case, the state would need to justify this disadvantage to xAI on further grounds.

(b) The character of the government action 

As noted, states have the authority to regulate their marketplaces. This gives them some scope to regulate trade secrets in the service of a legitimate public interest. The public interest in the disclosable information is therefore key to California’s defense of AB-2013.

While xAI emphasizes the disadvantages that disclosure would cause for its business, it downplays the public interest in this information. It questions why the public needs to know these details and argues that they would be largely unintelligible and uninteresting. 

Despite what xAI suggests, there are reasons to be interested in the disclosable information, both for direct consumers and for researchers, journalists, and other third parties who could use it to enhance public understanding. For example: 

  • Whether a model is trained on proprietary, personal, or aggregate consumer information can help users understand the legal and ethical implications of using such models and enable them to make educated choices among their options. 
  • Understanding the sources, purposes, and types of training data may help identify the biases and limitations of particular models and appropriate use cases.
  • The date ranges of datasets can help identify gaps in a model’s capabilities and areas in which its responses may rely on outdated data. 
  • Information about training data sources can be used to assess the risk of data poisoning attacks that could cause the model to behave unsafely in certain circumstances, posing risks to both consumers and third parties.
  • Information about training data can be used to assess the risk that data contamination makes model evaluations (including evaluations on which consumers rely) less reliable.

Note that even if it is not known in advance precisely why certain metadata is relevant to consumers, this is not an argument for secrecy. Some risks will only be identified once the information is made public, just as an ingredient or chemical may be identified as toxic only after the fact. Some highly consequential choices about training data may likewise be understood only in hindsight. Given the current opacity of generative AI, it is reasonable for the public to expect greater transparency. AB-2013 is, in this sense, a precautionary regulation.

There are two considerations to note in xAI’s favor here. First, although there is some public interest in the relevant information, that interest may not be as immediately apparent as in other contexts. Ingredient lists on food products, for example, may seem more immediately consequential to consumers. Second, it is plausible that some of the public interest could be met by a more controlled disclosure environment, such as disclosure to regulators rather than to the public at large. 

(c) Did xAI have a reasonable investment-backed expectation?

A crucial element of xAI’s regulatory takings challenge is the claim that it developed the information with a reasonable expectation that the law would protect it as a trade secret. A takings challenge can make sense in such cases, since states cannot capriciously revoke title to property they previously recognized and that companies relied on – at least not without compensation.

xAI claims that it had no reason to suspect this information might become disclosable, and that doing so is contrary to a long tradition of trade secret protection in the US. It points out that the regulations came to its attention only a full calendar year after it commenced operations. 

There are important counterarguments to this. First, the tradition of protection that xAI cites is in fact one of balancing the protection of commercial secrets with the public interest in being informed – xAI’s characterization of the law ignores the equally old tradition of states regulating commerce in ways that protect this public interest. In California alone, there are many laws requiring some form of disclosure, whether it concerns the chemicals in cleaning products, cookware, menstrual products, or pesticides, or the privacy policies and automatic renewal practices of digital services. Second, there is no long-standing tradition of the law protecting high-level summaries of AI training data from regulation, as this is a novel form of information in a new field of industry. At the time xAI invested in and developed this information, the regulatory regime was in its infancy, and it would not have been reasonable then to assume that regulation would not follow. On that view, the regulators’ response time was reasonable.

Indeed, the reason this issue is so important now is precisely that there is a window to regulate trade secrets in a way that fosters appropriate expectations. 

The broader implications of xAI’s challenge 

Separately from AB-2013, other state laws are beginning to require AI companies to disclose information about their models. Laws such as SB 53 and the RAISE Act require frontier AI companies to disclose their mitigation strategies for catastrophic risks posed by AI. 

Those particular disclosure laws are likely to be more secure against similar challenges for a few reasons. First, they target information with a more immediate and overwhelming public interest, since they are directly concerned with preventing major loss of life and billions of dollars in damage. Second, they explicitly exempt trade secrets from disclosure. As I have argued elsewhere, that exemption creates a new set of problems. 

Nevertheless, the outcome of this case could shape the future of transparency in those and other areas of AI. It will help establish the expectations that AI developers can reasonably hold when structuring their commercial strategies, and once developers rely on those expectations, it becomes difficult for regulators to change the transparency rules. While trade secrets are not a trump card against transparency measures, they are strongest where legal expectations are well established. Here, by contrast, the AI industry is new and opaque, and the public has a genuine interest in greater transparency, so there is an opportunity to strike a reasonable compromise between the competing interests. This makes it all the more important to find the appropriate balance between commercial secrecy and transparency in the AI industry today. 
