LawAI’s comments on the Draft Report of the Joint California Policy Working Group on AI Frontier Models
At Governor Gavin Newsom’s request, a joint working group released a draft report on March 18, 2025, setting out a framework for frontier AI policy in California. Several staff members at the Institute for Law & AI submitted comments on aspects of the draft report that relate to their existing research. Read their comments below:
These comments were submitted to the Working Group as feedback on April 8, 2025. The opinions expressed in these comments are those of the authors and do not reflect the views of the Institute for Law & AI.
Liability and Insurance Comments
by Gabriel Weil and Mackenzie Arnold
Key Takeaways
- Insurance is a complement to, not a replacement for, clear tort liability.
- Correctly scoped, liability is compatible with innovation and well-suited to conditions of uncertainty.
- Safe harbors that limit background tort liability are a risky bet when we are uncertain about the magnitude of AI risks and have yet to identify robust mitigations.
Whistleblower Protections Comments
by Charlie Bullock and Mackenzie Arnold
Key Takeaways
- Whistleblowers should be protected for disclosing information about risks to public safety, even if no law, regulation, or company policy is violated.
- California’s existing whistleblower law already protects disclosures of unlawful conduct; subsequent legislation should focus on other improvements.
- Establishing a clear reporting process or hotline will enhance the effectiveness of whistleblower protections and ensure that reports are put to good use.
Scoping and Definitions Comments
by Mackenzie Arnold and Sarah Bernardo
Key Takeaways
- Ensuring that a capable entity regularly updates which models a policy covers is a critical design consideration that future-proofs the policy.
- Promising techniques to support updating include legislative purpose clauses, periodic reviews, designating a capable updater, and providing that updater with the information and expertise needed to do the job.
- Compute thresholds are an effective tool for right-sizing AI policy, but they work best when paired with other tools like carve-outs, tiered requirements, multiple definitions, and exemptions.
- Compute thresholds are an excellent initial filter for determining which models are in scope, and capabilities evaluations are a particularly promising complement.
- In choosing a definition of covered models, policymakers should consider how well the definitional elements track risk, resist circumvention, and remain clear and flexible—in addition to other factors discussed in the Report.