You will take the lead on a research project, with supervision from a member of the LawAI team and mentorship and support from other LawAI researchers. Together with your supervisor, you will determine which project and output will be most valuable for you to work towards: for example, publishing a report, a journal or law review article, or one or more blog posts.
We expect fellows to attend regular meetings with the team, give occasional presentations on their research, and provide feedback both on other team members' research and on the fellowship program itself. There will also be opportunities to learn more broadly about the AI risk space from practitioners at other organizations in our network, as well as career mentorship and guidance from our team.
The following are some examples of topics and questions we’d be particularly keen for fellows to research (though we are open to candidates suggesting other topics that focus on mitigating risks from transformative AI):
- Liability – How will existing liability regimes apply to AI-generated or -enabled harms? What unique challenges exist, and how can legislatures and courts respond?
- Existing authority – What powers do US agencies currently have to regulate transformative AI? What constraints or obstacles exist to exercising those powers? How might the major questions doctrine or other administrative law principles affect the exercise of these authorities?
- First Amendment – How will the First Amendment affect leading AI governance proposals? Are certain approaches more or less robust to judicial challenge? Can legislatures and agencies proactively adjust their approaches to limit the risk of judicial challenge?
- International institutions – How might one design a new international organization to promote safe, beneficial outcomes from the development of transformative artificial intelligence? What roles and functions should such an organization prioritize?
- Comparative law – Which jurisdictions are most likely to influence the safe, beneficial development of AI? What opportunities are being under-explored relative to the importance of law in those jurisdictions?
- EU law – What existing EU laws influence the safe, beneficial development of AI? What role can the EU AI Act play in mitigating AI risk, and how does it interact with other relevant provisions, such as the precautionary principle under Art. 191 TFEU?
- Anticipatory regulation – What lessons can be learned from historical efforts to proactively regulate new technologies as they developed? Do certain practices or approaches seem more promising than others?
- Adaptive regulation – What practices best enable agencies to quickly and accurately adjust their regulations to changes in the object of their regulation? What information-gathering practices, decision procedures, updating protocols, and procedural rules help agencies keep pace with changes in technology and in consumer and market behavior?
- Developing other specific AI-governance proposals – For example: How might a government require companies to maintain the ability to take down, patch, or shut down their models? How might a government regulate highly capable but low-compute models? How might governments or private industry develop an effective insurance market for AI?
The fellowship can be a great way to explore possibilities for future collaborations, as we may continue to support specific fellows through grants after the fellowship ends. We may also consider hiring fellows who have demonstrated a strong commitment to our work and values. Fellows may also be invited to future events organized by LawAI.