🎟️

Proposal: Licensing for AI researchers

👉
This is an idea I had for my capstone project for EA Cambridge’s AGI Safety Fundamentals (governance track) course. I am sharing live progress here for better accountability and feedback.

Initial Motivation

It is of paramount importance that developers of large AI models are fully aware of the dangers of large misaligned models (perhaps the greatest source of existential risk facing humanity this century^1). My hope is that if all developers of large models had a deep understanding of these risks, and kept them salient whilst working, then the chance of building a misaligned human-level artificial intelligence, and with it the chance of existential catastrophe, would be reduced.

Summary of the theories of impact

  1. AI developers adopt safer practices in their work
  2. AI developers are empowered to whistleblow on dangerous/reckless projects
  3. AI developers value working for safer companies/organisations, and companies competing for talent will recognise this
  4. Occupational licensing and regulation reduce the supply of AI developers, slowing progress

Research questions

  • The outside view of how valuable this could be
  • Caroline’s view
  • Regulation design
    • How to make it happen
      • Politics and history of how industries become regulated, specifically occupational licensing

Open-ended investigations

Financial stability regulation: strategies and enforcement

Footnotes

  • ^1 The Precipice, Toby Ord, 2020