This is an idea I had for my capstone project on EA Cambridge’s AGI Safety Fundamentals (governance track) course. I am sharing live progress here for better accountability and feedback.
Initial Motivation
It is of paramount importance that developers of large AI models are fully aware of the dangers of large misaligned models (perhaps the greatest source of existential risk facing humanity this century). My hope is that if all developers of large models had both a deep understanding of these risks and kept them salient in their minds while working, the chance of building a misaligned human-level artificial intelligence would be reduced, and the chance of existential catastrophe would accordingly be reduced.
Summary of the theories of impact
- AI developers adopt safer practices in their work
- AI developers are empowered to whistleblow on dangerous/reckless projects
- AI developers value working for safer companies/organisations, and companies competing for talent will recognise this
- Occupational licensing and regulation restrict the supply of AI developers, slowing progress
Research questions
- How to make it happen
- Politics and history of how industries become regulated, specifically through occupational licensing
Open-ended investigations
- Financial stability regulation: strategies and enforcement
Footnotes
- The Precipice, Toby Ord, 2020