“AI Safety Landscape & Strategic Gaps” by MichaelDickens
EA Forum Podcast (All audio) - A podcast by EA Forum Team

The content of this report was written by me, Michael Dickens, and does not represent the views of Rethink Priorities. The executive summary was prepared by Rethink Priorities; I edited the final revision, so any mistakes are my own.

Executive Summary

Why AI Landscape Research and Incubating New Projects Matters

TAI may be a decade or less away, and misalignment could be catastrophic. We need repeated, fast, rigorous scans of the field: what exists, what works, where funding and talent are concentrated, and where the next dollar or FTE most reduces risk. To that end, Rethink Priorities commissioned a rapid landscape analysis to surface neglected, high-leverage opportunities. The report, authored by Michael Dickens (views his own), reviews 34 areas of interest and identifies 77 organizations in the relevant fields.

Key Findings

1. Implementation-focused policy advocacy and communication are neglected

Both AI-safety technical research and [...]

---

Outline:
(00:26) Executive Summary
(00:29) Why AI Landscape Research and Incubating New Projects Matters
(01:09) Key Findings
(01:13) 1. Implementation-focused policy advocacy and communication are neglected
(02:56) 2. Non-technical, short-timelines work is undervalued
(04:02) 3. Work on AI for animal welfare is neglected & high-leverage
(05:18) Introduction
(08:30) Prelude
(09:58) Some positions I'm going to take as given
(10:34) Definitions
(11:28) Prioritization
(12:31) Why not technical safety research?
(13:44) Why not AI policy research?
(15:25) Downsides of AI policy/advocacy (and why they're not too big)
(21:35) What kinds of policies might reduce AI x-risk?
(23:41) Some AI policy ideas I like
(25:47) Maybe prioritizing post-TAI animal welfare
(27:08) Why not prioritize digital minds / S-risks / moral error / better futures / AI misuse x-risk / gradual disempowerment?
(30:44) Who's working on them?
(32:19) Some relevant research agendas
(33:48) General recommendations
(33:52) Advocacy should emphasize x-risk and misalignment risk
(36:56) Prioritize work that pays off if timelines are short
(38:06) Top project ideas
(38:24) Talk to policy-makers about AI x-risk
(42:35) Write AI x-risk legislation
(46:07) Advocate to change AI training to make LLMs more animal-friendly
(49:48) Develop new plans / evaluate existing plans to improve post-TAI animal welfare
(54:07) Honorable mentions
(54:10) Directly push for an international AI treaty
(56:06) Organize a voluntary commitment by AI scientists not to build advanced AI
(01:00:17) Peaceful protests
(01:04:08) Media about dangers of AI
(01:05:54) Message testing
(01:07:33) Host a website for discussion of AI safety and other important issues
(01:09:28) List of other project ideas
(01:10:09) AI-for-animals ideas
(01:10:14) Neartermist animal advocacy
(01:12:11) Using TAI to improve farm animal welfare
(01:14:45) Lobby governments to include animal welfare in AI regulations
(01:15:56) Traditional animal advocacy targeted at frontier AI developers
(01:17:14) Research which alignment strategies are more likely to be good for animals
(01:18:44) AI policy/advocacy ideas
(01:18:49) Improving US-China relations / international peace
(01:20:31) Talk to international peace orgs about AI
(01:21:23) Increasing government expertise about AI
(01:24:32) Policy/advocacy in China
(01:26:29) Corporate campaigns to advocate for safety
(01:28:05) Develop AI safety/security/evaluation standards
(01:29:37) Slow down Chinese AI development via ordinary foreign policy
(01:32:07) Whistleblower protection/support
(01:33:26) Opinion polling
(01:34:32) Help AI company employees improve safety within their companies
(01:35:35) Direct talks with AI companies to make them safer
(01:36:58) Monitor AI companies on safety standards
(01:38:09) Create a petition or open letter on AI risk
(01:39:37) Create demonstrations of dangerous AI capabilities
(01:41:10) Sue OpenAI for violating its nonprofit mission
(01:42:50) Send people AI safety books
(01:44:15) AI research ideas
(01:44:34) Research on how to get people to extrapolate
(01:47:33) Investigate how to use AI to reduce other x-risks
(01:50:10) A short-timelines alignment plan that doesn't rely on bootstrapping
(01:52:15) Rigorous analysis of the various ways alignment bootstrapping could fail
(01:53:31) Future work
(01:53:35) Pros and cons of slowing down AI development, with numeric credences
(01:55:02) Quantitative model on AI x-risk vs. other x-risks
(01:56:21) Deeper investigation of the AI arms race situation
(01:56:53) Does slowing down/pausing AI help solve non-alignment problems?
(01:58:49) Determine when will be the right time to push for strong restrictions on AI (if not now)
(01:59:49) Supplements

The original text contained 8 footnotes which were omitted from this narration.

---

First published: September 17th, 2025

Source: https://forum.effectivealtruism.org/posts/CbHX5zL2uEvTasuiP/ai-safety-landscape-and-strategic-gaps

---

Narrated by TYPE III AUDIO.