“AI Rights for Human Safety” by Matthew_Barnett

EA Forum Podcast (All audio) - A podcast by EA Forum Team

This is a link post. I thought this was an interesting paper by Peter Salib and Simon Goldstein, and it reflects many of my thoughts about AI governance as well. Here's the abstract:

AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”—pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, the result could be violent and catastrophic conflict. Existing legal institutions are unprepared for the AGI world. New foundations for AGI governance are needed, and the time to begin laying them is now, before the critical [...]

---

First published: August 3rd, 2024

Source: https://forum.effectivealtruism.org/posts/jqpp6wp2dqDD7XkY3/ai-rights-for-human-safety

---

Narrated by TYPE III AUDIO.