“How to make the future better (other than by reducing extinction risk)” by William_MacAskill
EA Forum Podcast (All audio) - A podcast by EA Forum Team

What projects today could most improve a post-AGI world? In “How to make the future better”, I lay out some areas I see as high-priority, beyond reducing risks from AI takeover and engineered pandemics. These areas include:

- Preventing post-AGI autocracy
- Improving the governance of projects to build superintelligence
- Deep space governance
- Working on AI value-alignment; figuring out what character AI should have
- Developing a regime of AI rights
- Improving AI for reasoning, coordination and decision-making

Here's an overview.

First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else; and the military (and the whole economy) can in principle be aligned to a single person. The risk from AI-enabled coups in particular is detailed at length here. To reduce this risk, we can try to [...]

---

First published: August 15th, 2025

Source: https://forum.effectivealtruism.org/posts/rmjJPuziRk69esh3N/how-to-make-the-future-better-other-than-by-reducing

---

Narrated by TYPE III AUDIO.