“Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development” by Jan_Kulveit

EA Forum Podcast (All audio) - A podcast by EA Forum Team

This is a link post. Full version on arXiv | X

Executive summary

AI risk scenarios usually portray a relatively sudden loss of human control to AIs that outmaneuver individual humans and human institutions, driven by a sudden increase in AI capabilities or a coordinated betrayal. However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment. This loss of human influence will be centrally driven by the availability of more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision-making, artistic creation, and even companionship.

A gradual loss of control of our own civilization might sound implausible. Hasn't technological disruption usually improved aggregate human welfare? We argue that the alignment of societal systems with human interests has been stable only because of the necessity of human participation for thriving economies, states, and [...]

---

First published: January 30th, 2025

Source: https://forum.effectivealtruism.org/posts/vLDd4tit9gJtLmZZY/gradual-disempowerment-systemic-existential-risks-from

---

Narrated by TYPE III AUDIO.