[Linkpost] “Attention on Existential Risk from AI Likely Hasn’t Distracted from Current Harms from AI” by Erich_Grunewald

EA Forum Podcast (Curated & popular) - A podcast by EA Forum Team

Summary. In the past year, public fora have seen growing concern about existential risk (henceforth, x-risk) from AI. The thought is that we could see transformative AI in the coming years or decades, that it may be hard to ensure such systems act with humanity's best interests in mind, that highly advanced AIs may be able to overpower us if they aimed to do so, or otherwise that such systems may be catastrophically misused. Some have reacted by arguing that concerns about x-risk distract from current harms from AI, like algorithmic bias, job displacement and labour issues, environmental impact, and so on. In opposition to those voices, others have argued that attention on x-risk does not draw resources and attention away from current harms, and that both concerns can coexist peacefully. The claim that x-risk distracts from current harms is contingent. It may be [...]

Outline:

(02:15) The Argument
(09:31) Evidence
(10:46) AI Policy
(14:33) Search Interest
(19:40) Twitter/X Followers
(20:55) Funding
(22:20) Climate Change
(25:39) Maybe the Real Disagreement Is about How Big the Risks Are

The original text contained 9 footnotes which were omitted from this narration.

---

First published: December 21st, 2023

Source: https://forum.effectivealtruism.org/posts/hXzB72kfdAk6PTzio/attention-on-existential-risk-from-ai-likely-hasn-t

Linkpost URL: https://www.erichgrunewald.com/posts/attention-on-existential-risk-from-ai-likely-hasnt-distracted-from-current-harms-from-ai/

---

Narrated by TYPE III AUDIO.