“A note of caution about recent AI risk coverage” by Sean_o_h
EA Forum Podcast (Curated & popular) - A podcast by EA Forum Team
Epistemic status: some thoughts I wanted to get out quickly.

A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I'd like to share.

These efforts may have been too successful too soon.

Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this.

(1) Timelines

I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can't rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don't have a rigorous model underpinning them (my intuitions line up similarly to the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch).

Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some folks, e.g. Geoff Hinton, have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without).

It may be that those with short (<10 year) timelines are right. But timelines may instead prove to be >10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world? The extinction-level outcomes that the public is hearing about, that these experts are raising, and that policymakers are making costly reputational investments in, don't transpire.
What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look [...]

---

Source: https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage

Narrated by TYPE III AUDIO.