“Questionable Narratives of ‘Situational Awareness’” by fergusq
EA Forum Podcast (All audio) - A podcast by EA Forum Team
Introduction. This is a response to the Situational Awareness essay series by Leopold Aschenbrenner. As a disclaimer, I am an AI pessimist, meaning that I don't believe there is evidence for AGI appearing any time soon. I also believe that even if you are an AI optimist, you should view Aschenbrenner's text critically, as it contains numerous flawed arguments and questionable narratives, which I will go through in this post. The text makes numerous dubious technical claims and flawed arguments, including misleading statements regarding RLHF[1], uncited claims about human intelligence[2], use of made-up units such as OOM[3] without any serious technical argumentation, use of made-up charts that extrapolate these made-up units, claims that current models could be "unhobbled"[4], and baseless claims such as that current AI is at the level of a preschooler or a high school student[5]. I have given some thoughts on these in the [...]

---

Outline:
(02:09) Conspiracy narratives
(04:16) US-centric narratives
(07:20) The AGI narrative
(09:58) Learning-planning
(14:33) Definition of AGI
(19:31) The future narrative
(24:20) Concluding remarks
(26:18) Footnotes and responses to technical arguments

The original text contained 14 footnotes which were omitted from this narration.

---

First published: June 16th, 2024

Source: https://forum.effectivealtruism.org/posts/WuPs6diJQnznmS4bo/questionable-narratives-of-situational-awareness

---

Narrated by TYPE III AUDIO.