“Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety)” by Andrew Critch

EA Forum Podcast (All audio) - A podcast by EA Forum Team

As an AI researcher who wants to do technical work that helps humanity, you may feel a strong drive to find a research area that is definitely helpful somehow, so that you don't have to worry about how your work will be applied, and thus don't have to worry about things like corporate ethics or geopolitics to make sure your work benefits humanity. Unfortunately, no such field exists. In particular, technical AI alignment is not such a field, and technical AI safety is not such a field. It absolutely matters where ideas land and how they are applied, and when the existence of the entire human race is at stake, that's no exception. If that's obvious to you, this post is mostly just a collection of arguments for something you probably already realize. But if you think technical AI safety or technical AI alignment is somehow intrinsically or [...]

Outline:
(01:44) Myths vs reality
(05:03) What to make of all this
(07:20) Recap of key points

First published: June 14th, 2024

Source: https://forum.effectivealtruism.org/posts/8nrdzoRgeSdjFq74k/safety-isn-t-safety-without-a-social-model-or-dispelling-the

Narrated by TYPE III AUDIO.