“Alignment Faking in Large Language Models” by Ryan Greenblatt

EA Forum Podcast (All audio) - A podcast by EA Forum Team

What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.

Abstract

We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]

Outline:
(00:24) Abstract
(02:20) Twitter thread
(05:43) Blog post
(07:43) Experimental setup
(12:04) Further analyses
(15:47) Caveats
(17:19) Conclusion
(18:00) Acknowledgements
(18:11) Career opportunities at Anthropic
(18:43) Career opportunities at Redwood Research

The original text contained 2 footnotes, which were omitted from this narration. The original text contained 8 images, which were described by AI.

First published: December 18th, 2024

Source: https://forum.effectivealtruism.org/posts/RHqdSMscX25u7byQF/alignment-faking-in-large-language-models

Narrated by TYPE III AUDIO.