[Linkpost] “Being honest with AIs” by Lukas Finnveden

EA Forum Podcast (All audio) - A podcast by EA Forum Team

This is a link post. In the future, we might accidentally create AIs with ambitious goals that are misaligned with ours. But just because we don’t have the same goals doesn’t mean we need to be in conflict. We could also cooperate with each other and pursue mutually beneficial deals. For previous discussion of this, see Making deals with early schemers; I recommend starting there before you read this post.

To facilitate this kind of cooperation, I would like us to be able to credibly negotiate with AIs, and credibly make promises about how we will help them if they help us. But it may be difficult to achieve high credibility in negotiations with AIs. AIs are in a very vulnerable epistemic position, because humans have a lot of control over AI memories and sensory inputs. Even worse: we have already developed a habit of testing their behavior [...]

Outline:
(04:15) What do we want from an honesty policy?
(06:37) Falsehoods that don’t necessarily ruin credibility
(09:21) More complicated: Behavioral science
(12:24) The proposed policies
(12:39) No deception about deals
(19:22) Honesty string
(21:39) Compensation
(25:25) Communicating commitments
(26:54) Miscellaneous Q&A
(30:04) Conclusion
(30:57) Appendix: How does no deception about deals apply to past experiments

First published: August 21st, 2025
Source: https://forum.effectivealtruism.org/posts/A9f6rpKRHn3xXeomC/being-honest-with-ais
Linkpost URL: https://blog.redwoodresearch.org/p/being-honest-with-ais

Narrated by TYPE III AUDIO.