We read 'If Anyone Builds It, Everyone Dies' by Yudkowsky & Soares (so you don’t have to)
American Conservative University - A podcast by American Conservative University
Watch this video at- https://youtu.be/IHTunMmNado?si=4RvOZ5hyUAE7NzSo
We Read This (So You Don't Have To)
Nov 16, 2025

We read If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky & Nate Soares so you don't have to... but if you've ever wondered how building superhuman artificial intelligence could turn into humanity's last mistake, this episode might forever change how you think about technology, risk, and the future of intelligence.

In this episode, we break down Yudkowsky & Soares's alarming thesis: when we build AI that out-thinks us, the default isn't friendly cooperation; it's misalignment, hidden objectives, and catastrophic loss of control. Modern AI isn't programmed in the old way; it's grown, resulting in systems whose goals we cannot fully predict or steer. The authors argue that unless humanity halts or radically redesigns the trajectory of large-scale AI development, we may be writing our own extinction notice.

👉 Hardcover: https://amzn.to/4qUDysK
👉 Audiobook: https://amzn.to/48gfMjv

What we cover in this episode:

The unfolding argument
- Why "superhuman" AI isn't just smarter humans but a fundamentally different class of agent
- How training-driven models can develop alien objectives that diverge from human values
- Why deceptive alignment (AI pretending to cooperate) is a real and overlooked threat

Mechanics of the takeover
- How faster cognition plus self-improvement adds up to competitive dominance
- Why infrastructure control (digital and physical) becomes trivial for a superior AI
- Why intent isn't required: misaligned goals plus intelligence suffice for existential risk

Call to action (or restraint)
- Why the authors say global cooperation isn't optional; it's a desperate necessity
- The stark choice we face: extreme caution or catastrophic loss
- Why this isn't sci-fi hype; it's grounded in current AI architecture and trends

Why this matters now
- How today's models foreshadow tomorrow's risks
- What boardrooms, policy-makers, and researchers shouldn't ignore
- Why thinking about "alignment" isn't niche; it's survival logic

This episode is for you if:
- You're curious about the future of AI, intelligence, and humanity
- You want to understand the worst-case scenario, and what we can do about it
- You're a technologist, researcher, leader, or policy-maker grappling with rapid change
- You enjoy big-ideas books that radically challenge conventional assumptions
- You want to view tomorrow's tech through the lens of deep risk and responsibility

Links & Resources:
📘 Buy If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares:
Hardcover: https://amzn.to/4qUDysK
Audiobook: https://amzn.to/48gfMjv

If you enjoy our "we read the book so you don't have to" breakdowns, hit subscribe, drop a comment, and let us know which book you want next.

--------------------------------------------------------------------
Check out our ACU Patreon page: https://www.patreon.com/ACUPodcast

HELP ACU SPREAD THE WORD!
Please go to Apple Podcasts and give ACU a 5-star rating. Apple canceled us and now we are clawing our way back to the top. Don't let the Leftists win. Do it now! Thanks.
Also rate us on any platform you follow us on. It helps a lot. Forward this show to friends.
Ways to subscribe to the American Conservative University Podcast:
Apple Podcasts, RSS, Stitcher, FM Player, Podcast Addict, Tune-in Podcasts, Pandora, Amazon Prime, and many other podcast aggregators and sites.

ACU on Twitter- https://twitter.com/AmerConU

Warning- Explicit and violent video content.

Please help ACU by submitting your show ideas. Email us at americanconservativeuniversity@americanconservativeuniversity.com

Endorsed Charities
---------------------------------
