“Opinionated take on EA and AI Safety” by sammyboiz
EA Forum Podcast (All audio) - A podcast by EA Forum Team

Claude 3.7's coding ability forced me to reanalyze whether there will be a SWE job for me after college. This forced me to re-explore AI safety and its arguments, and I have been re-radicalized toward the safety movement. What I can't understand, though, is how contradictory so much of Effective Altruism (EA) feels. It hurts my head, and I want to explore my thoughts in this post.

EA seems far too friendly toward AGI labs and feels completely uncalibrated to the actual existential risk (from an EA perspective) and the probability of catastrophe from AGI (p(doom)). Why aren't we publicly shaming AI researchers every day? Are we too unwilling to be negative in our pursuit of reducing the chance of doom? Why are we friendly with Anthropic? Anthropic actively accelerates the frontier, currently holds the best coding model, and explicitly aims to build AGI—yet somehow, EAs rally behind [...]

---

First published: March 2nd, 2025

Source: https://forum.effectivealtruism.org/posts/t5rr7usjNGsqTCCLz/opinionated-take-on-ea-and-ai-safety

---

Narrated by TYPE III AUDIO.