EA - A Quick Qualitative Analysis of Laypeople’s Critiques of Longtermism by Roh
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Quick Qualitative Analysis of Laypeople's Critiques of Longtermism, published by Roh on August 15, 2022 on The Effective Altruism Forum.

While preparing this post, How to Talk to Lefties in Your Intro Fellowship was posted and received some interesting pushback in the comments. The focus of this post is not reframing EA marketing in ways that may weaken its epistemic status, but exploring the misconceptions of EA longtermism that its marketing may produce. See footnote for more details.

Summary

I coded 104 comments on Ezra Klein and William MacAskill's podcast episode "Three Sentences that Could Change the World — and Your Life" to better understand the critiques a layperson has of longtermism. More specifically, I wanted to capture gut-instinct, first-pass reactions to longtermism, not true philosophical arguments against it. Because of this particular sample, this analysis is especially relevant to left-leaning, EA-inclined but EA-unfamiliar individuals. One reason examining people's first takes in detail might be informative is that it helps us identify the ways in which longtermist messaging can be misinterpreted when there is a large inferential gap between EAs and well-meaning people. Finally, following MacAskill's What We Owe the Future book release, I anticipate a surge of discussion on longtermism generally.

Summarized Takeaways (see Interpretation for more detail & context)

In discussions about longtermism with left-leaning / progressive people completely new to the movement, here are things to keep in mind. Note that other groups may not generate such misconceptions from traditional EA rhetoric.

Prepare for concerns about longtermism being anti-climate-change:
- Explain how the future world's well-being is also affected by longtermist causes (e.g. elephants can also be turned into paperclips)
- Make explicit the ways animal welfare is being considered in EA efforts
- Discuss how overpopulation may be overestimated as an issue, and as a contributor to climate change, when discussing the long-term future

Prepare for despair about the future:
- Challenge underlying assumptions that big change is made only through political avenues by pointing out effective change pathways outside of politics (academic research, non-profit work, non-partisan policy work), and EA focus areas that generally have high tractability
- Clarify longtermist approaches to assessing and overcoming existential risks
- Challenge underlying assumptions about how soon and how likely the end of the world is

Legitimize the movement:
- Emphasize EA as a question, not a set of cause areas
- Explain longtermism's role as a practical movement (and not just a thought experiment)
- Reference the Iroquois tribe's efforts towards sustainability as a historical precedent for longtermism, and acknowledge non-white philosophical predecessors
- Highlight non-white-male EA work in longtermist discussions
- Discuss neartermist work, animal welfare work, and donation work to legitimize EA as a whole
- Emphasize EA's efforts towards moral circle expansion

Things I think would be useful going forward:
- A comprehensive list of big EA achievements in longtermism (for encouraging optimism and legitimizing the movement's efforts)
- A historical track record of the important work EA has done
  (for legitimizing the movement, both in and out of the longtermism space; others have said this before)
- An EA survey response to the statement: "We should not destroy the earth in the effort to sustain human life."

Methodology

I read the online comments responding to the podcast and summarized eight themes that the critiques fell into. Then I went back and coded each comment according to the themes it fit. I excluded comments that replied to other comments (either in a reply chain or that @'ed someone else's comment). This was done in a vaguely scientific manne...
