EA - The EA community might be neglecting the value of influencing people by JulianHazell
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community might be neglecting the value of influencing people, published by JulianHazell on August 22, 2022 on The Effective Altruism Forum.

TL;DR
- Most community-building effort currently goes towards creating new, highly-engaged EAs, yet the vast majority of people who can help further EA goals will not wish to be highly engaged.
- You don’t need to actually be an “EA” to do effectively altruistic things, which is why influencing people with EA ideas can be very useful.
- While we already do some influencing, we might want to do more on the margin, especially if we feel urgency about problems like AI alignment.
- EA is special; we should try sharing the ideas, mental models, and insights we have with those who can do good things with them.
- It would be useful to have an idea of how cost-effective it is to try influencing others relative to creating highly-engaged EAs.

Epistemic status
Quite uncertain. I’m more confident in the sign of these arguments than in their magnitude. This post was mostly informed by my impressions from being highly involved in the EA community over the last year or so, as well as my time working at Giving What We Can and my current work at GovAI. All views articulated here are my own and do not reflect the stances of either of these organisations, or of any other institutions I’m affiliated with. Finally, I’m hoping to spark a conversation rather than make any sweeping declarations about what the community’s strategy ought to be. I was inspired to write this post after reading Abraham Rowe’s list of EA critiques he would like to read.

Introduction
I’m writing this post because I think that gearing the overwhelming majority of EA community-building effort towards converting young people into highly-engaged EAs may neglect the value of influence, to the detriment of our ability to get enough people working on difficult problems like AI alignment.

Providing altruistically inclined young people with opportunities to pursue highly impactful career paths is great. I’m happy this opportunity was provided to me, and I think this work is incredibly valuable insofar as it attracts more smart, ambitious, and well-intentioned people into the EA community for comparatively little cost.

But my impression is that I (alongside other highly-engaged EAs) am somewhat unusual in my willingness to severely constrain the list of potential career paths I might go down, and in how much I weigh my social impact relative to other factors like salary and personal enjoyment. Most people will be unwilling to do this, or might be initially put off by the seemingly large commitment that comes with being a highly-engaged EA.

Does that mean they can’t contribute to the world’s most pressing problems, or can only do so via donating? I don’t think so — working directly on pressing problems doesn’t have to be all or nothing. But my outside impression is that the current outreach approach might neglect the value of influencing the thinking of those who might never wish to become highly involved in the community, and/or those who already have influence over the state of affairs. You know, the people who (by and large) control, or will control, the levers of power in government, business, academia, thought leadership, journalism, and policy.
To be clear, I don’t think we should deliberately soft-sell EA when communicating externally, but we should be aware that some ideas that sound perfectly reasonable to us might sound off-putting to others. We should also be aware that packaging these ideas together as part of the “EA worldview” risks a single weird-seeming part of the package putting someone off the other ideas (which they might otherwise endorse!) entirely. In this case, it could be better to just strategically push the core thing ...
