EA - Interesting vs. Important Work - A Place EA is Prioritizing Poorly by Davidmanheim

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interesting vs. Important Work - A Place EA is Prioritizing Poorly, published by Davidmanheim on July 28, 2022 on The Effective Altruism Forum.

There are many important issues in the world, and many interesting topics. But these are not the same thing, and we should beware of suspicious convergence. Given that, our assumption should be that the most interesting topics we hear about are far less important than the attention they receive. Heeding Scott Alexander's recent warning, I'll therefore ask, much more specifically, what the most intellectually interesting topics in Effective Altruism are, and then suggest that we should be doing less work on them - and list a few concrete suggestions for how to do that.

What are the interesting things?

Here are some of my concrete candidates for the most interesting work: infinite ethics, theoretical AI safety, rationality techniques, and writing high-level critiques of EA. And to be clear, all of these ARE important. But the number of people we need working on them should probably be more limited than the current trajectory suggests, and we should probably de-emphasize status for the most theoretical work.

To be clear, I love GPI, FHI, CSER, MIRI, and many other orgs doing this work. The people I know at each org are great, and I think that many of the things they do are, in fact, really important. And I like the work they do - not only do I think it's important, it's also SUPER interesting, especially to people who like philosophy, math, and/or economics. But the convergence between important and interesting is exactly the problem I'm pointing towards.

Motivating Theoretical Model

Duncan Sabien talks about Monks of Magnitude, where different people work on things that have different feedback-loop lengths, from 1 day, to 10 days, to people who spend 10,000 days thinking. He more recently mentioned that he noticed "people continuously vanishing higher into the tower," that is, focusing on more abstract and harder-to-evaluate issues, and that very few people have done the opposite. One commenter, Ben Weinstein-Raun, suggested several reasons, among them that longer-loop work is more visible and higher status. I think this critique fits the same model, where we should be suspicious that such long-loop work is overproduced. (Another important issue is that "it's easier to pass yourself off as a long-looper when you're really doing nothing," but that's a different discussion.)

The natural tendency to do work that is more conceptual and/or harder to pin to a concrete, measurable outcome is one we should fight back against, since by default it is overproduced. The basic reason it is overproduced is that people who are even slightly swayed by status or by how interesting research is, i.e. everyone, will give it at least slightly more attention than warranted; and further, because others are already focused on it, the marginal value is lower.

This is not to say that the optimal amount of fun and interesting research is zero, nor that all fun and interesting work is unimportant. We do need 10,000-day monks - and lots of interesting questions exist in long-termism that are significant moral priorities. And I agree with the argument for a form of long-termism.
But this isn’t a contradiction - work on long-termism can be concrete and visible, isn’t necessarily conceptual, and doesn’t necessarily involve slow feedback loops.

Towards fixing the problem

Effective Altruism needs to be effective, and that means we need evaluable outputs wherever possible. First, anyone and everyone attempting to be impactful needs a theory of change, and an output that has some way of impacting the world. That means everyone, especially academics and researchers, should make this model clear, at least to themselves, but ideally also to others. If you’re writing f...
