EA - GiveWell should use shorter TAI timelines by Oscar Delaney
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should use shorter TAI timelines, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.

Summary

GiveWell's discount rate of 4% includes a 1.4% contribution from "temporal uncertainty" arising from the possibility of major events radically changing the world.

This is incompatible with the transformative artificial intelligence (TAI) timelines of many AI safety researchers.

I argue that GiveWell should increase its discount rate, or at least provide a justification for differing significantly from the commonly held (in EA) view that TAI could come soon.

Epistemic status: timelines are hard, and I don't have novel takes, but I worry perhaps GiveWell doesn't either, and they are dissenting unintentionally.

In my accompanying post I argued that GiveWell should use a probability distribution over discount rates; I will ignore that here, though, and just consider whether their point estimate is appropriate.

GiveWell's current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:

Improving circumstances over time: 1.7%. "Increases in consumption over time meaning marginal increases in consumption in the future are less valuable."

Compounding non-monetary benefits: 0.9%. "There are non-monetary returns not captured in our cost-effectiveness analysis which likely compound over time and are causally intertwined with consumption. These include reduced stress and improved nutrition."

Temporal uncertainty: 1.4%. "Uncertainty increases with projections into the future, meaning the projected benefits may fail to materialize. James [a GiveWell researcher] recommended a rate of 1.4% based on judgement on the annual likelihood of an unforeseen event or longer term change causing the expected benefits to not be realized.
Examples of such events are major changes in economic structure, catastrophe, or political instability."

I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third is a significant underestimate.

TAI is precisely the sort of "major change" meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think that, absent GiveWell providing justification to the contrary, they should default to using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative; however, it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts resulted in a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will not have spent much time thinking carefully about timelines.

I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive thing to do would be to imagine that there is a fixed annual probability of developing TAI conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya's timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability that the world has not seen a "major change" by each year.
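The 3.8% and 1.9% figures can be reproduced under the constant-hazard assumption just described: if the annual probability p is constant and TAI has a 50% chance of arriving by some median year n years out, then (1 - p)^n = 0.5, so p = 1 - 0.5^(1/n). A minimal sketch of this reconstruction (my own, not necessarily the exact calculation in the post's footnote; it assumes the count starts from 2022):

```python
def constant_annual_probability(median_year: int, current_year: int = 2022) -> float:
    """Annual probability of TAI, assuming a constant hazard rate and a
    50% credence of TAI arriving by median_year, counted from current_year."""
    years = median_year - current_year
    # Solve (1 - p) ** years = 0.5 for p.
    return 1 - 0.5 ** (1 / years)

print(f"Ajeya Cotra (50% by 2040): {constant_annual_probability(2040):.1%}")  # 3.8%
print(f"AI Impacts (50% by 2059):  {constant_annual_probability(2059):.1%}")  # 1.9%
```

Both values comfortably exceed GiveWell's 1.4% temporal-uncertainty component even before adding any probability for non-TAI "major changes".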
A model that accomplishes ...
