“Twitter thread on AI safety evals” by richard_ngo
EA Forum Podcast (All audio) - A podcast by EA Forum Team
This is a link post.

Epistemic status: raising concerns, rather than stating confident conclusions.

I'm worried that a lot of work on AI safety evals matches the pattern of "Something must be done. This is something. Therefore this must be done." Or, to put it another way: I judge eval ideas on 4 criteria, and I often see proposals which fail all 4. The criteria:

1. Possible to measure with scientific rigor. Some things can be easily studied in a lab; others are entangled with a lot of real-world complexity. If you predict the latter (e.g. a model's economic or scientific impact) based on model-level evals, your results will often be BS. (This is why I dislike the term "transformative AI", by the way. Whether an AI has transformative effects on society will depend hugely on what the society is like, how the AI is deployed, etc. And that's a [...]

---

First published: July 31st, 2024

Source: https://forum.effectivealtruism.org/posts/JW9dJ2FPAGxK8b3Qn/twitter-thread-on-ai-safety-evals

---

Narrated by TYPE III AUDIO.