EA - I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him? by Robert Wiblin

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him?, published by Robert Wiblin on September 29, 2022 on The Effective Altruism Forum.

Next week I'm interviewing Richard Ngo, currently an AI (Safety) Governance Researcher at OpenAI and previously a Research Engineer at DeepMind. Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?" He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly.

Richard is also a highly prolific contributor to online discussion of AI safety in a range of places, for instance:

- Moral strategies at different capability levels, on his blog Thinking Complete
- The alignment problem from a deep learning perspective, on the EA Forum
- Some conceptual alignment research projects, on the AI Alignment Forum
- Richard Ngo and Eliezer Yudkowsky politely debating AI safety, on LessWrong
- The AGI Safety from First Principles education series
- And his Twitter

What should I ask him?

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
