2558 Episodes

  1. EA - 13 background claims about EA by Akash

    Published: 07/09/2022
  2. EA - Value of Life: VSL Estimates vs Community Perspective Evaluations by Joel Tan

    Published: 06/09/2022
  3. EA - Announcing the Change Our Mind Contest for critiques of our cost-effectiveness analyses by GiveWell

    Published: 06/09/2022
  4. EA - EA and LW Forums Weekly Summary (28 Aug - 3 Sep 22’) by Zoe Williams

    Published: 06/09/2022
  5. EA - Say “nay!” to the Bay (as the default)! by Kaleem

    Published: 06/09/2022
  6. EA - Alex Lawsen On Forecasting AI Progress by Michaël Trazzi

    Published: 06/09/2022
  7. EA - AI Governance Needs Technical Work by Mauricio

    Published: 06/09/2022
  8. EA - Selfish Reasons to Move to DC by Anonymous EA

    Published: 05/09/2022
  9. EA - An entire category of risks are undervalued by EA [Summary of previous forum post] by Richard Ren

    Published: 05/09/2022
  10. EA - Do AI companies make their safety researchers sign a non-disparagement clause? by ofer

    Published: 05/09/2022
  11. EA - The Base Rate of Longtermism Is Bad by ColdButtonIssues

    Published: 05/09/2022
  12. EA - Chesterton Fences and EA’s X-risks by jehan

    Published: 03/09/2022
  13. EA - Igor Kiriluk (1974-2022) by turchin

    Published: 03/09/2022
  14. EA - The discount rate is not zero by Thomaaas

    Published: 03/09/2022
  15. EA - Peter Eckersley (1979-2022) by Gavin

    Published: 03/09/2022
  16. EA - The impact we achieved to date: Animal Advocacy Careers by SofiaBalderson

    Published: 02/09/2022
  17. EA - EA is about maximization, and maximization is perilous by Holden Karnofsky

    Published: 02/09/2022
  18. EA - An Evaluation of Animal Charity Evaluators by eaanonymous1234

    Published: 02/09/2022
  19. EA - Celebrations and gratitude thread by Lizka

    Published: 02/09/2022
  20. EA - Applications for the 2023 Tarbell Fellowship now open (one-year journalism programme) by Cillian Crosson

    Published: 02/09/2022

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org.