2558 Episodes

  1. EA - Counterproductive EA mental health advice (and what to say instead) by Ada-Maaria Hyvärinen

    Published: 11/10/2022
  2. EA - Some Carl Sagan quotations by finm

    Published: 11/10/2022
  3. EA - On absurdity by OllieBase

    Published: 10/10/2022
  4. EA - Is there any general advice you have? by lynettebye

    Published: 10/10/2022
  5. EA - When reporting AI timelines, be clear who you're (not) deferring to by Sam Clarke

    Published: 10/10/2022
  6. EA - Lessons learned from talking to >100 academics about AI safety by mariushobbhahn

    Published: 10/10/2022
  7. EA - We can do better than argmax by Jan Kulveit

    Published: 10/10/2022
  8. EA - Possible miracles by Akash

    Published: 10/10/2022
  9. EA - A peek at pairwise preference estimation in economics, marketing, and statistics by Jonas Moss

    Published: 09/10/2022
  10. EA - Why defensive writing is bad for community epistemics by Emrik

    Published: 09/10/2022
  11. EA - Introducing the EA Good Governance Project by Grayden

    Published: 09/10/2022
  12. EA - Why I think there's a one-in-six chance of an imminent global nuclear war by Tegmark

    Published: 09/10/2022
  13. EA - SERI MATS Program - Winter 2022 Cohort by Ryan Kidd

    Published: 08/10/2022
  14. EA - Changing Licences on the EA Forum by Ben West

    Published: 08/10/2022
  15. EA - Grow your Mental Resilience to Grow your Impact HERE - Introducing Effective Peer Support by Inga

    Published: 08/10/2022
  16. EA - Consider entering the 2024 US diversity visa lottery by November 8 2022 - it's free and fast to do by Tyrone Barugh

    Published: 08/10/2022
  17. EA - Don't leave your fingerprints on the future by So8res

    Published: 08/10/2022
  18. EA - Things for grant applicants to remember about living expenses by Julia Wise

    Published: 07/10/2022
  19. EA - Getting on a different train: can Effective Altruism avoid collapsing into absurdity? by Peter McLaughlin

    Published: 07/10/2022
  20. EA - What does it mean for an AGI to be 'safe'? by So8res

    Published: 07/10/2022
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org