“Summary: Against the singularity hypothesis” by Global Priorities Institute, David Thorstad, rileyharris
EA Forum Podcast (All audio) - A podcast by EA Forum Team
This is a link post: a summary of the GPI Working Paper “Against the singularity hypothesis” by David Thorstad (published in Philosophical Studies). The summary was written by Riley Harris.

The singularity is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, the system would be better equipped to improve itself than it was before. This process might repeat many times, with the AI system becoming more capable and better equipped to improve itself further on each iteration. At the end of this (perhaps very rapid) process, the AI system could be much smarter than the average human. Philosophers and computer scientists have argued that we should take the possibility of a singularity seriously (Solomonoff 1985, Good 1996, Chalmers 2010, Bostrom 2014, Russell 2019). [...]

---

Outline:

(01:39) Reasons to think the singularity is unlikely
(04:25) Arguments for the singularity hypothesis
(07:07) Implications for longtermism and AI Safety

The original text contained 2 footnotes which were omitted from this narration.

---

First published: May 22nd, 2024

Source: https://forum.effectivealtruism.org/posts/N8CF4Nb7JHxRM2jKB/summary-against-the-singularity-hypothesis

---

Narrated by TYPE III AUDIO.