#046 The Great ML Stagnation (Mark Saroufim and Dr. Mathew Salvaris)

Academics think of themselves as trailblazers, explorers, seekers of the truth. Any fundamental discovery involves a significant degree of risk; if an idea is guaranteed to work, it moves from the realm of research to engineering. Unfortunately, this also means that most research careers will invariably be failures, at least if failure is measured via "objective" metrics like citations.

Today we discuss Mark Saroufim's recent article, "Machine Learning: The Great Stagnation". We discuss the rise of gentleman scientists, fake rigour, incentives in ML, SOTA chasing, "graduate student descent", the distribution of talent in ML, and how to learn effectively. With special guest interviewer Mat Salvaris.

Machine Learning: The Great Stagnation [00:00:00]
Main show kick-off [00:16:30]
Great Stagnation article / bad incentive systems in academia [00:18:24]
OpenAI is a media business [00:19:48]
Incentive structures in academia [00:22:13]
SOTA chasing [00:24:47]
F You Money [00:28:53]
Research grants and gentleman scientists [00:29:13]
Following your own gradient of interest and making a contribution [00:33:27]
Marketing yourself to be successful [00:37:07]
Tech companies create the bad incentives [00:42:20]
GPT-3 was SOTA chasing but it seemed really... "good"? Scaling laws? [00:51:09]
Dota / game AI [00:58:39]
Hard to go it alone? [01:02:08]
Reaching out to people [01:09:21]
Willingness to be wrong [01:13:14]
Distribution of talent / tech interviews [01:18:30]
What should you read online and how to learn? Sharing your stuff online and finding your niche [01:25:52]

Mark Saroufim:
https://marksaroufim.substack.com/
http://robotoverlordmanual.com/
https://twitter.com/marksaroufim
https://www.youtube.com/marksaroufim

Dr. Mathew Salvaris:
https://www.linkedin.com/in/drmathewsalvaris/
https://twitter.com/MSalvaris

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Ph.D. Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).