We Invented Momentum Because Math is Hard [Dr. Jeff Beck]

Machine Learning Street Talk (MLST)

Dr. Jeff Beck, mathematician turned computational neuroscientist, joins us for a fascinating deep dive into why the future of AI might look less like ChatGPT and more like your own brain.

**SPONSOR MESSAGES START**
Prolific - Quality data. From real people. For faster breakthroughs.
https://www.prolific.com/?utm_source=mlst
**END**

*What if the key to building truly intelligent machines isn't bigger models, but smarter ones?*

In this conversation, Jeff makes a compelling case that we've been building AI backwards. While the tech industry races to scale up transformers and language models, Jeff argues we're missing something fundamental: the brain doesn't work like a giant prediction engine. It works like a scientist, constantly testing hypotheses about a world made of *objects* that interact through *forces*, not pixels and tokens.

*The Bayesian Brain*
Jeff explains how your brain is essentially running the scientific method on autopilot. When you combine what you see with what you hear, you're doing optimal Bayesian inference without even knowing it. This isn't just philosophy; it's backed by decades of behavioral experiments showing humans are surprisingly efficient at handling uncertainty. (A toy numerical sketch of this cue fusion appears after the summary.)

*AutoGrad Changed Everything*
Forget transformers for a moment. Jeff argues the real hero of the AI boom was automatic differentiation, which turned AI from a math problem into an engineering problem. But in the process, we lost sight of what actually makes intelligence work. (A minimal illustration of automatic differentiation is also sketched below.)

*The Cat in the Warehouse Problem*
Here's where it gets practical. Imagine a warehouse robot that's never seen a cat. Current AI would either crash or make something up. Jeff's approach? Build models that *know what they don't know*, can phone a friend to download new object models on the fly, and keep learning continuously. It's like giving robots the ability to say "wait, what IS that?" instead of confidently being wrong. (A hypothetical version of this loop is sketched below as well.)

*Why Language is a Terrible Model for Thought*
In a provocative twist, Jeff argues that grounding AI in language (as we do with LLMs) is fundamentally misguided. Self-report is the least reliable data in psychology; people routinely explain their own behavior incorrectly. We should be grounding AI in physics, not words.

*The Future is Lots of Little Models*
Instead of one massive neural network, Jeff envisions AI systems built like video game engines: thousands of small, modular object models that can be combined, swapped, and updated independently. It's more efficient, more flexible, and much closer to how we actually think.

Rescript: https://app.rescript.info/public/share/D-b494t8DIV-KRGYONJghvg-aelMmxSDjKthjGdYqsE
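To make the "optimal Bayesian inference" point concrete, here is a toy sketch (our illustration, not from the episode) of the classic cue-combination result: when two independent Gaussian cues report on the same latent quantity, the optimal fused estimate is a precision-weighted average, and it is more certain than either cue alone. All numbers are invented.

```python
# Toy sketch of optimal Bayesian cue combination with made-up numbers.
# Two noisy estimates of the same quantity -- e.g. an object's direction
# as seen (visual) and heard (auditory) -- fuse into a posterior whose
# mean is a precision-weighted average and whose variance is smaller
# than either cue's alone.

def fuse_gaussian_cues(mu_v, var_v, mu_a, var_a):
    """Posterior over a latent x given two independent Gaussian cues."""
    prec_v, prec_a = 1.0 / var_v, 1.0 / var_a             # precision = 1/variance
    post_var = 1.0 / (prec_v + prec_a)                    # precisions add
    post_mu = post_var * (prec_v * mu_v + prec_a * mu_a)  # precision-weighted mean
    return post_mu, post_var

# Vision says "about 10 degrees left" but is blurry; audition says
# "about 4 degrees left" and is sharper, so it gets more weight.
mu, var = fuse_gaussian_cues(mu_v=10.0, var_v=4.0, mu_a=4.0, var_a=1.0)
print(f"posterior mean = {mu:.2f}, posterior variance = {var:.2f}")
# posterior mean = 5.20, posterior variance = 0.80
```

This precision weighting is the pattern the behavioral experiments test for: the fused variance (0.80) is below both cue variances, which is the signature of near-optimal fusion in human subjects.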
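The AutoGrad point can likewise be illustrated with a minimal forward-mode automatic differentiator built on dual numbers. This is a sketch of the general idea only; production frameworks (PyTorch, JAX, etc.) use reverse-mode AD and far more machinery.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Each Dual carries a value and its derivative with respect to one
# input; arithmetic propagates both exactly through the program.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def grad(f):
    """Return a function computing df/dx at x by tracing f once."""
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(grad(f)(5.0))                   # 32.0
```

The point of the sketch: once derivatives come for free from the program trace, fitting any differentiable model becomes an engineering exercise, which is exactly the shift Jeff describes.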
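Finally, a purely hypothetical sketch of the "know what you don't know" loop from the cat-in-the-warehouse story: score an observation under a library of small object models, decline to label it when every fit is poor, then "phone a friend" for a new model and try again. The feature, the models, and the threshold are all invented for illustration; none of this is Jeff's actual system.

```python
import math

def gaussian_log_lik(x, mean, var):
    """Log-likelihood of a 1-D observation under a Gaussian object model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Tiny 1-D stand-ins for object models (say, a size feature in cm).
object_models = {"box": (50.0, 25.0), "pallet": (120.0, 100.0)}
UNKNOWN_THRESHOLD = -8.0   # illustrative floor on acceptable fit

def classify(x):
    scores = {name: gaussian_log_lik(x, m, v)
              for name, (m, v) in object_models.items()}
    best = max(scores, key=scores.get)
    if scores[best] < UNKNOWN_THRESHOLD:
        return None, scores          # "wait, what IS that?"
    return best, scores

label, _ = classify(30.0)            # a cat-sized thing appears
if label is None:
    # "Phone a friend": download a new object model and keep going.
    object_models["cat"] = (30.0, 16.0)
    label, _ = classify(30.0)
print(label)                         # cat
```

Because each object model is a separate, swappable entry, adding "cat" touches nothing else, which is the modularity the "lots of little models" section argues for.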
---

TIMESTAMPS:
00:00:00 Introduction & The Bayesian Brain
00:01:25 Bayesian Inference & Information Processing
00:05:17 The Brain Metaphor: From Levers to Computers
00:10:13 Micro vs. Macro Causation & Instrumentalism
00:16:59 The Active Inference Community & AutoGrad
00:22:54 Object-Centered Models & The Grounding Problem
00:35:50 Scaling Bayesian Inference & Architecture Design
00:48:05 The Cat in the Warehouse: Solving Generalization
00:58:17 Alignment via Belief Exchange
01:05:24 Deception, Emergence & Cellular Automata

---

REFERENCES:

Papers:
[00:00:24] Zoubin Ghahramani (Google DeepMind)
https://pmc.ncbi.nlm.nih.gov/articles/PMC3538441/pdf/rsta201
[00:19:20] Mamba: Linear-Time Sequence Modeling
https://arxiv.org/abs/2312.00752
[00:27:36] xLSTM: Extended Long Short-Term Memory
https://arxiv.org/abs/2405.04517
[00:41:12] 3D Gaussian Splatting
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
[01:07:09] Lenia: Biology of Artificial Life
https://arxiv.org/abs/1812.05433
[01:08:20] Growing Neural Cellular Automata
https://distill.pub/2020/growing-ca/
[01:14:05] DreamCoder
https://arxiv.org/abs/2006.08381
[01:14:58] The Genomic Bottleneck
https://www.nature.com/articles/s41467-019-11786-6

Person:
[00:16:42] Karl Friston (UCL)
https://www.youtube.com/watch?v=PNYWi996Beg
