#036 - Max Welling: Quantum, Manifolds & Symmetries in ML

Today we had a fantastic conversation with Professor Max Welling, VP of Technology at Qualcomm Technologies Netherlands B.V. Max is a strong believer in the power of data and computation and their relevance to artificial intelligence. There is a fundamental blank-slate paradigm in machine learning: experience and data alone currently rule the roost. Max wants to build a house of domain knowledge on top of that blank slate. Max thinks there are no predictions without assumptions and no generalization without inductive bias. The bias-variance tradeoff tells us that we need to use additional human knowledge when data is insufficient. Max has pioneered many of the most sophisticated inductive priors in deep learning models developed in recent years, allowing us to use deep learning with non-Euclidean data, e.g. on graphs and topology (a field we now call "geometric deep learning"), and allowing network architectures to recognise new symmetries in the data, for example gauge or SE(3) equivariance (briefly defined after the timestamps below). Max has also brought many other concepts from his physics playbook into ML, for example quantum and even Bayesian approaches. This is not an episode to miss; it might be our best yet!

Panel: Dr. Tim Scarfe, Yannic Kilcher, Alex Stenlake

00:00:00 Show introduction
00:04:37 Protein folding from DeepMind -- did it use the SE(3) transformer?
00:09:58 How has machine learning progressed?
00:19:57 Quantum Deformed Neural Networks paper
00:22:54 Probabilistic Numeric Convolutional Neural Networks paper
00:27:04 Ilia Karmanov from Qualcomm interview mini segment
00:32:04 Main show intro
00:35:21 How is Max known in the community?
00:36:35 How Max nurtures talent -- freedom and relationships are key
00:40:30 Selecting research directions and guidance
00:43:42 Priors vs experience (bias/variance trade-off)
00:48:47 Generative models and GPT-3
00:51:57 Bias/variance trade-off -- when do priors hurt us?
00:54:48 Capsule networks
01:03:09 Which old ideas should we revive?
01:04:36 Hardware lottery paper
01:07:50 Greatness can't be planned (Kenneth Stanley reference)
01:09:10 A new sort of peer review and originality
01:11:57 Quantum computing
01:14:25 Quantum Deformed Neural Networks paper
01:21:57 Probabilistic Numeric Convolutional Neural Networks paper
01:26:35 Matrix exponential
01:28:44 Other ideas from physics, e.g. chaos, holography, renormalisation
01:34:25 Reddit
01:37:19 Open review system in ML
01:41:43 Outro
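A quick note on the term "equivariance" for readers new to it (this is the standard textbook formulation, not a definition taken from the episode itself): a map f is equivariant to a group of transformations G when transforming the input and then applying f gives the same result as applying f and then transforming the output,

    f(ρ_in(g) · x) = ρ_out(g) · f(x)   for all g in G,

where ρ_in and ρ_out are representations of G acting on the input and output spaces. For SE(3) equivariance, G is the group of 3D rotations and translations, so rotating and shifting a molecule's coordinates rotates and shifts the network's outputs correspondingly rather than changing them arbitrarily.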

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, Ph.D. (MIT) (https://www.linkedin.com/in/dr-keith-duggar/).