- Feb. 24, 2022, 2:30 pm US/Central
- Dan Roberts, MIT
- Ryan Janish
Deep learning is an exciting approach to modern artificial intelligence based on artificial neural networks. The goal of this talk is to provide a blueprint — using tools from physics — for theoretically analyzing deep neural networks of practical relevance. This task will encompass both understanding the statistics of initialized deep networks and determining the training dynamics of such an ensemble when learning from data. Borrowing from the “effective theory” framework of physics and developing a perturbative 1/n expansion around the limit of infinite hidden-layer width, we will find a principle of sparsity that lets us describe the effectively-deep, large-but-finite-width networks used in practice.
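To make the role of the 1/n expansion concrete, here is a minimal numerical sketch; it is not taken from the talk, and all names and parameter choices are illustrative. Over an ensemble of randomly initialized two-layer tanh networks, the output distribution becomes Gaussian as the hidden-layer width n goes to infinity, and the leading non-Gaussian correction — measured here by the excess kurtosis, a fourth-cumulant statistic that vanishes for a Gaussian — falls off as 1/n.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis_of_output(n, n_trials=1_000_000, batch=100_000):
    """Estimate the excess kurtosis of a random two-layer tanh network's
    scalar output over the ensemble of initializations (illustrative setup).

    For a single fixed input and Gaussian weights of variance 1/fan_in,
    the first-layer preactivations are exactly unit Gaussians at init,
    so we sample them directly instead of materializing the input layer.
    """
    m2 = m4 = 0.0
    done = 0
    while done < n_trials:
        b = min(batch, n_trials - done)
        u = rng.standard_normal((b, n))                # hidden preactivations
        w = rng.normal(0.0, 1.0 / np.sqrt(n), (b, n))  # readout weights, variance 1/n
        z = np.einsum("bn,bn->b", w, np.tanh(u))       # one scalar output per init
        m2 += np.sum(z**2)
        m4 += np.sum(z**4)
        done += b
    m2 /= n_trials
    m4 /= n_trials
    return m4 / m2**2 - 3.0  # zero for a Gaussian distribution

for n in [4, 16, 64]:
    kappa = excess_kurtosis_of_output(n)
    # If the leading correction scales as 1/n, then n * kappa is roughly constant.
    print(f"width n={n:3d}  excess kurtosis = {kappa:+.4f}  n * kurtosis = {n * kappa:+.3f}")
```

As the width grows, the excess kurtosis shrinks while n times it stays roughly constant: the numerical signature of a leading 1/n correction to the infinite-width Gaussian limit, which the perturbative expansion organizes systematically.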
This talk is based on the book “The Principles of Deep Learning Theory,” co-authored with Sho Yaida and drawing on research done in collaboration with Boris Hanin. It will be published this summer by Cambridge University Press.