[finished] ipi seminar 17:00-18:30, Tuesday July 13, 2021

News 2021/06/15

知の物理学研究センター / Institute for Physics of Intelligence (ipi)


【Speaker】Masaaki IMAIZUMI (The University of Tokyo)

【Date】July 13 (Tuesday), 17:00-18:30 JST

【Title】"Generalization Analysis of Deep Learning: Implicit Regularization and Over-parameterization"

【Abstract】Deep learning achieves high generalization performance, but a theoretical understanding of its principles is still developing. In this talk, I will present two theoretical results on this topic: (i) implicit regularization induced by the shape of the loss surface, and (ii) double descent for deep models.

(i) Implicit regularization refers to the idea that a learning algorithm implicitly constrains the degrees of freedom of neural networks. However, the specific form of implicit regularization achieved by deep neural networks has not been identified. In this work, we show theoretically that when a loss surface has many local minima satisfying certain assumptions, its shape itself constrains the learning algorithm and thereby induces regularization. Under this condition, we further show that the generalization error of deep neural networks admits an upper bound independent of the number of parameters.
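As background for the general notion of implicit regularization (a minimal sketch, not part of the talk's specific result): for overparameterized linear least squares, plain gradient descent started from zero never leaves the row space of the data, so among the infinitely many interpolating solutions it converges to the one of minimum norm, even though no explicit penalty is used.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 100  # overparameterized: more parameters than samples
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Gradient descent on the squared loss, starting from zero.
# Each gradient X.T @ residual lies in the row space of X,
# so the iterate can never acquire a component outside it.
w = np.zeros(p)
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / n

# Minimum-norm interpolator, computed in closed form.
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)
```

After convergence, `w` interpolates the training data and coincides with `w_min_norm`: the algorithm, not an explicit regularizer, picked out the constrained solution.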
 
(ii) Asymptotic risk analysis, including double descent, is a theoretical framework for analyzing the generalization error of models with excessive parameters. Although it has attracted much attention, existing analyses are limited to models that are linear in their features, such as random feature models. We show that, for a family of models without this linearity constraint, the upper bound of the generalization error follows the theory of asymptotic risk. By examining our regularity condition, we show that specific nonlinear models, such as parallelized deep neural networks, obey our result.
 
※To receive the Zoom invitation and monthly reminders, please register via this Google Form: https://forms.gle/dqxhpsZXLNYvbSB38
Your e-mail address will be used for this purpose only; you can unsubscribe at any time, and we will not send more than three e-mails per month.
 
 
Tilman HARTWIG, Takashi TAKAHASHI & Ken NAKANISHI
 
 