ipi seminar 10:30-12:00, Tuesday Nov. 21, 2023

知の物理学研究センター / Institute for Physics of Intelligence (ipi)

Nov 21, 2023 10:30 - 12:00 (JST)

Liu Ziyin /The University of Tokyo

“Understand and analyze deep learning through the lens of symmetry”

Due to common architecture designs, symmetries exist extensively in contemporary neural networks. In this presentation, we discuss the importance of loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models. We present a theory of how symmetry relates to the learning process and outcome of neural networks, and show that every mirror reflection symmetry of the loss function leads to a structured constraint, which becomes a favored solution when either the weight decay or the gradient noise is large. As direct corollaries, we show that rescaling symmetry leads to sparsity, rotation symmetry leads to low rankness, and permutation symmetry leads to homogeneous ensembling. We apply our results to understand the loss function, the learning dynamics, and algorithm design in deep learning. This presentation is partially based on the recent work: Symmetry Leads to Structured Constraint of Learning (arXiv:2309.16932).
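The sparsity corollary above can be illustrated with a minimal numerical sketch (not the speaker's code; a toy model chosen here for illustration). A scalar two-factor model w = u·v has the rescaling symmetry (u, v) → (a·u, v/a); minimizing the weight-decay term over a gives min_a (a·u)² + (v/a)² = 2|u·v|, so L2 decay on the factors acts like an L1 penalty on w, and with large decay the mirror-symmetric point u = v = 0 becomes the favored sparse solution:

```python
def train(target, gamma, lr=0.01, steps=20000):
    """Gradient descent on (u*v - target)**2 + gamma*(u**2 + v**2)."""
    u, v = 0.5, 0.5  # symmetric initialization
    for _ in range(steps):
        w = u * v
        grad_w = 2 * (w - target)        # d/dw of (w - target)**2
        gu = grad_w * v + 2 * gamma * u  # chain rule + weight decay on u
        gv = grad_w * u + 2 * gamma * v  # chain rule + weight decay on v
        u, v = u - lr * gu, v - lr * gv
    return u * v

# Small weight decay: the product w stays near the target.
print(train(target=1.0, gamma=0.01))
# Large weight decay: the solution collapses to the symmetry-induced
# constraint u = v = 0, i.e. an exactly sparse product.
print(train(target=1.0, gamma=1.0))
```

The same mechanism, applied to each symmetry of the loss, is what yields the low-rank and ensembling constraints mentioned in the abstract.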

*To receive the Zoom invitation and monthly reminders, please register via this Google Form: https://forms.gle/dqxhpsZXLNYvbSB38

Your e-mail address will be used for this purpose only; you can unsubscribe at any time, and we will not send more than three e-mails per month.

Takashi Takahashi and Ken Nakanishi
