[finished] ipi seminar 10:30-12:00, Thursday Nov. 24, 2022

知の物理学研究センター / Institute for Physics of Intelligence (ipi)

【Date】
10:30-12:00, Thur. Nov. 24

【Speaker】
Kenji Harada / Kyoto University

【Title】
"Compressing neural networks by tensor networks"

【Abstract】
Neural networks in artificial intelligence are computational models, inspired by biological neural networks, that perform various learning tasks. They have been central to recent breakthroughs in artificial intelligence and are the subject of intense study. Recent neural networks tend to be large, with many parameters, so compressing them is desirable. In this talk, I will introduce studies in which tensor networks are applied to compress neural networks [1,2]. Tensor networks can compress systems with many strongly correlated degrees of freedom, such as quantum states. I will also report on recent research into various aspects of tensor networks in neural networks.
References:
[1] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov, "Tensorizing Neural Networks", NIPS (2015).
[2] Z.-F. Gao, S. Cheng, R.-Q. He, Z. Y. Xie, H.-H. Zhao, Z.-Y. Lu, and T. Xiang, "Compressing deep neural networks by matrix product operators", Phys. Rev. Res. 2, 023300 (2020).
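
The idea behind refs. [1] and [2] is to reshape the weight matrix of a dense layer into a higher-order tensor and factorize it as a tensor train / matrix product operator, so that the parameter count scales with the bond dimension rather than with the full matrix size. The NumPy sketch below is an illustration of that idea, not code from the talk; the layer size, index factorization, and rank cap are arbitrary choices. It builds the factorization by sequential truncated SVDs and reports the parameter count and reconstruction error. A random matrix compresses poorly under truncation; the observation in the references is that trained weights often carry enough structure to be compressed with little loss in accuracy.

import numpy as np

def mpo_decompose(W, row_dims, col_dims, max_rank=8):
    """Split W (prod(row_dims) x prod(col_dims)) into MPO/tensor-train cores.

    Core k has shape (r_{k-1}, row_dims[k], col_dims[k], r_k), with r_0 = r_d = 1.
    """
    d = len(row_dims)
    T = W.reshape(*row_dims, *col_dims)
    # Interleave the row and column indices as (m1, n1, m2, n2, ..., md, nd).
    T = T.transpose([i for k in range(d) for i in (k, d + k)])
    cores, rank = [], 1
    for k in range(d - 1):
        T = T.reshape(rank * row_dims[k] * col_dims[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(max_rank, len(S))  # truncate the bond dimension
        cores.append(U[:, :r].reshape(rank, row_dims[k], col_dims[k], r))
        T = np.diag(S[:r]) @ Vt[:r]
        rank = r
    cores.append(T.reshape(rank, row_dims[-1], col_dims[-1], 1))
    return cores

def mpo_to_matrix(cores, row_dims, col_dims):
    """Contract the cores back into a dense matrix (only to check the error)."""
    d = len(row_dims)
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=1)  # contract neighboring bond indices
    T = T.reshape([dim for pair in zip(row_dims, col_dims) for dim in pair])
    # Undo the interleaving: back to (m1, ..., md, n1, ..., nd), then to a matrix.
    T = T.transpose(list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2)))
    return T.reshape(int(np.prod(row_dims)), int(np.prod(col_dims)))

# Example: a 256 x 256 dense layer viewed as a (4,4,4,4) x (4,4,4,4) tensor.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
row_dims, col_dims = (4, 4, 4, 4), (4, 4, 4, 4)
cores = mpo_decompose(W, row_dims, col_dims, max_rank=8)
W_approx = mpo_to_matrix(cores, row_dims, col_dims)
n_params = sum(core.size for core in cores)
err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"parameters: {W.size} -> {n_params}, relative error {err:.3f}")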


*To receive the Zoom invitation and monthly reminders, please register via this Google form: https://forms.gle/dqxhpsZXLNYvbSB38
Your e-mail address will be used for this purpose only; you can unsubscribe at any time, and we will not send more than three e-mails per month.
 
Tilman HARTWIG, Ken NAKANISHI, Shinichiro AKIYAMA and Takashi TAKAHASHI
 