[Concluded] iπ Seminar [Hybrid Format] Monday, April 21, 2025, 13:30–15:00
知の物理学研究センター / Institute for Physics of Intelligence (iπ)
【日時/Date】
Monday, April 21, 13:30–15:00
【場所/Venue】
Seminar Room 913, 9th floor, Faculty of Science Bldg. 1 & Zoom
【発表者/Speaker】
Prof. Taro Toyoizumi (RIKEN)
【タイトル/Title】
"Modeling how the brain learns to represent the world"
【概要/Abstract】
Our adaptive behavior is underpinned by activity-dependent synaptic plasticity—a fundamental mechanism that enables neural circuits to refine their computations. One of the brain’s critical capabilities is developing internal models of the world. In this talk, I examine three key aspects of how these models are learned.
The first aspect is dimensionality reduction, a process that extracts a concise summary of the world. Traditional linear methods fall short when it comes to separating entangled objects. To overcome this limitation, we developed a three-factor synaptic plasticity model tailored for nonlinear dimensionality reduction, implemented in a three-layer neural network akin to the Drosophila olfactory system. This model effectively approximates the t-distributed stochastic neighbor embedding (t-SNE) algorithm from machine learning and produces neural representations that align with experimental observations in Drosophila.
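As background for the first aspect, the objective being approximated is that of t-SNE: match pairwise affinities of the data with heavy-tailed affinities of a low-dimensional embedding by descending the KL divergence. The following is a minimal illustrative sketch of that machine-learning objective in NumPy, not the speaker's three-factor plasticity model; the fixed Gaussian bandwidth `sigma` and the plain gradient step are simplifying assumptions of this sketch.

```python
import numpy as np

def affinities(X, sigma=1.0):
    """Normalized Gaussian pairwise affinities P of the high-dimensional data."""
    D = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    P = np.exp(-D / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

def tsne_step(P, Y, lr=0.05):
    """One gradient-descent step on KL(P || Q), with a Student-t kernel in the embedding."""
    d = np.sum((Y[:, None] - Y[None, :]) ** 2, axis=-1)
    W = 1.0 / (1.0 + d)              # heavy-tailed low-dimensional kernel
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    M = (P - Q) * W                  # attraction (P large) vs. repulsion (Q large)
    grad = 4.0 * (np.diag(M.sum(axis=1)) - M) @ Y
    return Y - lr * grad

def tsne_kl(P, Y):
    """KL divergence between data affinities P and embedding affinities Q."""
    d = np.sum((Y[:, None] - Y[None, :]) ** 2, axis=-1)
    W = 1.0 / (1.0 + d)
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))
```

Iterating `tsne_step` from a small random embedding reduces `tsne_kl`, pulling together points with similar inputs while pushing apart dissimilar ones; the plasticity model in the talk realizes a comparable computation with local, biologically constrained learning rules.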
The second aspect involves learning the association between tangible and conceptual representations of memories. Drawing inspiration from the dual input pathways to the hippocampus, we created a dual-pathway associative memory model. In our model, the direct pathway encodes relationships between different objects, while the indirect pathway enhances the ability to distinguish among them. Additionally, by modulating the inhibitory tone that governs output sparsity, the model can control the form of recall, supporting both homo- and hetero-associative processes for tangible and conceptual memories.
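The idea that an inhibitory tone controlling output sparsity can change the form of recall can be illustrated with a textbook outer-product associative memory. This is a generic sketch, not the dual-pathway hippocampal model itself; the top-k threshold standing in for global inhibition is an assumption of this sketch.

```python
import numpy as np

def store_pairs(keys, values):
    """Outer-product ('Hebbian') hetero-associative weights mapping keys to values."""
    return values.T @ keys

def recall(W, cue, n_active):
    """Recall a value from a key cue. A global inhibitory threshold keeps only
    the n_active most strongly driven output units on: a sparser output gives
    a more specific recall, a denser one a broader, more conceptual recall."""
    h = W @ cue
    thresh = np.sort(h)[-n_active]
    return (h >= thresh).astype(float)
```

With the inhibitory tone matched to the stored pattern's sparsity, cueing with a key retrieves its paired value exactly; relaxing the inhibition (larger `n_active`) yields a broader output that still contains the stored pattern.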
The third aspect addresses representing a probabilistic world. Cortical neurons exhibit irregular activity whether or not explicit stimuli are present—a phenomenon that theoretical models attribute to chaotic dynamics. Building on this insight, we developed a recurrent neural network that harnesses chaotic fluctuations to sample from Bayesian posterior distributions. Using a cue-integration task, we demonstrated that networks trained with biologically plausible learning algorithms can reliably represent target Bayesian posteriors through sampling despite the sensitivity to initial conditions due to chaos.
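The sampling idea in the cue-integration setting can be sketched with a simple stand-in dynamical system: two Gaussian cues combine into a precision-weighted posterior, and a noisy relaxation process visits states with posterior-matched statistics. This is a minimal illustration only; using Langevin noise in place of the trained network's chaotic fluctuations, and a one-dimensional latent variable, are assumptions of this sketch.

```python
import numpy as np

def cue_posterior(x1, s1, x2, s2):
    """Gaussian cue integration: precision-weighted posterior mean and variance."""
    p1, p2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    var = 1.0 / (p1 + p2)
    return var * (p1 * x1 + p2 * x2), var

def sample_posterior(mean, var, n=20000, dt=0.1, seed=0):
    """Draw correlated samples from the posterior via Langevin dynamics;
    the intrinsic noise term stands in for the chaotic fluctuations of
    the trained recurrent network (a simplification of this sketch)."""
    rng = np.random.default_rng(seed)
    z, out = mean, np.empty(n)   # start at the mean to avoid burn-in
    for i in range(n):
        z += -dt * (z - mean) / var + np.sqrt(2.0 * dt) * rng.normal()
        out[i] = z
    return out
```

The empirical mean and variance of the trajectory converge to those of the Bayesian posterior, which is the sense in which a fluctuating dynamical system can "represent" a distribution by sampling rather than by an explicit parametric readout.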
Together, these studies shed light on how nonlinear dimensionality reduction, hetero-associative memory, and chaotic probabilistic sampling might contribute to the neural coding of internal models.
- K. Yoshida and T. Toyoizumi, "A biological model of nonlinear dimensionality reduction," Sci. Adv. 11, eadp9048 (2025). DOI:10.1126/sciadv.adp9048
- L. Kang and T. Toyoizumi, "Distinguishing examples while building concepts in hippocampal and artificial networks," Nat. Commun. 15, 647 (2024). DOI:10.1038/s41467-024-44877-0
- Y. Terada and T. Toyoizumi, "Chaotic neural dynamics facilitate probabilistic computations through sampling," PNAS 121:18, e2312992121 (2024). DOI:10.1073/pnas.2312992121
※ This seminar will be held in English.
If you would like to receive announcements for these talks, including the Zoom link, please enter your email address in the Google form below. The information you register will be used only for distributing announcements.
Coffee and light refreshments will be provided at the in-person venue; please feel free to enjoy them.
Registration form: https://forms.gle/xnLmd9Kc1BaaNPgq8
List of past talks: https://www.phys.s.u-tokyo.ac.jp/about/17106/
Organizer: 髙橋昂, Institute for Physics of Intelligence