Abstracts
-
Andre Bastos
Department of Psychology and
Vanderbilt Brain Institute,
Vanderbilt University
An updated perspective on canonical microcircuits for predictive coding -
Zenas Chao
International Research Center for
Neurointelligence (WPI-IRCN), UTIAS,
The University of Tokyo
Searching for the Complete Prediction Object
The brain builds an understanding of the external world through an internal model, which is refined by generating predictions and updating them when prediction errors occur. A complete prediction should specify what will happen, when it will happen, and with what probability. I refer to the integration of these components as a “complete prediction object.” In this talk, I will present a series of studies across animal microcircuits, human macrocircuits, and computational models that aim to identify how such complete prediction objects may be represented in the brain, and how prediction signals are coordinated across different levels of neural organization. -
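Purely as an illustrative aside (not part of the abstract): the three components of a complete prediction object could be sketched as a small data structure, with the surprise of an outcome computed when the prediction is tested. All names below are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class PredictionObject:
    """Hypothetical container for a 'complete prediction':
    what is expected, when, and with what probability."""
    content: str        # what will happen (e.g., a stimulus identity)
    latency_ms: float   # when it is expected to happen
    probability: float  # how strongly it is expected

    def surprise(self, occurred: bool) -> float:
        """Shannon surprise of the observed outcome under this prediction."""
        p = self.probability if occurred else 1.0 - self.probability
        return -math.log(p)

# A tone predicted at 500 ms with 80% confidence: its omission is
# more surprising than its occurrence.
pred = PredictionObject(content="tone_A", latency_ms=500.0, probability=0.8)
print(round(pred.surprise(occurred=False), 3))  # prints 1.609
```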
Anne K. Churchland
UCLA Department of Neurobiology
Life can only be understood backwards: predicting future outcomes from past actions
The anterior cingulate cortex (ACC) is believed to support reward-based learning in uncertain, dynamic environments. However, it remains unclear whether ACC neurons encode behavioral history in deterministic perceptual decision-making tasks and, if so, how history signals interact with action signals in ACC. We measured the activity of ACC neurons in freely moving mice performing visual evidence accumulation. Many ACC neurons had mixed selectivity: they were strongly driven by non-linear combinations of previous choices and outcomes (trial history). Trial history could be decoded well from population activity, and neural representations of trial history remained stable over seconds. Using linear encoding models, we demonstrate that both trial history and movements strongly drive neural activity in ACC. The neural dynamics encoding trial history were low-dimensional, similar between sessions from the same subjects, and conserved across different subjects. These findings suggest that the ACC implements trial-history monitoring while simultaneously tracking ongoing actions. -
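As a hedged illustration of the decoding idea described above (synthetic data, not the authors' analysis): a linear decoder can recover a binary trial-history variable, such as the previous choice, from simulated population activity in which only a few neurons carry the history signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'population activity': 200 trials x 50 neurons, where a
# subset of neurons carries a previous-choice (trial-history) signal.
n_trials, n_neurons = 200, 50
prev_choice = rng.integers(0, 2, size=n_trials)        # trial-history label
weights_true = np.zeros(n_neurons)
weights_true[:10] = 1.0                                # 10 'history' neurons
activity = rng.normal(size=(n_trials, n_neurons))
activity += np.outer(prev_choice - 0.5, weights_true)  # embed the signal

# Plain least-squares linear decoder, fit on the first 150 trials.
train, test = slice(0, 150), slice(150, None)
coef, *_ = np.linalg.lstsq(activity[train], prev_choice[train] - 0.5, rcond=None)
pred = (activity[test] @ coef > 0).astype(int)
accuracy = (pred == prev_choice[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```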
Hiroshi Makino
Department of Physiology,
Keio University School of Medicine
Learning in intelligent systems
Recent years have witnessed a renewed convergence between artificial intelligence (AI) and neuroscience. AI has offered new theoretical frameworks for understanding how the brain solves complex computational problems, while neuroscience has inspired novel algorithms and architectures that allow machines to emulate biological cognitive abilities. Despite this growing synergy, direct comparisons between artificial and biological intelligent systems remain limited. We address this gap by examining behavior and neural representations across multiple domains of intelligence in both systems. By deriving theoretical predictions from AI models and empirically testing them in mice, we demonstrate that the brain employs representations strikingly similar to those observed in artificial systems. Specifically, we identify parallel mechanisms in the compositional representation of subtasks and shared value representations during cooperation. These findings highlight fundamental similarities between biological and artificial intelligence and emphasize the importance of comparative approaches for uncovering the general principles of cognition. -
Jun Tani
Okinawa Institute of Science and Technology
Exploring robotic minds by extending the idea of predictive coding and active inference
The focus of my research has been to investigate how cognitive agents can develop structural representations and functions through iterative interaction with the world, exercising agency and learning from the resulting perceptual experience. For this purpose, my team has developed various models analogous to predictive coding and active inference frameworks based on the free energy principle. These models have been used to conduct diverse robotics experiments, including goal-directed planning and replanning in a dynamic environment, social embodied interactions with others, development of higher cognitive competencies such as executive control of attention and working memory, and embodied language. The talk focuses on a set of emergent phenomena we observed in these robotics experiments. These findings may offer non-trivial accounts of embodied cognition, including issues of subjective experience. -
Pablo Lanillos
Neuro AI and Robotics group,
Cajal Neuroscience Center,
Spanish National Research Council
Active Inference in Robotics
Traditional robotics research has focused on performing repetitive tasks as precisely as possible. This has produced very reliable machines that nevertheless struggle to adapt to environmental changes. Conversely, biological intelligence, understood as adaptation, has favored reactive mechanisms that deal with unexpected circumstances to maintain an optimal state for survival. Over time, these mechanisms have evolved into the backbone of natural intelligence. How can we achieve natural intelligence in artificial systems without attending to these basic mechanisms of perception and control? Drawing inspiration from how the brain processes sensory information and generates control actions, my research group (https://neuro-ai-robotics.github.io/) focuses on enhancing artificial intelligence for robotics. In the long term, we want to provide robots with human-like body perception and action. In this talk, I will summarize state-of-the-art Active Inference approaches for robotics through examples (e.g., humanoid control, multi-drone navigation, neuromorphic control), which make explicit how understanding human embodied cognition leads to novel research directions that improve current robotics and AI. I will end with new applications that we are pursuing in currently running projects, such as Metatool, where robots invent their own tools. -
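To make the general idea concrete, here is a minimal sketch of prediction-error-based control in the active inference spirit: a single joint is driven toward a desired angle because the belief is pulled toward a prior and the action cancels the resulting sensory prediction error. This is an illustrative toy, not any of the group's actual models; the gains and variances are made up.

```python
# Minimal 1-DOF active-inference-style controller: the belief mu is pulled
# toward a desired joint angle (the prior), and the action moves the joint
# to cancel the resulting sensory prediction error.
target = 1.0                  # desired joint angle (prior belief), rad
angle = 0.0                   # true joint angle
mu = 0.0                      # internal belief about the angle
sigma_y, sigma_mu = 1.0, 1.0  # sensory and prior variances
dt, k_mu, k_a = 0.01, 5.0, 5.0

for _ in range(2000):
    y = angle                              # noiseless proprioception, for clarity
    eps_y = y - mu                         # sensory prediction error
    eps_mu = target - mu                   # prior prediction error
    mu += dt * k_mu * (eps_y / sigma_y + eps_mu / sigma_mu)  # perceptual update
    angle += dt * k_a * (-eps_y / sigma_y)                   # action update

print(round(angle, 3))  # prints 1.0: the joint settles at the desired angle
```

Note that the action never sees the target directly: it only suppresses the sensory prediction error, while the prior drags the belief (and hence the body) toward the goal.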
Philip R. Corlett
Yale University
Department of Psychiatry
Wu Tsai Institute for Cognition
Predictive Coding and Delusions: Specific Contents from General Mechanisms
It has been argued that social processes are relevant to belief formation and maintenance and thence to persecutory delusions – the fixed false beliefs that others intend harm. I call this the social turn in delusions research. It suggests that paranoia is the purview of a specialized mechanism for coalitional cognition – thinking about group membership and reputation management. I suggest instead that a simpler, pseudo-social, learning mechanism may underwrite persecutory and other delusions. I make my case in terms of computations (prediction, not coalition), algorithm (association rather than recursion), and implementation (dopaminergic domain-general rather than social-specific regions). I will conclude by presenting new experimental data that clarify the contributions of domain-general versus social-specific processes to psychotic symptoms. -
Xiaosi Gu
Yale School of Medicine
The Craving Brain: Computational Mechanisms and Implications for Psychiatry
Craving is the subjective urge for an external stimulus, whether drugs, alcohol, food, or social others. Addiction neuroscience has traditionally used cue-reactivity paradigms to study craving, where craving and reward sensitivity are often intertwined. Here, I will present how computational modeling may help us delineate craving and reward processing and quantify their bidirectional interactions, using cannabis and alcohol as test cases. Finally, I will present more recent work on social craving – the subjective urge for social contact, which shares neural substrates with drug craving and is closely linked to mental health. -
Takuya Isomura
RIKEN
-
Misako Komatsu
Institute of Integrated Research, Institute of Science Tokyo
Hierarchical representation of auditory predictive signal in the cerebral cortex of non-human primates
In everyday life, we consciously and unconsciously predict how the environment will change based on sensory inputs. This process is often referred to as “predictive coding”. To investigate cortex-wide auditory prediction processing, we developed a cortex-wide electrocorticographic (ECoG) array for the common marmoset, a small non-human primate. The array makes it possible to capture global cortical information processing at high resolution: sub-millisecond in time and millimeter-scale in space. We have applied this system to marmosets exposed to several auditory stimuli involving different auditory predictions, such as the classical, roving, and local-global oddball paradigms. In this talk, I will present an overview of cortical information dynamics underlying auditory predictive coding. In particular, I will show that the frontotemporal circuit is involved in auditory prediction and that activity in the prefrontal cortex is modulated by the complexity of the stimuli. -
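For readers unfamiliar with the local-global paradigm mentioned above, here is a hedged sketch (not the authors' stimulus code) of how such a block of trials might be generated; the pattern strings and probabilities are illustrative only.

```python
import random

random.seed(0)

def make_block(standard, deviant, n_trials=100, p_deviant=0.2):
    """Build one block: the 'standard' 5-tone pattern is frequent
    (globally expected) and the 'deviant' pattern is rare."""
    return [deviant if random.random() < p_deviant else standard
            for _ in range(n_trials)]

# In this block, AAAAB is the global standard, so its locally deviant
# final 'B' is globally expected; a rare AAAAA trial violates only the
# global rule, dissociating local from global prediction errors.
block = make_block(standard="AAAAB", deviant="AAAAA")
n_global_deviants = sum(trial == "AAAAA" for trial in block)
print(n_global_deviants, "globally deviant trials out of", len(block))
```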
Teppei Ebina
The University of Tokyo
-
Hitoshi Okamoto
RIKEN Center for Brain Science
Waseda University
Inst. of Neuropsychiatry
The evolutionarily conserved basic scaffolds of mind in the zebrafish brain
The recent discovery of the conservation of basic brain structure throughout evolution has made the adult zebrafish brain a suitable model for studying conserved brain functions. To study the mechanisms of the cortico-basal ganglia circuit in decision-making, we generated transgenic lines targeting specific subpopulations of this circuit. Combining a closed-loop virtual-reality system with two-photon Ca2+ imaging during Go/No-go tasks, we have identified a neural ensemble encoding the "state prediction error" between an ongoing perceptual state and an expected ideal state. The state prediction error activates NPY-positive neurons in the globus pallidus and induces state instability in the pallial neurons, driving a transition to the most favorable (stable) state, in which the prediction error is minimized. -
Masahiro Suzuki
The University of Tokyo
-
Akihiro Funamizu
Institute for Quantitative Biosciences (IQB),
The University of Tokyo
Neural encoding of prior knowledge in the mouse cerebral cortex
Our decision making relies on prior knowledge, especially when sensory inputs are uncertain. Prior knowledge here includes the reward expectation for each action, state estimation using transition probabilities, and state estimation from sensory inputs. In particular, optimal decisions that integrate state estimation with sensory inputs are often characterized as Bayesian inference. This talk overviews how the mouse cerebral cortex represents prior knowledge, sensory inputs, and their integration for action. In summary, although the posterior parietal cortex is a key region for dynamic Bayesian inference, prior knowledge is widely distributed across cortical regions, whereas the encoding of actions and sensory inputs is relatively localized.
I will then introduce our recent efforts to investigate how such prior knowledge is used to learn new tasks efficiently, and how the prior is acquired through experience. These studies use artificial neural networks to model mouse choice behavior. Our findings provide a neural basis for Bayesian computation in the mouse brain. -
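The Bayesian integration of prior knowledge with sensory evidence described above can be illustrated with a minimal worked example (not the authors' model; the numbers are arbitrary):

```python
import numpy as np

prior = np.array([0.7, 0.3])   # prior belief over two hidden states

def posterior(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

# Ambiguous sensory evidence slightly favouring state 1:
likelihood = np.array([0.4, 0.6])
post = posterior(prior, likelihood)
print(post.round(3))  # prints [0.609 0.391]: the prior still favours state 0

# A transition model turns the posterior into the next trial's prior:
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])     # state-transition probabilities
next_prior = T.T @ post        # predicted belief before the next observation
```

The last two lines sketch the "dynamic" part of dynamic Bayesian inference: beliefs are propagated through a transition model before the next sensory observation arrives.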
Hidehiko Takahashi
Department of Psychiatry and Behavioral Sciences,
Graduate School of Medical and Dental Sciences
Institute of Science Tokyo
Reconstruction of illusion and hallucination
We investigated the neural basis of auditory illusions and hallucinations by reconstructing subjective auditory content from fMRI activity. In healthy participants, signal detection analysis revealed that "false alarms" (illusory voice perception) yielded reconstructed sounds with distinct voice-like features. To overcome data limitations in clinical settings, we developed a neural converter enabling cross-subject decoding. Applying this to a schizophrenia patient with frequent auditory hallucinations, we tried to reconstruct hallucinations occurring during both rest and natural sound listening. In addition, we are attempting to reconstruct visual hallucinations in Charles Bonnet syndrome. We are also working on cross-modal decoding, in this case decoding auditory imagery evoked by visual stimuli. We asked participants to watch silent videos, and we attempted to decode the imagined sounds from brain activity. Quantitative evaluations indicate that the higher auditory areas represent the features of the imagined sounds. -
Kenji Doya
Okinawa Institute of Science and Technology