Currently, three ongoing projects in our lab focus on the neural mechanisms of (1) adaptive inductive bias, (2) sequential planning during virtual social interaction, and (3) abstract navigation and mental simulation.
1. Generalization and Adaptive Inductive Bias
A hallmark of human intelligence is the ability to learn and generalize from limited experience. We study the inductive biases that humans and animals use to generalize across conditions, and how these biases adapt to environmental statistics. We developed a multi-feature category learning task to test how inductive biases about feature informativeness adapt across stimulus sets. This design allows us to disentangle the fast learning of specific stimulus-response mappings from the slow adaptation of inductive biases. We use neural network modeling and multi-channel electrophysiological recording to uncover the computational and neural mechanisms underlying learning and generalization.
Movie: Example trials of the cross-talk classification task used to study the neural mechanisms of adaptive inductive bias. The small red cross shows the animal's gaze.
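The fast/slow separation described above can be illustrated with a toy model. This is a minimal sketch, not the lab's actual model: fast connection weights learn stimulus-response mappings within each stimulus set (and reset when the set changes), while a slow per-feature gain, standing in for the inductive bias about feature informativeness, persists and adapts across sets. All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_sets=20, n_trials=200, n_features=4, fast_lr=0.3, slow_lr=0.01):
    """Toy sketch: fast weights learn stimulus-response mappings within a set;
    a slow per-feature gain (the 'inductive bias') persists across sets.
    Feature 0 is always diagnostic, so its gain should grow over sets."""
    gain = np.ones(n_features)           # slow inductive bias: per-feature gain
    accs = []
    for _ in range(n_sets):
        w = np.zeros(n_features)         # fast weights, reset for each new set
        correct = 0
        for _ in range(n_trials):
            x = rng.choice([-1.0, 1.0], size=n_features)
            y = x[0]                     # category depends only on feature 0
            s = w @ (gain * x)
            pred = np.sign(s) or 1.0     # break the tie on the very first trial
            correct += pred == y
            err = y - np.tanh(s)
            w += fast_lr * err * gain * x     # fast within-set learning
            gain += slow_lr * err * w * x     # slow adaptation, persists across sets
            gain = np.clip(gain, 0.1, None)
        accs.append(correct / n_trials)
    return gain, accs

gain, accs = simulate()
```

Because only feature 0 predicts the category, its gain outgrows the others, and later stimulus sets are learned faster, the qualitative signature of an adapted inductive bias.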
2. Competitive Social Interaction
Humans and other primates can use their social knowledge to predict the behavior of other agents and use these predictions to plan their own future behavior. To understand the neural basis of such inferences, we have trained rhesus monkeys to play a virtual competitive board game, 4-in-a-row. We use computational modeling based on reinforcement learning and other machine learning methods to understand how the animals flexibly adjust their strategies during this game, and we investigate how neural activity in the prefrontal cortex, basal ganglia, and hippocampus might underlie the corresponding computations.
Movie: Example plays of the 4-in-a-row game. The yellow disk shows the cursor controlled by the animal, and the red cross its gaze. The animal and the computer opponent alternately place cyan and gray stones.
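One common way to model strategy adjustment in games like this is a softmax policy over candidate moves, scored by weighted board features, with the weights updated from game outcomes. The sketch below is a generic, hypothetical illustration of that idea (a REINFORCE-style update on synthetic move features), not the lab's fitted model; the two features, the win rule, and all parameters are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def play_game(w, n_moves=10):
    """One simulated game. Each candidate move has two hypothetical features
    (e.g., length of own line extended, length of opponent line blocked).
    The 'win' rule favors games whose chosen moves had large feature sums."""
    grads = np.zeros_like(w)
    quality = 0.0
    for _ in range(n_moves):
        feats = rng.random((5, 2))          # 5 candidate moves, 2 features
        p = softmax(feats @ w)              # softmax policy over moves
        i = rng.choice(5, p=p)
        grads += feats[i] - p @ feats       # d log pi(i) / dw for the choice
        quality += feats[i].sum()
    win = 1.0 if quality / n_moves > 1.0 else 0.0
    return win, grads

def fit(episodes=2000, lr=0.05):
    """REINFORCE with a fixed baseline of 0.5 on the binary outcome."""
    w = np.zeros(2)
    wins = []
    for _ in range(episodes):
        win, g = play_game(w)
        w += lr * (win - 0.5) * g
        wins.append(win)
    return w, np.mean(wins[-200:])
```

Fitting such weights trial by trial to an animal's actual choices is what lets one track how the relative reliance on different board features shifts across sessions.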
3. Navigation and Mental Simulation
Flexible planning is essential in daily life, for example when the environment is dynamic or when behavioral goals change. To investigate the neural algorithms of flexible planning, we developed an abstract maze task that requires the subject to learn the abstract graph corresponding to the environment and to produce sequential choices as the behavioral goal changes randomly across trials. This task enables us to test, for example, how the animal might flexibly switch between model-free and model-based reinforcement learning algorithms. We are testing how the prefrontal cortex and hippocampus contribute to the evaluation and implementation of different learning algorithms, and how neural activity in these brain areas supports the mental simulation used to evaluate the outcomes of hypothetical choices.
Movie: Example trials of the abstract navigation task. The current fractal target in the center is flanked by two fractal choice targets, and the goal fractal is shown at the bottom of the screen. The small red cross shows the animal's gaze.
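The model-free vs. model-based contrast above can be made concrete on a toy graph. This sketch uses a hypothetical 6-node ring graph, not the lab's actual maze: a model-based planner that knows the graph can replan a shortest path immediately for any new goal (breadth-first search), whereas a model-free Q-learner only improves gradually, by relearning action values from experience.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

N = 6  # hypothetical ring graph with 6 states

def step(s, a):
    """Action 0 moves counterclockwise, action 1 clockwise around the ring."""
    return (s - 1) % N if a == 0 else (s + 1) % N

def model_based_plan(start, goal):
    """Model-based: breadth-first search over the known graph returns a
    shortest action sequence, for any goal, with no further learning."""
    paths, frontier = {start: []}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return paths[s]
        for a in (0, 1):
            s2 = step(s, a)
            if s2 not in paths:
                paths[s2] = paths[s] + [a]
                frontier.append(s2)

def q_learning_episode(Q, start, goal, eps=0.1, lr=0.5, gamma=0.9, max_steps=50):
    """Model-free: one epsilon-greedy Q-learning episode; returns steps taken."""
    s = start
    for t in range(max_steps):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = step(s, a)
        r = 1.0 if s2 == goal else 0.0
        target = r + (0.0 if s2 == goal else gamma * Q[s2].max())
        Q[s, a] += lr * (target - Q[s, a])
        if s2 == goal:
            return t + 1
        s = s2
    return max_steps

# The planner finds the 1-step clockwise route to goal 1 immediately; the
# Q-learner starts on the long counterclockwise route and must discover
# the short one through exploration.
Q = np.zeros((N, 2))
steps_per_episode = [q_learning_episode(Q, start=0, goal=1) for _ in range(300)]
```

Comparing an animal's trial-by-trial choices against the predictions of agents like these two is one standard way to infer which algorithm, or which mixture, is in control after a goal change.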