Professor, Princeton Neuroscience Institute
I study how people and animals learn from trial and error (and from rewards and punishments) to make decisions, combining computational, economic, neural, and behavioral perspectives. I focus on understanding how subjects cope with computationally demanding decision situations, notably choice under uncertainty and in tasks (such as mazes or chess) requiring many decisions to be made sequentially. In engineering, these are the key problems motivating reinforcement learning and Bayesian decision theory. I am particularly interested in using these computational frameworks as a basis for analyzing and understanding biological decision making. Some ongoing projects include:
Computational models in neuroscientific experiments
Computational models (such as reinforcement learning algorithms) are more than cartoons: they can provide detailed trial-by-trial hypotheses about how subjects might approach tasks such as decision making. By fitting such models to behavioral and neural data, and comparing different candidates, we can understand in detail the processes underlying subjects’ choices. I am interested in developing new techniques for such analyses, and applying them in behavioral and functional imaging experiments to study human decision making.
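To make the idea concrete, here is a minimal sketch of the generic technique: simulate choices from a simple Q-learning (delta-rule) model of a two-armed bandit, then recover its parameters by maximum likelihood. The task, parameter values, and function names are illustrative assumptions, not the code of any particular study.

```python
# Sketch: fitting a Q-learning model to trial-by-trial choice data by
# maximum likelihood. Hypothetical two-armed bandit task; all parameter
# values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def simulate_choices(alpha, beta, n_trials=500, p_reward=(0.7, 0.3), seed=0):
    """Generate synthetic bandit data from a Q-learner with softmax choice."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice rule
        c = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])                      # delta-rule update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_lik(params, choices, rewards):
    """Negative log-likelihood of the observed choices under the model."""
    alpha, beta = params
    q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        nll -= np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])
    return nll

choices, rewards = simulate_choices(alpha=0.3, beta=3.0)
fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(0.01, 1.0), (0.1, 20.0)])
alpha_hat, beta_hat = fit.x
```

Comparing the fitted likelihoods of different candidate models (penalized for their number of parameters) is the standard route to model comparison; the fitted trial-by-trial quantities, such as the prediction errors, can then serve as regressors for neural data.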
Interactions between multiple decision-making systems
The idea that the brain contains multiple, separate decision systems is as ubiquitous (in psychology, neuroscience, and even behavioral economics) as it is bizarre. For instance, much evidence points to competition between more cognitive and more automatic processes associated with different brain systems. Such competition has often been implicated in self-control issues such as drug addiction. But (as these examples suggest) having multiple solutions to the problem of making decisions actually compounds the decision problem, by requiring the brain to arbitrate between the systems. We are studying how this arbitration works using a combination of computational and experimental methods.
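One computational idea about arbitration can be sketched in a few lines: weight each system's value estimate by its reliability, measured as the inverse of a running average of its squared prediction errors. The two "systems" below (an incremental learner and a stand-in that assumes a known world model), and the particular weighting scheme, are illustrative assumptions rather than a specific published theory.

```python
# Sketch: reliability-weighted arbitration between two value estimators.
# System names, the weighting rule, and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
p_reward = 0.8                    # true reward probability of one option
alpha, tau = 0.1, 0.05            # learning rates for values and reliabilities

q_mf, q_mb = 0.0, 0.0             # "model-free" and "model-based" estimates
err_mf, err_mb = 1.0, 1.0         # running squared prediction errors

for _ in range(300):
    r = float(rng.random() < p_reward)
    d_mf = r - q_mf
    d_mb = r - q_mb
    q_mf += alpha * d_mf          # slow incremental (delta-rule) learner
    q_mb = p_reward               # stand-in: a system with an accurate world model
    err_mf += tau * (d_mf ** 2 - err_mf)
    err_mb += tau * (d_mb ** 2 - err_mb)

# Arbitration: trust the system whose predictions have been more reliable.
w = (1 / err_mf) / (1 / err_mf + 1 / err_mb)
q_combined = w * q_mf + (1 - w) * q_mb
```

The interesting empirical question is then whether behavior (and neural activity) tracks such a combined value, and how the weighting shifts with training, task demands, or pathology.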
Learning and neuromodulation
Much evidence has amassed for the idea that the neuromodulator dopamine serves as a teaching signal for reinforcement learning. This relatively firm characterization now provides a foothold for extensions in a number of exciting new directions. These include computational (e.g., how can this system balance the need to explore unfamiliar options against exploiting old favorites?), behavioral (how is dopaminergically mediated learning manifested, and how is it deficient in pathologies such as drug addiction or Parkinson’s disease?), and neural (what is the contribution of systems that interact with dopamine, such as serotonin and the prefrontal cortex?).
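The teaching-signal idea, and the explore/exploit tension, can both be seen in a minimal temporal-difference sketch: a reward prediction error (the quantity dopamine is thought to report) drives value updates, while a softmax temperature trades off exploration against exploitation. The task and all parameter values here are illustrative assumptions.

```python
# Sketch: TD-style reward prediction errors in a three-armed bandit, with
# a softmax choice rule governing exploration. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
p_reward = np.array([0.2, 0.5, 0.8])   # hypothetical reward probabilities
q = np.zeros(3)                        # learned values of the three options
alpha = 0.2                            # learning rate

def softmax(values, beta):
    """Higher beta -> exploit the current best; lower beta -> explore."""
    e = np.exp(beta * (values - values.max()))
    return e / e.sum()

td_errors = []
for _ in range(2000):
    p = softmax(q, beta=4.0)
    c = rng.choice(3, p=p)
    r = float(rng.random() < p_reward[c])
    delta = r - q[c]                   # reward prediction error ("teaching signal")
    q[c] += alpha * delta
    td_errors.append(delta)
```

In this framing, the open questions above become concrete: what sets the effective temperature (exploration), how are the prediction errors altered in disease, and which other neural systems shape or complement the signal.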