David Heeger

Cognitive and Computational Neuroscience, Vision, fMRI

Julius Silver, Roslyn S. Silver, and Enid Silver Winslow Professor
Professor of Psychology and Neural Science

I received my Ph.D. in computer science from the University of Pennsylvania. I was a postdoctoral fellow at MIT, a research scientist at the NASA-Ames Research Center, and an Associate Professor at Stanford before coming to NYU. I was awarded the David Marr Prize in computer vision in 1987, an Alfred P. Sloan Research Fellowship in neuroscience in 1994, the Troland Award in psychology from the National Academy of Sciences in 2002, and the Margaret and Herman Sokol Faculty Award in the Sciences from New York University in 2006. I was elected to the National Academy of Sciences in 2013.

The research in my lab spans an interdisciplinary cross-section of neuroscience (visual, cognitive, and computational neuroscience), psychology (psychophysics), and engineering (image processing, computer vision, computer graphics).

We develop computational theories of brain function. A variety of anatomical, physiological, and behavioral evidence suggests that the brain performs computations using modules that are repeated across species, brain areas, and modalities. We have developed and tested a model of canonical neural computation, called “the normalization model”. The normalization model was initially developed to explain stimulus-evoked responses of individual neurons in primary visual cortex (V1) and has since been applied to explain neural activity in a wide variety of neural systems ranging from fruit fly olfaction to human attention and decision-making. The defining characteristic of the model is that the response of each neuron is divided by a factor that includes a weighted sum of activity of a pool of neurons.
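To make that defining equation concrete, here is a minimal NumPy sketch of divisive normalization. The parameter names (normalization weights w, semi-saturation constant sigma, exponent n, gain gamma) follow common conventions for this class of model; they are illustrative choices, not a specific published parameterization.

```python
import numpy as np

def normalized_responses(drive, w, sigma=1.0, n=2.0, gamma=1.0):
    """Divisive normalization: each neuron's (exponentiated) input drive
    is divided by a factor that includes a weighted sum of the activity
    of a pool of neurons.

    drive : (N,) stimulus-evoked input drive, one value per neuron
    w     : (N, N) normalization-pool weights
    """
    drive_n = np.power(drive, n)
    pool = w @ drive_n                        # weighted sum over the pool
    return gamma * drive_n / (sigma**n + pool)

# Example: five neurons sharing a uniform normalization pool.
drive = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
w = np.full((5, 5), 1.0 / 5.0)
print(normalized_responses(drive, w))
```

Note how the division compresses the response range: a strongly driven neuron inflates the shared pool term and thereby suppresses the rest, which is the behavior that lets the same equation describe such different neural systems.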

I have recently developed a unified theoretical framework for guiding both neuroscience and artificial intelligence research. The theory offers an empirically testable framework for understanding how the brain accomplishes three key functions: (i) inference: perception is a nonconvex optimization that combines sensory input with prior expectations; (ii) exploration: inference relies on neural response variability to explore different possible interpretations; (iii) prediction: inference includes making predictions over a hierarchy of timescales. These three functions are implemented in a recurrent and recursive neural network, providing a role for feedback connections in cortex, and are controlled by state parameters hypothesized to correspond to neuromodulators and oscillatory activity.
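A one-dimensional toy sketch of the first two functions, under loose assumptions: inference is cast as descending an energy that combines a sensory measurement with a prior expectation, and noise injected into the descent plays the role of response variability that explores alternative interpretations. The energy and all parameter values below are hypothetical, chosen only for illustration; this is not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x, y, prior_mean, lam):
    # Quadratic data term (sensory input) plus quadratic prior term.
    return (y - x) ** 2 + lam * (x - prior_mean) ** 2

def noisy_gradient_descent(y, prior_mean, lam=0.5, lr=0.1,
                           noise=0.2, steps=200):
    x = 0.0
    for _ in range(steps):
        grad = -2.0 * (y - x) + 2.0 * lam * (x - prior_mean)
        x -= lr * grad                             # descend the energy
        x += noise * np.sqrt(lr) * rng.standard_normal()  # explore
    return x

# Sensory measurement y = 2.0, prior expectation centered at 0.0:
# the inferred x settles between the two, jittering around the optimum.
print(noisy_gradient_descent(y=2.0, prior_mean=0.0))
```

The noise term is what distinguishes exploration from plain optimization: rather than locking onto a single interpretation, the state keeps sampling the neighborhood of the minimum, which matters when the energy landscape is nonconvex and has competing interpretations.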

We also use functional magnetic resonance imaging (fMRI) to quantitatively investigate the relationship between brain and behavior. The vast majority of neuroimaging experiments from other labs around the world have focused on which parts of the brain are involved in a particular cognitive or perceptual task. Although this has been an important first step, perception and cognition depend not only on which brain areas are active, but also on how neuronal activity within each of those areas varies over space and time. We are using fMRI to measure the timing and amplitude of brain activity in order to test computational theories of the neural processing underlying cognition and perception. Part of my own excitement about this work is that it brings together my engineering training with my interest in neuroscience, as we routinely develop new image processing and computer vision algorithms for analyzing our functional and structural MRI data. We are using fMRI to study visual awareness, visual pattern detection/discrimination, visual motion perception, stereo depth perception, attention, working memory, the control of eye and hand movements, and neural processing of complex audio-visual and emotional experiences (movies, music, narrative).
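As a hypothetical illustration of what measuring the timing and amplitude of fMRI responses can mean in practice, the sketch below convolves an assumed neural time course with a generic gamma-shaped hemodynamic response function and recovers the response amplitude by linear regression. None of this is our actual analysis pipeline; it is a standard textbook-style analysis with made-up parameters.

```python
import numpy as np

TR = 1.0                        # sampling interval, seconds
t = np.arange(0, 20, TR)        # HRF support
hrf = (t / 5.0) ** 2 * np.exp(-t / 5.0)   # simple gamma-shaped HRF
hrf /= hrf.sum()

# Block design: 10 s on, 10 s off, repeated six times.
neural = np.tile(np.r_[np.ones(10), np.zeros(10)], 6)
predicted = np.convolve(neural, hrf)[: neural.size]

# Simulated measurement: true amplitude 2.5 plus noise.
rng = np.random.default_rng(1)
data = 2.5 * predicted + 0.1 * rng.standard_normal(predicted.size)

# Amplitude estimate via linear regression on the predicted response.
amplitude = (predicted @ data) / (predicted @ predicted)
print(f"estimated amplitude: {amplitude:.2f}")
```

The same logic extends to timing: shifting or reparameterizing the HRF and refitting yields response-latency estimates alongside amplitude, which is the kind of quantitative measurement that makes model testing possible.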