DESCRIPTION (provided by applicant): Intellectual merit: A central question in neuroscience is understanding how cortical networks process complex natural stimuli. Neurophysiological studies and computational models have traditionally focused on simple stimuli, such as gratings and bars. While these studies have provided important insights, it is difficult to extrapolate from them to an understanding of the processing of more natural input. At the same time, a main hurdle to making progress in the field is that natural scenes are complex, and it is not clear what it is about a given scene that evokes a given neural response. To overcome this limitation and to push forward our understanding of cortical processing of natural inputs, we will make use of recent advances in understanding natural scene statistics to closely integrate theory and neurophysiological experiments. We posit that a key factor distinguishing natural images and movies from random scenes is their joint statistical dependencies in space and time. Further, we hypothesize that visual neurons are sensitive to these dependencies. We will build a unified modeling framework of spatiotemporal contextual effects in neurons, determined by the statistical dependencies in scenes. Importantly, the predictions of the model will be used to guide neurophysiology experiments and to interpret the results. Using natural stimuli, we will measure effects of spatial, temporal, and spatiotemporal context in single neurons and in populations of cells, including determining how interactions between neurons contribute to contextual effects. We will record in primary visual cortex (V1) because it provides a solid background on which to base our experiments. We will conduct parallel recordings in extrastriate area V2 because previous work suggests that it may have different sensitivity to contextual information. The experimental results will validate and guide the modeling framework.
Our approach will be a significant advance over previous scene statistics modeling work, which has focused on explaining limited contextual physiology data for simple stimuli such as gratings, and will for the first time make full use of the power of scene statistics to answer a fundamental question. Most importantly, our work will make significant strides in elucidating how cortical circuits process natural scenes, within a theoretical framework that provides both predictive and explanatory power. Collaboration: The project will involve a collaborative effort between two young investigators with expertise in computational visual neuroscience and systems physiology; it combines state-of-the-art algorithms from computational vision and technology for recording populations of neurons in early visual cortex. We will achieve our goal by closely integrating theory and model development with electrophysiological experiments, an approach fostered by the proximity of the two investigators. Broader Impacts: This proposal is expected to have broad impacts in five main areas. First, the work will have broad impact for basic, biomedical, and applied disciplines, including: studying other sensory systems under natural input; building superior visual aids; designing artificial systems; and advancing image and signal processing. Second, the data and stimuli will be made broadly available to the community through the CRCNS data sharing website. Third, the project will be used to train and mentor postdoctoral fellows to become independent research scientists. Fourth, the project will for the first time introduce students at Albert Einstein to the combination of theoretical and experimental approaches for solving fundamental questions in neuroscience. Finally, the project will be used as part of an outreach effort to expose local underrepresented high school students in the Bronx to exciting scientific research.
Effective start/end date: 8/1/10 → 7/31/15
Research topics (generated from the underlying awards/grants):
- Statistics and Probability
- Statistics, Probability and Uncertainty