Researchers at Weill Cornell Medicine, Cornell Tech, and Cornell’s Ithaca campus have made significant advances in understanding how the brain processes visual information. Using AI-selected natural images and AI-generated synthetic images, the team has developed neuroscientific tools that may help uncover the mysteries of visual processing. The study, published in Communications Biology, takes a data-driven approach to understanding how the visual system is organized while mitigating the biases that can arise when only a limited set of researcher-selected images is studied.
The researchers conducted an experiment where volunteers were presented with images that had been selected or generated based on an AI model of the human visual system. Using functional magnetic resonance imaging (fMRI) to record the brain activity of the volunteers, the researchers found that both the AI-selected natural images and AI-generated synthetic images significantly activated the targeted visual processing areas in the brain, outperforming control images.
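In broad strokes, such a comparison amounts to contrasting the fMRI responses evoked in a target region by each image condition. The sketch below is a minimal, hypothetical illustration in Python: the condition names, placeholder response values, and the simple t-test are assumptions for exposition, not the study’s actual analysis pipeline.

```python
# Hypothetical sketch: comparing target-region activation across image conditions.
# `roi_betas` maps condition names to per-image mean response values extracted from a
# target visual region; all names, numbers, and the test choice are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
roi_betas = {
    "ai_selected_natural": rng.normal(1.2, 0.4, size=40),     # placeholder data
    "ai_generated_synthetic": rng.normal(1.4, 0.4, size=40),  # placeholder data
    "control": rng.normal(0.8, 0.4, size=40),                 # placeholder data
}

for condition in ("ai_selected_natural", "ai_generated_synthetic"):
    t, p = stats.ttest_ind(roi_betas[condition], roi_betas["control"])
    diff = roi_betas[condition].mean() - roi_betas["control"].mean()
    print(f"{condition} vs control: mean difference = {diff:.2f}, t = {t:.2f}, p = {p:.3g}")
```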
Moreover, the researchers found that they could improve the model’s performance by tailoring it to individual volunteers. When the model was fine-tuned for a given person, images generated specifically for that individual were more effective at maximizing activation in the target areas than images generated with the general model.
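One way to picture this personalization step is as transfer learning: a group-level encoding model is copied and part of it is retrained on a single volunteer’s image-response pairs. The PyTorch sketch below is a simplified, hypothetical illustration of that idea; the architecture, layer sizes, and training data are stand-ins, not the authors’ model.

```python
# Hypothetical sketch of per-subject fine-tuning of an image-to-activation encoding model.
# The Encoder class, its layer sizes, and the subject data below are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(           # stand-in image feature extractor
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.readout = nn.Linear(16, 1)          # predicts mean activation of one target region

    def forward(self, x):
        return self.readout(self.features(x))

group_model = Encoder()                           # imagine this was trained on group-level data
subject_model = Encoder()
subject_model.load_state_dict(group_model.state_dict())

# Freeze the shared features; fine-tune only the subject-specific readout.
for p in subject_model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(subject_model.readout.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder subject data: 32 images and their measured regional responses.
images = torch.randn(32, 3, 64, 64)
responses = torch.randn(32, 1)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(subject_model(images), responses)
    loss.backward()
    optimizer.step()
```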
Dr. Amy Kuceyeski, the senior author of the study and a professor of mathematics in radiology and neuroscience at Weill Cornell Medicine, expressed optimism about this new approach to studying the neuroscience of vision. She believes that the systematic and unbiased mapping and modeling of the visual system, even using images that individuals may not typically encounter, holds great promise for future research.
The collaboration between Weill Cornell Medicine, Cornell Engineering, and Cornell Tech proved fruitful in this study. Dr. Mert Sabuncu, a professor of electrical and computer engineering at Cornell Engineering and Cornell Tech, and his laboratory contributed significantly to the research. Additionally, Dr. Zijin Gu, a former doctoral student co-mentored by Dr. Sabuncu and Dr. Kuceyeski, served as the study’s first author.
Mapping and modeling the human visual system accurately is a challenging task in neuroscience. Non-invasive methods such as fMRI are commonly used because of the complexity and risk of direct recordings from implanted electrodes. In this study, the researchers leveraged an existing dataset of natural images and corresponding fMRI responses to train an artificial neural network (ANN) that models the human brain’s visual processing system. Coupled with an AI-based image generator, this ANN was used to select and synthesize images predicted to maximally activate targeted visual areas of the brain.
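Conceptually, the generation step can be framed as activation maximization: a latent code fed to an image generator is adjusted by gradient ascent so that the encoding model’s predicted response in a target region becomes as large as possible. The sketch below illustrates that loop with tiny stand-in networks; it is an assumption-laden toy, not the study’s implementation.

```python
# Hypothetical sketch of activation maximization through a generator and an encoding model.
# Both networks here are tiny placeholders; the study's actual generator and brain model are far larger.
import torch
import torch.nn as nn

generator = nn.Sequential(                        # latent vector -> 3x64x64 image
    nn.Linear(128, 3 * 64 * 64), nn.Tanh(),
    nn.Unflatten(1, (3, 64, 64)),
)
encoder = nn.Sequential(                          # image -> predicted target-region activation
    nn.Flatten(), nn.Linear(3 * 64 * 64, 1),
)

# Freeze both networks; only the latent code is optimized.
for p in list(generator.parameters()) + list(encoder.parameters()):
    p.requires_grad = False

z = torch.randn(1, 128, requires_grad=True)       # latent code to optimize
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image = generator(z)
    predicted_activation = encoder(image).mean()
    (-predicted_activation).backward()            # gradient ascent on predicted activation
    optimizer.step()

synthetic_image = generator(z).detach()           # candidate maximally activating image
```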
The insights gained from this study open up new possibilities for studying individual differences in visual system organization and potentially developing personalized visual-system models. The researchers will conduct further experiments using an advanced image generator called Stable Diffusion.
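For readers curious how such a generator is typically driven in practice, the snippet below shows one common way to invoke Stable Diffusion through the open-source diffusers library; the model ID and prompt are illustrative assumptions and are not drawn from the study’s pipeline.

```python
# Hypothetical example of generating an image with Stable Diffusion via the diffusers library.
# Requires a GPU; the model ID and prompt are placeholders, unrelated to the study.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a natural scene with strong visual texture").images[0]
image.save("generated_stimulus.png")
```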
The researchers also anticipate that the approach could extend beyond vision to other senses, such as hearing.
Dr. Kuceyeski envisions a future where this approach may even have therapeutic applications. By strategically designing stimuli, researchers could potentially modify the connectivity between different brain areas to address various neurological conditions.
With its potential to uncover the intricacies of visual processing and to enable individualized brain modeling, this research marks a significant step forward for neuroscience. As the team at Weill Cornell Medicine and Cornell Tech continues to refine its methods and explore additional sensory domains, our understanding of the human brain may reach unprecedented depths.