Research in many fields, including neuroscience, has benefited in recent years from the acceleration and innovation brought about by machine learning. By identifying patterns in experimental data, these models can, for instance, predict the neural processes associated with particular experiences or with the processing of sensory stimuli.
Recently, researchers at CNRS, Université d’Aix-Marseille, and Maastricht University set out to use computational models to predict how the human brain translates acoustic cues into an understanding of the world.
They found that some models based on deep neural networks (DNNs) predicted neural processes better than others when tested against neuroimaging and experimental data.
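To make the idea of "predicting neural processes" concrete, the sketch below illustrates the general encoding-model approach such comparisons typically rely on: regularized regression from a model's sound features to measured brain responses, with feature spaces ranked by how well they predict held-out data. All array names, shapes, and feature choices here are hypothetical placeholders for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch of an encoding-model comparison (hypothetical data).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sounds, n_voxels = 200, 50          # hypothetical stimulus / voxel counts

# Placeholder feature matrices (one row per sound stimulus).
acoustic_feats = rng.standard_normal((n_sounds, 40))    # e.g. spectrotemporal features
semantic_feats = rng.standard_normal((n_sounds, 30))    # e.g. category embeddings
dnn_feats      = rng.standard_normal((n_sounds, 128))   # e.g. a DNN's intermediate layer

# Placeholder neuroimaging responses (one row per sound, one column per voxel).
brain_responses = rng.standard_normal((n_sounds, n_voxels))

def encoding_score(features, responses):
    """Mean cross-validated R^2 of ridge regression from features to each voxel."""
    scores = []
    for v in range(responses.shape[1]):
        model = RidgeCV(alphas=np.logspace(-2, 4, 13))
        r2 = cross_val_score(model, features, responses[:, v], cv=5, scoring="r2")
        scores.append(r2.mean())
    return float(np.mean(scores))

for name, feats in [("acoustic", acoustic_feats),
                    ("semantic", semantic_feats),
                    ("DNN", dnn_feats)]:
    print(f"{name:>8} features: mean cross-validated R^2 = "
          f"{encoding_score(feats, brain_responses):.3f}")
```

A feature space that yields higher held-out prediction accuracy is taken to better capture the representations underlying the measured responses; in this framing, the study's key comparison is between acoustic, semantic, and DNN-derived feature spaces.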
The findings were published in Nature Neuroscience.
“Our findings indicate that superior temporal gyrus (STG) entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for,” the study reads. “These representations are compositional in nature and relevant to behavior.”