Highlight

Encoding of Ultrasonic Vocalizations in the Auditory Cortex

Achievement/Results

As President Obama laid out in his recent address, developing new technologies for exploring the function of circuits of neurons is the next frontier of American research. Our research is focused on the discovery of new techniques and frameworks for understanding neural circuitry, and on the application of previously successful techniques to new systems. We have taken several techniques with a proven ability to explain phenomena specific to the visual system and applied them to the auditory system. Doing so has allowed us to directly compare cortical mechanisms between sensory systems in ways that were not previously possible. We have also developed a number of novel techniques specifically for analyzing the stimulus features that drive neural responses in the auditory cortex, giving us new insight into the workings of the auditory system. By identifying and characterizing the mechanisms that underlie our ability to perceive salient auditory stimuli, we are making headway in unexplored areas of scientific knowledge. The computations responsible for the recognition of auditory objects have proven to be less intuitive than their counterparts in the visual system, despite acting on an ostensibly lower-dimensional stimulus. The specific signal transformations that give rise to distinct objects in the auditory stream are still largely unknown, and the progress we make toward describing these transformations represents a significant step forward in our knowledge of the auditory system and of cortical processing in general.

The discovery of new neural mechanisms responsible for the identification of auditory objects will have a deep and lasting impact on the treatment of hearing disorders. Many of the challenges in improving hearing aids and cochlear implants stem from our limited understanding of how auditory objects are formed. Cutting-edge hearing aids rely on adaptive filters to accentuate some auditory features while suppressing others, yet these devices still do not allow users to reliably distinguish auditory objects from background noise. A greater understanding of which features contribute most to object recognition in cortical circuits would provide a clear direction for increasing the efficacy of these devices.

Another important goal of our work is to attract and train a diverse group of future researchers, contributing both to the deepening pool of future scientists in our nation and to the broadening appeal of scientific disciplines. We have successfully attracted, employed, and trained researchers and technicians from a wide range of backgrounds representative of the nation as a whole. In particular, our efforts to involve undergraduates in our research have allowed us to guide several students – including 4 women and 2 members of minority groups underrepresented in computational and systems neuroscience – through the completion of significant projects, laying the foundations of promising scientific careers.

Address Goals

One of the central tasks of the mammalian auditory system is to represent information about acoustic communicative signals, such as vocalizations. However, the neuronal computations underlying vocalization encoding in the central auditory system are poorly understood. While the responses of neurons in lower stations of the auditory pathway are reasonably well described by the spectral and temporal features that drive them, these simple features do a poor job of explaining the activity of neurons in the cortex. The auditory cortex is where we first see neurons that respond selectively to distinct auditory objects, such as vocalizations, regardless of small spectral and temporal perturbations of those objects.

Rats communicate via ultrasonic vocalizations (USVs). Despite their behavioral prevalence, little is known about how these vocalizations are encoded in the auditory cortex. To learn how the rat auditory cortex encodes information about conspecific vocalizations, we presented a library of natural and temporally transformed USVs to awake rats while recording neural activity in the primary auditory cortex (A1) using chronically implanted multi-electrode probes. The USVs that we presented were recorded during a friendly interaction between two male rats and were typical of vocalizations that previous studies have associated with the communication of positive affect. We found that the auditory cortex exhibited specialized circuits that encoded the USVs: many neurons responded reliably and selectively to USVs. The response strength to USVs correlated strongly with both the response strength to frequency-modulated (FM) sweeps and the FM rate tuning index, suggesting that related mechanisms generate the responses to USVs and to FM sweeps. The response strength further correlated with each neuron's best frequency, with the strongest responses produced by neurons whose best frequency lay in the ultrasonic range. We identified the specific computation that the auditory system performs in order to represent USVs with high accuracy.
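To make the across-neuron correlation analysis described above concrete, the following Python sketch correlates hypothetical per-neuron USV response strengths with FM-sweep response strengths and best frequencies. The variable names, array shapes, and synthetic values are illustrative assumptions only; this is not the study's data or analysis code.

```python
# Illustrative sketch (not the authors' analysis): correlate per-neuron USV
# response strength with FM-sweep response strength and best frequency.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_neurons = 50

# Hypothetical per-neuron summary statistics; in practice these would be
# computed from the recorded spike trains (e.g., trial-averaged firing rate
# above baseline during each stimulus class).
usv_response = rng.gamma(2.0, 2.0, n_neurons)              # response strength to USVs
fm_response = usv_response + rng.normal(0, 1, n_neurons)   # response strength to FM sweeps
best_freq_khz = rng.uniform(1, 64, n_neurons)              # best frequency of each neuron

# Correlation between USV and FM-sweep response strengths across neurons.
r_fm, p_fm = pearsonr(usv_response, fm_response)
print(f"USV vs FM-sweep response strength: r = {r_fm:.2f}, p = {p_fm:.3g}")

# Correlation between USV response strength and best frequency, e.g., to ask
# whether neurons tuned to the ultrasonic range respond more strongly.
r_bf, p_bf = pearsonr(usv_response, best_freq_khz)
print(f"USV response strength vs best frequency: r = {r_bf:.2f}, p = {p_bf:.3g}")
```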

For the responses of each neuron to each stimulus group, we fitted a novel predictive model: a reduced generalized linear-nonlinear model (GLNM) that takes the frequency modulation and single-tone amplitude as its only two input parameters. The GLNM accurately predicted neuronal responses to previously unheard USVs, and its prediction accuracy was higher than that of an analogous spectrogram-based linear-nonlinear model. This computation was specific to the statistical structure of the original USVs: both the response strength of neurons and the prediction accuracy of the model were higher for the original than for temporally transformed vocalizations. These results indicate that A1 processes original USVs differently than transformed USVs, reflecting a preference for the temporal statistics of the original vocalizations. Our study identified the neuronal computation that underlies specialization for behaviorally important sounds. Similar processing is likely to play an important role in speech encoding.
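The following is a minimal sketch, under assumptions about data shapes and binning, of a two-feature linear-nonlinear Poisson model in the spirit of the reduced GLNM described above: binned spike counts are predicted from time-lagged frequency-modulation and amplitude traces of the stimulus through learned temporal filters and an exponential nonlinearity. It is not the authors' implementation, and the synthetic inputs stand in for measured quantities.

```python
# Minimal sketch of a two-feature linear-nonlinear Poisson model (assumed form,
# not the authors' code): rate(t) = exp(filters applied to FM and amplitude traces).
import numpy as np
from sklearn.linear_model import PoissonRegressor

def lagged_design(trace, n_lags):
    """Build a (time x n_lags) design matrix of past stimulus values."""
    T = len(trace)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = trace[:T - lag]
    return X

# --- Hypothetical inputs (replace with measured quantities) ---
rng = np.random.default_rng(1)
T, n_lags = 5000, 20                      # number of time bins, filter length in bins
fm_trace = rng.normal(size=T)             # instantaneous frequency modulation of the USV
amp_trace = rng.gamma(2.0, 1.0, size=T)   # single-tone amplitude envelope
spike_counts = rng.poisson(0.5, size=T)   # binned spike counts of one neuron

# Linear stage: temporal filters over the two stimulus features.
X = np.hstack([lagged_design(fm_trace, n_lags), lagged_design(amp_trace, n_lags)])

# Nonlinear stage plus Poisson spiking: PoissonRegressor uses a log link,
# i.e. rate = exp(X @ w + b), fit by penalized maximum likelihood.
glnm = PoissonRegressor(alpha=1e-3, max_iter=1000).fit(X, spike_counts)

# Predicted firing rate; for held-out USVs the same call would be used.
predicted_rate = glnm.predict(X)
print("mean predicted rate per bin:", predicted_rate.mean())
```

In a model comparison of this kind, prediction accuracy on held-out vocalizations could be quantified, for example, as the correlation between predicted and observed peri-stimulus time histograms, computed for both the reduced two-feature model and a spectrogram-based linear-nonlinear model.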