A Prediction-Based Model of Human Hearing

Zachary Brown ’24, Biological Sciences, Winter 2021

Hearing as a subjective, personal experience (Source: Pixahive)

“Is reality objective or subjective?”

The question sounds like the opening of a classic philosophical dilemma, but it has also become the subject of modern scientific inquiry. Suffice it to say, recent evidence gathered by researchers at the Max Planck Institute for Human Cognitive and Brain Sciences in Germany might make you feel like you’re living in a fiction.

To understand their findings, it helps to consider how human sensory systems – as distinct from those of other animals – actually function. It is well known that our nervous systems take in information and process it in the brain, a feat refined over millions of years of evolution that allows us to experience everything from brilliant colors to the softness of a cat’s fur. To survive the chaotic minutiae of sensations that make up our daily experience, we need a means of controlling what sensory information we attend to at any given time – to immediately recognize a lion’s roar or to ignore a cricket’s constant chirping. The human auditory system accomplishes this through a complex neurological process described below.

The brain divides up the work, processing sensory information in stages arranged as a hierarchy. In this system, the lower “subcortical” structures encode information and move it from its source (in the ear, the cochlea) up the chain. The higher “cortical” structures have been shown to process and filter out irrelevant information while maintaining selective attention and remaining alert to new stimuli (Kok and de Lange, 2015). These neurons undergo what is known as stimulus-specific adaptation, or SSA, in order to filter out repetitive information (Ayala et al., 2013). Whether these adaptations occur only in the higher cortical structures or are also employed in the subcortical structures is still unclear, but there has been ample speculation about how we tune out irrelevant sounds (Tabas et al., 2020).

Currently, two scientific hypotheses are available to explain the neurology behind these stimulus adaptations: the first is known as habituation, and the second is called prediction error. The habituation process involves the progressive attenuation (reduction) of neural signaling as neurons fire, resulting in what could be considered an environmentally dependent “tuning out” of recurring signals. This primes the sensory system, making it more sensitive to changes in environmental stimuli. The habituation hypothesis assumes the most basic processing possible at the subcortical level, where stimulus-specific adaptation occurs in a passive-learning style and strong neural firing occurs every time an unexpected sound is heard (Tabas et al., 2020). On the other hand, the prediction error hypothesis suggests a more active process, where the brain forms expectations to predict what stimulus will come from the environment; whenever there is a mismatch between the prediction and the actual environmental stimulus, strong neural firing results (Tabas et al., 2020).

Oftentimes these two concepts get confused. To illustrate the differences, imagine you’ve been asked to listen to a series of 8 tones. You are told that one tone in the set will be deviant, and that this deviant tone will occur only in the 4th, 5th, or 6th position, so you will always hear at least 3 standard tones at the start and 2 at the end. If your audio-processing neurons used habituation, they would become accustomed to the 3–5 standard tones very quickly, firing with diminishing strength until you heard the deviant tone, at which point they would fire strongly to alert the brain of the change. You could expect this neural response to be the same regardless of which position the deviant tone occupied (the 4th, 5th, or 6th), because as long as it does not match the preceding tones, the neurons would have had no way of expecting the change.
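To make this concrete, here is a minimal sketch of the habituation account, using entirely hypothetical firing strengths and decay rate (an illustration, not the study’s actual code or data): the response to the repeated standard tone attenuates with each presentation, while the deviant evokes the same full-strength response no matter where it lands.

```python
# Hypothetical sketch of the habituation hypothesis: firing to the repeated
# standard tone attenuates with each presentation, while the deviant tone
# always evokes the same full-strength response, wherever it occurs.

def habituation_response(deviant_position, n_tones=8, start=1.0, decay=0.6):
    """Return made-up firing strengths for one 8-tone set."""
    responses = []
    standard_strength = start
    for position in range(1, n_tones + 1):
        if position == deviant_position:
            responses.append(start)          # unadapted tone: full firing
        else:
            responses.append(standard_strength)
            standard_strength *= decay       # each repetition attenuates firing
    return responses

# The deviant's response is identical whether it lands 4th, 5th, or 6th:
for pos in (4, 5, 6):
    print(pos, habituation_response(pos)[pos - 1])   # prints 1.0 every time
```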

Alternatively, suppose your audio-processing neurons used predictive coding. You could expect the same attenuation over the 3–5 standard tones, but the firing strength evoked by the deviant tone would depend entirely on which position it occupied: the 4th, 5th, or 6th. This is because your brain knows the likelihood of the deviant tone’s presence, a probability that depends on how many tones you’ve already heard. With 3 possible placements, the probability of hearing the deviant in the 4th position is 1/3. If it is not heard there, the odds go up, because the deviant can now only be in the 5th or 6th position, giving a probability of 1/2 for the 5th. Finally, if the deviant has not been heard in the 4th or 5th position, then by definition it must occupy the 6th. With this in mind, predictive coding suggests neural firing will be strongest when the deviant is heard in the 4th position, since it is the most unexpected; somewhat weaker in the 5th position, matching the higher likelihood; and weakest in the 6th position, since by then the sensory system is 100% certain its prediction will be accurate.
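The arithmetic above can be written out directly. This companion sketch (again hypothetical, not the paper’s analysis code) computes the conditional probability of the deviant at each candidate position and uses a simple stand-in for prediction error: the less likely the deviant, the bigger the surprise, and the stronger the predicted firing.

```python
# Hypothetical sketch of the prediction-error hypothesis: the conditional
# probability of the deviant rises as candidate positions pass without it,
# so the surprise (and predicted firing strength) at the deviant falls.

def deviant_probabilities(candidate_positions=(4, 5, 6)):
    """Probability that the deviant occurs at each candidate position,
    given that it has not occurred at any earlier one."""
    probs = {}
    remaining = len(candidate_positions)
    for position in candidate_positions:
        probs[position] = 1 / remaining   # 1/3, then 1/2, then 1/1
        remaining -= 1
    return probs

for position, p in deviant_probabilities().items():
    surprise = 1 - p                      # one simple measure of prediction error
    print(f"position {position}: P(deviant) = {p:.2f}, surprise = {surprise:.2f}")

# position 4: P(deviant) = 0.33, surprise = 0.67  -> strongest firing
# position 5: P(deviant) = 0.50, surprise = 0.50
# position 6: P(deviant) = 1.00, surprise = 0.00  -> weakest firing
```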

This simple yet elegant model of testing how expectations influence perception was used by researchers from the Max Planck Institute to assess which processing model best described subcortical neural activity (Tabas et al., 2020). The researchers used fMRI to scan the brains of 19 subjects as they listened to the 8-tone sets. Specifically, they watched for activity related to sensory processing in two subcortical stations of the auditory pathway: the auditory midbrain (inferior colliculus, or IC) and the auditory thalamus (medial geniculate body, or MGB). What they found bore a remarkable similarity to the firing patterns predicted by the predictive coding model. The neural activity followed the characteristic pattern: activity attenuated as the standard tones were heard until the deviant tone was detected, and the placement of the deviant in the 4th, 5th, or 6th position was directly related to how strongly the neurons fired in response (Tabas et al., 2020). It is important to note that the differences in neural activity based on the placement of the deviant tone were statistically significant in only 14 of the 19 subjects (Tabas et al., 2020), so further research will be needed to confirm these conclusions.

However, if what these researchers found proves to be the general rule, we may need to rethink our idea of hearing – and perhaps of our other senses too. These subcortical systems are about as close to the information source as the brain gets, and yet the subjects’ expectations seemed to directly influence how these lower parts of the brain, often considered mere “information transport” stations, actually functioned. “It is tempting,” as the original article notes (Tabas et al., 2020), to think the predictions guiding these subcortical structures came from the cerebral cortex, but even if this weren’t true, these findings suggest your subjective expectations actually shape your reality – at least when it comes to your hearing. So, the next time you’re listening to a presidential speech or a new song, you might pause to consider how your expectations about what you’re about to hear might influence what you actually hear.

 

References

Ayala, Y. A., Pérez-González, D., Duque, D., Nelken, I., & Malmierca, M. S. (2013). Frequency discrimination and stimulus deviance in the inferior colliculus and cochlear nucleus. Frontiers in Neural Circuits, 6. https://doi.org/10.3389/fncir.2012.00119

Kok, P., & de Lange, F. P. (2015). Predictive coding in sensory cortex. In An introduction to model-based cognitive neuroscience (pp. 221–244). Springer Science + Business Media.

Tabas, A., Mihai, G., Kiebel, S., Trampel, R., & von Kriegstein, K. (2020). Abstract rules drive adaptation in the subcortical sensory pathway. eLife, 9, e64501. https://doi.org/10.7554/eLife.64501

 
