Emotional expressivity in speech is one of the most important characteristics of human communication. Although emotions lack a commonly agreed theoretical definition and their perception is subjective, their acoustic footprint in speech is recognized by everyone. The high-level objective of this project is to establish the impact of different hearing aid signal processing algorithms on the emotional cues present in the target speech signal.
By combining Idiap’s technologies for speech emotion recognition with Sonova’s expertise in hearing aid algorithmic processing, we will identify the processing approaches most detrimental to the acoustic correlates of emotional states and develop alternative approaches that preserve these correlates.