Dorothea Wendt

Scientist, PostDoc

Focusing on a specific speaker when multiple sound sources are present is not easy – not even with the clever hearing aids we have today. We want to make it possible for the hearing aid to identify the attended speaker by reading brain signals.

Present-day hearing aids are characterised by impressive algorithms that can improve the listener's sound perception. These include, among others, beamformers to pick up signals from a specific direction, streaming of audio signals from remote microphones, and advanced auditory scene analysis to segregate multiple speakers.

However, there is one important limitation to the benefit these advanced algorithms can provide: the hearing aids do not know how to steer them in situations with multiple sound sources.

Identifying the attended speaker using EEG

Such steering signals cannot be obtained from the environment but must be extracted from the person with hearing impairment themselves, so that they reflect auditory attention and intention in various situations. One possible way to identify selective auditory attention is by recording electroencephalography (EEG), which reflects the neural responses in the brain with high temporal resolution.

By training an individual transfer function (a decoding algorithm), it is possible to identify the attended speaker by correlating the decoded EEG signals with the competing audio streams. This approach has been demonstrated in several research laboratories all over the world. It has high scientific value, but so far little value to the hearing aid industry, as the EEG signals are recorded from a grid of electrodes covering the entire scalp.
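The decoding idea can be illustrated with a minimal stimulus-reconstruction sketch: a linear decoder is fitted to map multichannel EEG to the speech envelope, and the reconstructed envelope is then correlated with each candidate audio stream. This is a generic, illustrative version of the approach (ridge regression on synthetic data), not the specific decoder used in our research.

```python
import numpy as np

def train_decoder(eeg, envelope, lam=1e-3):
    """Fit a linear decoder mapping multichannel EEG (samples x channels)
    to the attended speech envelope via ridge regression."""
    X, y = eeg, envelope
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def attended_speaker(eeg, env_a, env_b, w):
    """Reconstruct the envelope from EEG and pick the speaker whose
    actual envelope correlates best with the reconstruction."""
    recon = eeg @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

# Toy demonstration with synthetic data: the "EEG" is a noisy mixture
# that carries the attended envelope (speaker A) across 8 channels.
rng = np.random.default_rng(0)
n, ch = 2000, 8
env_a = np.abs(rng.standard_normal(n))   # attended envelope
env_b = np.abs(rng.standard_normal(n))   # ignored envelope
mixing = rng.standard_normal(ch)
eeg = np.outer(env_a, mixing) + 0.5 * rng.standard_normal((n, ch))

w = train_decoder(eeg, env_a)
print(attended_speaker(eeg, env_a, env_b, w))  # → A
```

In practice the decoder also spans a range of time lags between EEG and audio, and correlations are computed over short sliding windows so the steering can follow attention switches.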

Ear-EEG may be part of future hearing aids

To overcome this issue, we have shown that the brain signals can be recorded with electrodes positioned in the ear canal, an approach called Ear-EEG. Such electrodes may be embedded in the ear moulds of the hearing aids and hence provide a non-invasive, feasible solution for everyday use.
We have worked intensively in this research area for several years and have been an active partner in the EU Horizon 2020 project “Cognitive control of a hearing aid” (COCOHA), which was successfully completed at the end of 2018.


Monitoring cognitive load

Additionally, the Ear-EEG sensors may be used to continuously monitor the cognitive load of the hearing aid user and the task they are engaged in. For example, the EEG sensors could detect whether the user is actively listening or performing a visual task such as reading or working at the computer. In the latter case, it may be advantageous for the hearing aid to lower the gain, providing a quieter work environment in a large office while also saving battery.
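As a rough illustration of such task detection, one proxy sometimes used in EEG research is the relative power in the alpha band (8–12 Hz). The sketch below estimates relative alpha power from a short Ear-EEG segment and lowers the gain when it exceeds a threshold; the threshold, gain values, and the alpha-power criterion itself are purely illustrative assumptions, not clinical parameters or our actual classifier.

```python
import numpy as np

def band_power(signal, fs, low=8.0, high=12.0):
    """Relative power in a frequency band (default: alpha, 8-12 Hz),
    computed from the FFT power spectrum of one EEG segment."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum() / spectrum.sum()

def select_gain(ear_eeg, fs, alpha_threshold=0.3,
                listen_gain=1.0, quiet_gain=0.4):
    """Lower the hearing-aid gain when high relative alpha power suggests
    the user is not actively listening (illustrative rule only)."""
    return quiet_gain if band_power(ear_eeg, fs) > alpha_threshold else listen_gain

# Toy example: a strong 10 Hz oscillation plus noise yields high
# relative alpha power, so the sketch selects the reduced gain.
fs = 250
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(1)
alpha_rich = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
print(select_gain(alpha_rich, fs))  # → 0.4
```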

In the listening condition, an estimate of the cognitive load at different signal-to-noise ratios (SNRs) could be used to automatically adjust the advanced algorithms in the hearing aid, improving speech intelligibility and thereby reducing the cognitive load. An example of this approach is presented in the figure to the right.
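Such closed-loop steering can be sketched as a simple feedback rule: while the estimated cognitive load stays above a comfortable target, the hearing aid ramps up its noise-reduction setting, and it relaxes the setting again when the load drops. All names, constants, and the update rule below are hypothetical, intended only to show the control-loop idea.

```python
def update_noise_reduction(nr_level, load_estimate, target_load=0.5,
                           step=0.1, nr_min=0.0, nr_max=1.0):
    """One step of a hypothetical closed loop: increase the noise-reduction
    setting while estimated cognitive load exceeds the target, decrease it
    otherwise. Values are clamped to [nr_min, nr_max]."""
    if load_estimate > target_load:
        return min(nr_max, nr_level + step)
    return max(nr_min, nr_level - step)

# Simulated session: load starts high, so the loop ramps up noise
# reduction, then backs off once the load falls below the target.
nr = 0.2
for load in [0.9, 0.8, 0.7, 0.4]:
    nr = update_noise_reduction(nr, load)
print(round(nr, 2))  # → 0.4
```

The same structure would apply whichever load estimate drives it, whether derived from Ear-EEG, pupillometry, or another physiological measure.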