Spatial Hearing Support for Hearing-Impaired Users: Augmentation and Adaptation
We all hear the world differently, yet spatial hearing is central to everyone's everyday life: it helps us detect danger, orient ourselves, and communicate effectively in noisy environments. While spatial hearing evolved as a survival mechanism, it remains essential for daily tasks such as crossing roads or following conversations in complex acoustic scenes.
Unfortunately, spatial hearing is often degraded by hearing loss and further compromised by hearing assistive technologies, particularly hearing aids (HAs). Although HAs amplify sound, they commonly distort spatial cues through microphone placement and noise-reduction algorithms such as beamforming. This processing often results in a loss of autonomy and situational awareness, two frustrations frequently cited by users and a key reason why many avoid wearing HAs despite a clinical need for them.
This project aims to introduce a new paradigm. Rather than allowing AI and other signal-processing algorithms to decide which sounds to amplify or suppress, we will develop AI tools that restore (and even enhance) lost spatial cues. These cues will be delivered via optimised 'superhuman' head-related transfer functions (HRTFs): artificial spatial filters, derived from virtual 3D ear shapes, that exaggerate or selectively enhance spatial features. This enables users to perform complex auditory tasks, such as spatial release from masking and sound localisation, with greater autonomy.
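To make the idea concrete, the sketch below shows how an HRTF-based renderer might exaggerate one spatial cue, the interaural level difference (ILD). It is a minimal illustration only: the HRIRs, the render_binaural function, and the ild_gain_db parameter are hypothetical placeholders for this example, not the filters or methods the project will actually develop.

```python
# Minimal, illustrative sketch of binaural rendering with an exaggerated
# interaural level difference (ILD). All signals and filters here are
# hypothetical placeholders, not the project's actual HRTF pipeline.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right, ild_gain_db=0.0):
    """Convolve a mono signal with left/right HRIRs, optionally
    exaggerating the level difference between the two ears."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    # Exaggerate the ILD: boost the louder ear, attenuate the quieter one.
    g = 10 ** (ild_gain_db / 20)
    if np.sqrt(np.mean(left**2)) >= np.sqrt(np.mean(right**2)):
        left, right = left * g, right / g
    else:
        left, right = left / g, right * g
    return np.stack([left, right])

# Toy usage: white noise rendered through random, decaying placeholder HRIRs.
rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)                                # 1 s at 48 kHz
hrir_l = rng.standard_normal(256) * np.exp(-np.arange(256) / 32)  # hypothetical
hrir_r = rng.standard_normal(256) * np.exp(-np.arange(256) / 32)  # hypothetical
binaural = render_binaural(mono, hrir_l, hrir_r, ild_gain_db=6.0)
```

In practice, cue enhancement would be frequency-dependent and baked into task-optimised HRTF sets, rather than applied as a single broadband gain as in this toy example.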
This project aims to explore the following:
1. Determining which spatial cues best support behavioural tasks (e.g. localisation and spatial release from masking).
2. Optimising superhuman HRTFs using AI to enhance the most relevant cues.
3. Investigating the degree and speed of user adaptation to these non-individualised, enhanced spatial cues.
4. Evaluating the performance of this approach against current HA algorithms on real-world behavioural tasks.
The project will combine computational auditory modelling, signal processing, perceptual testing, and virtual reality training tools. Outcomes could include optimal superhuman HRTFs for specific behavioural tasks, guidelines for perceptual training programmes, and prototypes for real-world HA applications.
Project Objectives
The successful candidate will work on some of the objectives below.
1. Identify spatial cues most relevant to specific listening tasks.
2. Design and optimise superhuman HRTFs for these tasks using computational models.
3. Evaluate user adaptation to these HRTFs and measure performance gains over state-of-the-art HA processing.
4. Develop and test VR-based perceptual training to improve speech-in-noise and localisation abilities.
5. Assess transfer of skills from VR to real-world listening situations.
There will also be opportunities to explore related directions, such as applying spatial hearing augmentation to other assistive technologies, developing VR-based cognitive training for different user groups, or extending HRTF enhancement methods to AR/VR consumer applications.
Candidate Requirements
1. Keen interest in audio and hearing assistive technologies.
2. Integrated Master’s (MEng) in electrical engineering or related subject at 2.1 or 1st, OR a BEng at 2.1 or 1st plus a Master’s degree at Merit or higher.
3. English language competency at the 'Higher' level.
4. Strong background in at least one of:
○ Psychoacoustics or auditory perception
○ Audio signal processing
○ Computational modelling of hearing
○ Virtual/Augmented reality for audio
○ Hearing science or biomedical acoustics
5. Programming experience (MATLAB, Python, or C++) is desirable.
PhD Funding
The PhD student will receive tuition fees and an annual London stipend at QMUL rates (£21,874 in 2025/26; the 2026/27 rate is to be confirmed) for the duration of the PhD, which can span up to 3 years.
How to Apply
Queen Mary is committed to developing the next generation of outstanding researchers and has chosen to invest in specific research areas. Applicants should work with their prospective supervisor and submit their application following the instructions below.
The application should include the following:
● CV (max 2 pages)
● Cover letter (max 4,500 characters) stating clearly on the first page whether you are eligible for a scholarship as a UK resident (https://epsrc.ukri.org/skills/students/guidance-on-epsrc-studentships/eligibility)
● Research Interest Statement (max 500 words)
● 2 references
● Certificate of English Language (for students whose first language is not English)
● Other Certificates
Please note that to qualify as a home student for the purpose of these scholarships, a student must have no restrictions on how long they can stay in the UK and must have been ordinarily resident in the UK for at least 3 years prior to the start of the studentship. For more information, see the EPSRC eligibility guidance linked above.
Application Deadline
The application deadline is 30 November 2025 for a flexible start date of January or April 2026.
Contacts
For general enquiries, contact Mrs Melissa Yeo at m.yeo@qmul.ac.uk (administrative enquiries) or Dr Arkaitz Zubiaga at a.zubiaga@qmul.ac.uk (academic enquiries) with the subject “EECS 2025 PhD scholarships enquiry”. For enquiries specific to this project, contact Dr Aidan Hogg (a.hogg@qmul.ac.uk).
Google Scholar: https://scholar.google.co.uk/citations?user=XrScWlwAAAAJ&hl
VIABAL Lab Homepage: https://viabal.eecs.qmul.ac.uk/
C4DM Group Homepage: https://www.c4dm.eecs.qmul.ac.uk/
AXD Group Homepage: https://www.axdesign.co.uk/