The three-and-a-half-day event welcomed around 275 researchers, practitioners, and industry experts from across the globe, all united by a shared interest in how artificial intelligence (AI) and machine learning are shaping the future of audio.
The programme was packed with activity: three poster and demo sessions, around 24 oral presentations, and a pre-conference hackathon that set the tone for collaboration and innovation. Attendees also took part in tutorials on real-time neural audio inference and large-scale acoustic datasets, alongside panel sessions exploring issues such as privacy in audio AI and neural audio coding techniques.
Highlights included invited talks from leading figures in the field:
- Ed Newton-Rex on ethics and copyright
- Jonathan Wyner on education
- Hazel Savage on entrepreneurship
- Xavier Serra on ethics and legislation
Together, these sessions offered fresh perspectives that complemented the cutting-edge research being presented.
C4DM members played a central role in organising AIMLA 2025, with Professor Josh Reiss serving as General Chair, supported by George Fazekas, Soumya Vanka, Franco Caspe, Farida Yusuf, Emmanouil Benetos, Nelly Garcia, Ilias Ibnyahya, Chin-Yun Yu, and Marikaiti Primenta. Their efforts ensured a dynamic, inclusive, and forward-looking programme.
C4DM researchers were also well represented in the conference proceedings, presenting innovative work such as:
- NablAFx: A Framework for Differentiable Black-box and Gray-box Modeling of Audio Effects (Marco Comunità, Christian Steinmetz, Joshua Reiss)
- Transfer Learning for Neural Modelling of Nonlinear Distortion Effects (Tara Vanhatalo, Pierrick Legrand, Myriam Desainte-Catherine, Pierre Hanna, Guillaume Pille, Antoine Brusco, Joshua Reiss)
- Sound Matching an Analogue Levelling Amplifier Using the Newton-Raphson Method (Chin-Yun Yu, George Fazekas)
- Procedural Music Generation Systems in Games (Shangxuan Luo, Joshua Reiss)
- Neutone SDK: An Open Source Framework for Neural Audio Processing (Christopher Mitcheltree, Bogdan Teleaga, Andrew Fyfe, Naotake Masuda, Matthias Schäfer, Alfie Bradic, Nao Tokui)
Alongside these papers, C4DM members also contributed late-breaking posters and tutorials on topics ranging from expressive piano performance to intelligent music education and spatial audio data.
The conference wasn’t only about research; it was also about building community. Social highlights included a Sunday evening talk and gathering at BLOC Studios, a Monday drinks reception at Drapers, Tuesday’s conference dinner at The Crown, and a lively closing jam session at Queen Mary’s Performance Lab on Wednesday. These events gave attendees space to connect, collaborate, and celebrate the shared progress of the audio AI community.
Reflecting on the event, Professor Josh Reiss said:
“AIMLA 2025 marks a milestone for the Audio Engineering Society and the wider research community. By bringing together diverse voices from academia and industry, we’ve taken an important step towards shaping the future of AI and machine learning for audio.”
All accepted papers from AIMLA 2025 will be published open access in the AES Library at no additional cost to authors, ensuring global accessibility to the research.
With its mix of technical innovation, lively discussion, and a strong sense of community, AIMLA 2025 set a high standard for future conferences in this exciting and fast-evolving field.
Find out more about our MSc in Sound and Music Computing.