School of Electronic Engineering and Computer Science

Neural Dynamics of Perceptually Aligned Artificial Intelligence

Supervisor: Dr Iran R. Roman

Project Description

The human brain effortlessly makes sense of the world’s complexity: tracking objects through cluttered scenes, parsing melodies from noise, and anticipating another person’s movements. Yet today’s most sophisticated AI systems, despite impressive benchmark results, still struggle with perceptual tasks that humans find trivial. This PhD project addresses these challenges by bridging computational neuroscience, machine learning, and multimodal perception to build AI that perceives the world as humans do.

The Challenge

Current AI systems often succeed through statistical shortcuts rather than genuine perceptual understanding. Models may label objects or identify musical genres while fundamentally misunderstanding relational structure: the invariances, symmetries, and temporal dependencies that define perceptual experience. This gap reveals a deep problem: AI lacks the dynamical principles that organize biological perception.

The Approach

This project draws on Neural Resonance Theory to understand and replicate perceptual alignment. Biological neural networks don't merely process information: they resonate, synchronize, and form stable attractors that embody perceptual structure. Oscillatory dynamics enable brains to lock onto rhythmic patterns, nonlinear resonance extracts harmonic relationships, and attunement mechanisms learn statistical regularities across timescales. These aren't metaphors but mathematical principles governing how perception emerges from neural population dynamics.
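As an illustration of these dynamical principles, the sketch below simulates a single canonical oscillator (the Hopf normal form that underlies models in Neural Resonance Theory) entraining to a rhythmic input. The parameter values, the forcing signal, and the simple Euler integrator are assumptions chosen purely for demonstration, not part of the project specification.

```python
import numpy as np

def hopf_oscillator(stimulus, fs, f0=2.0, alpha=1.0, beta=-1.0):
    """Integrate dz/dt = z*(alpha + i*2*pi*f0 + beta*|z|^2) + x(t) with Euler steps."""
    dt = 1.0 / fs
    z = np.zeros(len(stimulus), dtype=complex)
    z[0] = 0.1 + 0.0j                        # small nonzero initial state
    for n in range(len(stimulus) - 1):
        dz = z[n] * (alpha + 1j * 2 * np.pi * f0 + beta * abs(z[n]) ** 2) + stimulus[n]
        z[n + 1] = z[n] + dt * dz
    return z

fs = 200.0                                   # integration rate (samples per second)
t = np.arange(0, 20, 1 / fs)                 # 20 seconds of simulation
x = np.cos(2 * np.pi * 2.1 * t)              # rhythmic input slightly above the 2 Hz natural frequency
z = hopf_oscillator(x, fs)
# After a transient, the oscillator entrains to the 2.1 Hz rhythm rather than
# oscillating at its own 2.0 Hz natural frequency: a toy example of entrainment.
```

With the nonlinear damping term active, the oscillator settles onto a stable limit cycle, and a sufficiently strong rhythmic input near its natural frequency pulls its phase into lock, which is the basic mechanism behind "locking onto rhythmic patterns" described above.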

We propose that AI systems must incorporate these principles to achieve genuine perceptual grounding. This means developing architectures where representations naturally capture invariances, temporal predictions arise from dynamical anticipation rather than statistical prediction, and learning reflects attunement to environmental structure rather than mere pattern memorization.

Research Directions

The successful candidate will pursue research spanning theory, algorithms, and applications. Potential directions include:

Spatial Perception and Scene Understanding: Advancing AI for acoustic and visual scene analysis: localizing sound sources, separating onscreen from offscreen events, integrating spatial audio with limited visual fields. These tasks require genuine spatial reasoning, not mere correlation detection.

Multimodal Self-Supervised Learning: Designing learning algorithms that discover shared structure across audio, visual, and physiological modalities without labeled supervision. Can systems learn to "listen" and "see" by exploiting natural correspondences between sensory streams? (An illustrative sketch of this idea follows the list of directions below.)

Neurodynamical Foundations: Developing computational models that formalize how oscillation, resonance, and attunement principles can be embedded in neural networks. How do stability and attraction relationships in dynamical systems correspond to perceptual phenomena like expectancy, surprise, and recall?

Probing AI Perception: Developing rigorous evaluation frameworks to test whether models capture relational and structural understanding rather than surface correlations. This includes assessing whether AI systems recognize invariant relationships (temporal, causal, spatial, and cross-modal) across diverse sensory inputs. Novel benchmarks are essential to measure perceptual alignment and expose gaps between machine and human perception.

Embodied and Interactive AI: Building perceptually grounded systems for augmented reality and human-AI interaction, where agents must perceive user actions, anticipate intentions, and provide adaptive feedback in real-time multimodal contexts.
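To make the multimodal self-supervised direction above more concrete, here is a minimal sketch of one standard way to exploit natural audio-visual correspondence: an InfoNCE-style contrastive loss that pulls together embeddings of temporally aligned audio and video clips and pushes apart mismatched pairs. The encoder dimensions, batch size, and temperature are illustrative assumptions, and this is one possible formulation rather than the project's prescribed method.

```python
import torch
import torch.nn.functional as F

def correspondence_loss(audio_emb, video_emb, temperature=0.07):
    """InfoNCE over a batch of temporally aligned audio/video embedding pairs."""
    a = F.normalize(audio_emb, dim=-1)           # (B, D) unit-norm audio embeddings
    v = F.normalize(video_emb, dim=-1)           # (B, D) unit-norm video embeddings
    logits = a @ v.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(a.size(0))            # the i-th audio clip matches the i-th video clip
    # symmetric loss: audio-to-video and video-to-audio retrieval
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for encoder outputs
audio_emb = torch.randn(16, 128, requires_grad=True)
video_emb = torch.randn(16, 128, requires_grad=True)
loss = correspondence_loss(audio_emb, video_emb)
loss.backward()   # in practice, gradients would flow into the audio and video encoders
```

The supervision here comes entirely from temporal co-occurrence in the data, which is the sense in which a system can learn to "listen" and "see" without labels.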

Who Should Apply

We seek candidates from diverse backgrounds (computer science, neuroscience, physics, engineering, cognitive science, mathematics) united by curiosity about intelligence in natural and artificial systems.

This project offers the opportunity to work at the frontier of perceptually intelligent AI, publish across premier venues, collaborate internationally, and help define what it means for machines to truly perceive. The successful candidate will also collaborate closely with the project’s undisclosed industry partner: a major global technology company at the forefront of AI research.

The PhD student will receive tuition fees at the home rate and an annual stipend at QMUL's London rate (£21,874 per year in 2025/26, to be confirmed for 2026/27) for the duration of the PhD, which can span 3 years.

For more information about the project, please contact Iran Roman (i.roman@qmul.ac.uk).

Supervisor: Dr Iran Roman (he/him) – i.roman@qmul.ac.uk
Centre for Fundamental Computer Science: www.seresearch.qmul.ac.uk/cmai/people/iroman/
Personal Homepage: iranroman.github.io
Google Scholar: https://scholar.google.com/citations?user=W_PoFfkAAAAJ&hl

How to apply

Queen Mary is committed to developing the next generation of outstanding researchers and has decided to invest in specific research areas. For further information about potential PhD projects and supervisors, please see the list of projects at the end of this page.

Applicants should work with their prospective supervisor and submit their application following the instructions at: http://eecs.qmul.ac.uk/phd/how-to-apply/

The application should include the following:

● CV (max 2 pages)

● Cover letter (max 4,500 characters) stating clearly on the first page whether you are eligible for a scholarship as a UK resident (https://epsrc.ukri.org/skills/students/guidance-on-epsrc-studentships/eligibility)

● Research proposal (max 500 words)

● 2 References

● Certificate of English Language (for students whose first language is not English)

● Other Certificates

Please note that to qualify as a home student for the purpose of these scholarships, a student must have no restrictions on how long they can stay in the UK and must have been ordinarily resident in the UK for at least 3 years prior to the start of the studentship. For more information, please see: https://epsrc.ukri.org/skills/students/guidance-on-epsrc-studentships/eligibility

Application Deadline

The deadline for applications is 9th January 2026.

For specific enquiries contact Dr Iran Roman at i.roman@qmul.ac.uk

For general enquiries contact Mrs Melissa Yeo at m.yeo@qmul.ac.uk (administrative enquiries) or Dr Arkaitz Zubiaga at a.zubiaga@qmul.ac.uk (academic enquiries) with the subject “EECS 2026 PhD scholarships enquiry”.
