From hands-on workshops on assessment redesign at the University of London to student-led innovations at King’s AI Festival, different CPD events showcased a wide range of initiatives exploring accessible, inclusive and authentic learning in the age of AI. In this latest instalment of our regular ‘Recent lessons’ series, we share our CPD highlights from June and July.
Developing Inclusive Education for Neurodivergent Learners Workshop
The workshop was co-organised by Giorgia Pigato from Queen Mary Academy and Lucia Evans from the Disability and Dyslexia Service (DDS) on 5 June. Participants gathered in person to reflect on and share inclusive teaching and learning practices. Two student volunteers also shared their lived experiences as neurodivergent learners.
Students shared their personal experiences, including the challenge of receiving lecture notes only shortly before the teaching session, or not at all. A student further reflected on how a formal diagnosis can unlock support but also pose barriers, such as difficulty accessing assessments through the NHS or complications with work visas in countries like Australia and New Zealand.
Dr Ruth Rose and Dr Timothy Fulton from the School of Biological and Behavioural Sciences (SBBS) shared their comprehensive approach to student support, which includes:
- Proactive outreach to every student with a Student Support Summary.
- Centring the student in conversations about themselves and co-creating reasonable adjustments.
- Two-way conversations with professional and external services (e.g. DDS/ NHS) as required and within professional boundaries.
- Weekly Neurodiversity Drop-In sessions, offering a safe and consistent space to share successes, explore challenges, and build a supportive community.
Graduate Coach Michelle Taylor discussed her work with an increasing number of neurodivergent students and graduates. John Seamons from the TELT team showcased how digital tools can support inclusive teaching, introducing Microsoft's Accessibility Checker and the Brickfield Accessibility Toolkit (built into QMplus); both help staff identify and fix accessibility issues.
Find out more
Register your interest on the CPD training platform to be notified of new dates for the Developing Inclusive Education for Neurodivergent Learners workshop.
Empowering learning through AI: highlights from King’s AI Festival
As part of our ongoing exploration into AI and education, we attended the King’s AI Festival, a thought-provoking event curated and chaired by Dr Martin Compton. The festival featured lightning talks and showcases across disciplines, with a strong focus on critical AI literacy, co-creation, and inclusive education. Here are some highlights:
- Dr Chahna Gonsalves proposed an expanded Bloom’s taxonomy to include reflection, emotional awareness, and decision-making — skills that can’t be outsourced to AI. Her model provides a foundation for curriculum and assessment redesign in the AI era.
- Isaac Ng, a medical student from King’s, and Dr Mandeep Gill Sagoo introduced ai, a student–staff co-created platform combining AI-powered flashcard generation, chatbot feedback, and virtual patient simulations.
Watch recordings from the Festival on the King's Institute for Artificial Intelligence YouTube channel. To learn more about Queen Mary’s initiatives in this area, hear from Shoshi Ish-Horowicz, Head of Innovation and Learning at Queen Mary Academy, on how the Centre for Excellence in AI in Education is supporting staff and students to engage critically and creatively with generative AI.
Rethinking Assessment with AI: CODE Workshop Highlights
In July, we attended the AI and Assessment Online Workshop organised by the Centre for Online and Distance Education (CODE), which brought together colleagues from across the University of London.
The session began with a clear explanation of the University of London’s three-tier policy on AI use in assessment, illustrated by the figure below, which presents examples across the three permitted levels.
How are we already using AI in assessment?

| Level | Examples |
| --- | --- |
| Level 0: No AI | Timed MCQ and essay-writing exams using Inspera proctoring (e.g. English and Philosophy), with a locked-down browser and record-and-review for a controlled environment. UG Laws and EMFSS use ProctorU or live invigilation in exam centres. |
| Level 1: AI Assistant | Coursework using AI tools (e.g. Studiosity) to assist students during the writing process (e.g. GMBA): draft feedback, time management and deadline planning (Personal Study Plan), questions to a live support team (Study Assist), and support with academic writing and study skills, all within a controlled and regulated environment. Formative and continuous assessment using AI coaching (check-ins to support student learning), e.g. Noodle Factory and the Coursera AI Coach, used to build skills (e.g. in MOOCs) by critiquing, reviewing, and providing feedback on formative tasks, so students can engage in continuous learning conversations with the AI coach. |
| Level 2: AI Integral | Formative tasks (LTHE) using AI coaching (e.g. Noodle Factory), where student responses receive feedback from both peer review and the AI tool; as a follow-up task, students critically analyse the differences between AI and peer feedback in a forum post. A coursework pilot in Divinity asks students to reflect on an AI-generated essay. |
(Source: Rethinking Assessment with AI: CODE Workshop from University of London)
To evaluate how susceptible our assessments are to AI-assisted completion, we conducted vulnerability testing: we used ChatGPT to attempt the current assessments and gauged how easily they could be completed with minimal human input. This hands-on activity highlighted the risks of over-relying on traditional formats such as essays or MCQs.
Building on these insights, we then used the custom AI and Assessment bot created by Dr Martin Compton (King’s College London) to test and redesign existing assignments, with the goal of minimising the risk of students misusing AI tools. We generated ideas such as asking students to critique AI-generated answers, incorporate personal reflection tied to their own learning context, or present their work in a viva-style format. These adjustments not only promote transparent AI use but also align with authentic learning principles. However, participants also raised valid concerns: viva assessments may not scale well for large cohorts, and adding reflective elements can complicate anonymous marking. The group discussion reflected a broader tension between pedagogical integrity and administrative feasibility, highlighting the need for institutional support, assessment literacy, and flexible quality assurance processes when shifting to more AI-resilient designs.
To complement our hands-on testing and redesign efforts, we also explored the QAA document Reconsidering Assessment for the ChatGPT Era, which provides practical guidance on designing sustainable, AI-resilient assessments.
Find out more
Need more guidance on how we are approaching AI and assessment at Queen Mary? Visit Queen Mary’s Staff Guide to Generative AI.