Randolph Quirk Fellow Public Lecture: Professor Martina Wiltschko
When: Thursday, May 22, 2025, 2:00 PM - 3:30 PM
Where: GO Jones Lecture Theatre and online, Mile End Campus
Lecture title: Language makes us think AI knows stuff
Free tickets are available for both in-person and online attendance.
We are delighted to welcome Professor Martina Wiltschko (ICREA/Universitat Pompeu Fabra) to Queen Mary University of London as our Randolph Quirk Fellow for 2025. Join us during the week of 19th May 2025 for a series of three workshops with Professor Wiltschko on consecutive days, culminating in the Randolph Quirk Fellow Public Lecture on 22nd May at 14:00.
Martina Wiltschko is an ICREA Research Professor at the Universitat Pompeu Fabra in Barcelona, Catalonia. She is a theoretical linguist specializing in syntax and its interfaces. Working broadly within the generative tradition, she aims to bridge the gap to other traditions, including functionalist, cognitivist, and interactionalist approaches to language. The big questions she pursues include the following: What is the relation between sound, meaning, and category? What is the relation between language, thought, and communication? What drives human interaction? And how does knowledge of language fit into cognition?
Language makes us think AI knows stuff
It is well known that AI frequently produces false information: output that appears plausible but is not factual. This phenomenon is known as ‘hallucination’ or ‘bullshit’. Yet AI pervades our lives, including domains where one might hope that factuality matters, e.g., medicine, warfare, law, and education. Thus, we find ourselves in a curious situation: Why do we place our trust in an intelligence that is not trustworthy? In this talk I explore this question from a linguistic angle. I argue that one reason for our trust in AI has to do with our unconscious knowledge of language.
I start by showing that there is a grammar of certainty, which is characterized by the absence of marking for uncertainty. That is, when humans are certain about a fact, they use an unmarked declarative sentence (‘It is raining’) to express this certainty. It is only when humans are not certain that sentences must be marked to indicate this uncertainty (‘It might be raining’; ‘Apparently it is raining’; ‘I think it is raining’). I argue that this is an intrinsic characteristic of all human languages, and hence part of our unconscious linguistic knowledge. When someone utters ‘It is raining’, we are led to believe that they know that it is raining. Significantly, AI presents information using such unmarked declaratives. In fact, it is trained to assert that it doesn’t have beliefs or consciousness, and hence will not say ‘I believe that…’. This leaves us with a situation where our knowledge of language leads us to interpret AI output as if AI knows stuff.