What was the main question in your dissertation?
The main question of my dissertation was whether spontaneous facial signals (movements of the face such as eyebrow movements, eye gaze, or smiles) play a role in how we use and process language.
Can you explain the (theoretical) background a bit more?
In everyday life, we naturally communicate face-to-face, seeing each other’s expressions and body language. While facial signals are often studied in relation to emotions like anger or happiness, we know less about the role they play in conversation. Conversations happen quickly, with only tiny pauses between speakers (around 0-200 milliseconds), while forming a sentence takes much longer (at least 600 milliseconds). So we must somehow predict what someone will say in order to keep up with the fast pace of conversation. In my research, I explored whether facial signals help us quickly understand what a speaker is trying to say, allowing for swift responses.
Why is it important to answer this question?
Figuring out how we talk to each other in spontaneous face-to-face conversations is key to understanding how people naturally communicate. It is more representative than studying a single speaker in isolation because it reflects the conditions of everyday conversation. This way, we get a better picture of how language works in real-life situations.
Can you tell us about one particular project?
To find out whether facial signals can help us understand what someone is trying to say, I created an experiment using virtual avatars, which are basically digital characters on a computer. While speaking, the avatars either showed eyebrow movements (frowns or raises) or no eyebrow movements. Participants were asked to press a button to indicate whether the avatar was asking a question or making a statement. It turns out that participants were really good at recognizing questions when the avatars produced eyebrow frowns, especially when the frown occurred early in the utterance. This suggests that early eyebrow frowns in particular are strong signals of questions.
I really like this project because the speech and visual communicative signals that the virtual avatars were animated with were based on the natural behaviour of real people holding conversations. This makes them better for studying human language and cognition in a realistic manner compared to using actors or people who already know what the study is about.
The results of this study show that facial signals play an important role in human communication. The study also offers methodological insights for developing virtual characters, such as avatars or social robots, by showing that adding eyebrow movements like frowns can help convey questions more effectively.
Can you share a moment of significant challenge or failure during your PhD journey and how you overcame it?
At the start of my PhD, the assumption was that I could use software to automatically detect facial movements from videos. But even after attempts to improve it, the software was not reliable enough to trust on behaviour as spontaneous as face-to-face conversations between people. This meant I had to manually annotate people’s behaviour, going through each video frame by frame. Luckily for me, a lot of people ended up joining the ‘annotation team’, and after 1000+ hours of going through video data, we were able to create a large collection of data points that I could use for my first analyses.
What was the most rewarding or memorable moment during your PhD journey?
My work has received some attention in the media (a documentary about my work on the Dutch national youth programme Het Klokhuis and, more recently, a Radio 1 interview on De Nieuws BV), which has been an incredible experience. But what I cherish most from my PhD journey is being surrounded by such interesting, smart, and supportive people. Working in academia can be quite testing at times for everyone, and having a community to share that with (practically but also emotionally) really helps. I will never forget how I felt seeing a large part of the community I had built up reunited in one room with me when I defended my dissertation in the aula of Radboud University.
What do you want to do next?
Currently, I’m a postdoctoral researcher in the lab of Prof. Dr. Martin Pickering at the University of Edinburgh, Scotland. I also work together with Dr. Lauren Hadley and Prof. Dr. Graham Naylor from the Hearing Sciences department of the University of Nottingham in Glasgow. My position is part of a larger project that aims to study how we process language under difficult circumstances. For example, how do we process speech when we are in a noisy environment like a crowded Scottish pub? Are we still able to predict language as we do under regular circumstances, to keep up with the fast pace of conversation? And how do people with hearing impairment do this? For now, I’m very happy to continue my research on what information is used to process upcoming speech and what can affect it. In the future, I see myself continuing research, but I’d like to broaden the topics I work on and/or give them a more practical application for society.