Meet Naomi Nota, a postdoctoral researcher at the University of Edinburgh's School of Philosophy, Psychology, and Language Sciences. Her focus? Unravelling the complexities of language processing in challenging environments. For example, she studies how we process speech amid a cacophony, much like deciphering conversations in a lively Scottish pub. She also investigates the unique challenges faced by people with hearing impairments. Previously, she was a PhD candidate at the Max Planck Institute for Psycholinguistics and the Donders Institute for Brain, Cognition and Behaviour at Radboud University. During her PhD, she delved into the role of visual bodily signals in language processing, and also worked on sentence reading in bilingual children. The overall goal of her research is to find out what information we use to anticipate upcoming speech and what can affect this process.
Hailing from France, Naomi majored in Linguistics at Leiden University. Beyond her academic pursuits, she loves to draw and enjoys outdoor activities like climbing and hiking.
By 2050, around 2.5 billion people worldwide are expected to be affected by hearing loss, a projection driven mainly by population growth and ageing. Hearing loss can make it difficult to follow conversations and can lead to social isolation. In this blog, we explore what really matters for understanding spoken language, and speculate on sound solutions of the future!
A visit to the Saturday food market in the city is an ordinary event. Yet even something so ordinary involves coordinating many simultaneous activities. For example, a market shopper might be talking on the phone to an international friend in English while searching for the most appetizing fruits and vegetables, maneuvering through the crowd, taking out their wallet, and asking to pay in Dutch. We can juggle all these activities so effortlessly because the brain knows these familiar settings, making it easier to predict what is about to come. But how exactly does the brain manage this? That is exactly what Alex Titus is investigating as part of his doctoral research at Radboud University.
People use their bodies when they talk—a lot. For example, we change our posture, move our head, gesture with our arms and hands, and use facial expressions to convey different things. In other words, visual signals form an essential part of human communication, at least in many cultures.
It is the year 2049. We now co-exist with artificial intelligence. Human-like agents are fully integrated into our society and work regular jobs; they act and look entirely human. As in the dystopian film ‘Blade Runner’, inspired by Philip K. Dick’s novel ‘Do Androids Dream of Electric Sheep?’, or the more recent TV series ‘Westworld’, these humanoids can imitate all the outward signals we attribute to consciousness and appear entirely self-aware. In other words, we have reached the so-called technological singularity: the point at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
Spotting a fake smile can be harder than you might think. Although humans are wired to be social, most people are surprisingly bad at recognizing whether a smile is genuine.