An interview with Dr. Ezgi Mamus
What was the main question in your dissertation?
We experience the world through our senses: we see, hear, smell, touch, and taste things. Each sense offers us unique information but also certain limitations. Together, they shape how we understand objects and events, and thus concepts. For example, when a car passes by, we see it moving fast and also hear the whooshing noise it makes. Both visual and auditory cues inform us about the speed of the car. What happens when one of these cues is absent, as in the experience of individuals who are blind from birth? In my thesis, I investigated how our perceptual experience influences the way we use language in speech and hand gestures, as well as the meanings behind our words.
Can you explain the (theoretical) background a bit more?
Theories disagree on how much the way we think and speak is linked to our physical experiences. One way to study this is to compare people who experience the world with and without a particular sense. I focused on vision and examined how being blind from birth affects the language people use to describe objects and events.
Vision is unique because it provides a complete view of objects and events, whether they are near or far. Since we see continuously, we can keep track of movement, location, and how things relate to each other in space all at once. This makes vision dominant over other senses in how we understand space. So, it is important to examine how language changes when people learn and describe spatial information without visual experience.
Only a few studies have looked at how blind people use language to talk about spatial information, and their results have been mixed. Additionally, no study had tested how much or what type of information a sighted person can extract when they watch an event versus when they only hear the sound of the event. My thesis aimed to fill in these gaps in existing research by focusing particularly on the experience of people who are blind from birth.
Why is it important to answer this question?
Our experiences involve multiple senses, and we learn from different types of input. However, we often do not realize how much we rely on vision or, on the other hand, how much we can gain from other senses like hearing.
The dominance of the visual sense is also present in scientific research. For example, most studies on how people communicate using both language and hand gestures have used only visual materials without considering how the type of input might affect their results. In one of my studies, I found that the type of sensory input through which we learn about an event influences how sighted people speak about it.
On the other hand, research shows that relying exclusively on sound or touch shapes how blind people form mental maps of space. Unlike vision, which provides information all at once, auditory and tactile information arrives one piece after another. As a result, blind people tend to build spatial maps in a step-by-step manner (sequentially) and have a more egocentric perspective of space, where object locations are defined relative to their own position.
For example, they might say, “The bookcase is on my left” instead of “The bookcase is in the corner of the room”. Thus, speech and gesture conveying spatial information may be affected by changes in the spatial cognition of blind people. Identifying the unique contribution of each sensory input is essential for a better understanding of how the human mind works.
Can you tell us about one particular project (question, method, findings, implications for science or society)?
In one study, I created spatial scenes using event sounds. For example, participants heard footsteps moving away, a door opening, and someone walking into a room. They sat in the center of five speakers, allowing them to perceive the sounds as if the events were happening around them from different directions. My goal was to provide both sighted and blind participants with the same type of input. Unlike previous studies, in which sighted and blind participants experienced the events through different types of input, all participants in my study experienced the events through sound. After listening, they described what they had perceived.
I found that blind participants consistently used egocentric references when describing locations, while sighted participants rarely did. For example, a blind participant might say, “Someone left the room on my left, ran toward me, and took the elevator on my right”. In contrast, a sighted participant might simply say, “Someone ran into the elevator”. This suggests that blind participants processed spatial events in a more sequential way, breaking them down into smaller parts and focusing more on object locations relative to their own position.
These findings have implications for designing more intuitive navigational tools for blind people. When giving directions to a blind person, it is important to include more landmarks and describe locations in relation to the person’s own position to make routes easier to navigate.
What inspired you to choose your research topic?
As a bachelor’s student, I was impressed by the blind students at my university and how smoothly they navigated their surroundings. Since vision is often considered essential for navigation, I was curious about how they managed so well without it. While research had explored how blind people learn and navigate space, there were only a few studies on how they use language to describe spatial information. This gap sparked my interest, and I wanted to explore the topic further.
What was the most rewarding or memorable moment during your PhD journey?
I had a memorable moment with one of my blind participants. In the experiment I mentioned earlier, there was an additional task: after describing the events, participants listened to them again and silently localized the direction of movement. This task seemed too easy for my participant, and he wondered why I had given him something so simple. When I explained that it is not easy for everyone, especially sighted people, he was surprised to learn that blind people tend to outperform sighted people in sound localization tasks. This moment stuck with me because it highlights an important idea—everyone has unique strengths and experiences. There is not one “right” way to perceive or navigate the world; we all do it differently.
What do you want to do next?
I am currently a postdoctoral researcher in the Multimodal Language Department at the MPI. I’m continuing to work on the gesture use of blind people, but now I’m using a more quantitative approach. For example, I compare the kinematic features of gestures—such as size, speed, and precision—between blind and sighted speakers as they describe spatial events.
