What was the main question in your dissertation?
During my PhD, I studied what the brain does when we speak and listen. In particular, I asked whether brain activity is the same when we say or hear the same sentence. So far, language processing in the brain has mostly been studied from the point of view of comprehension (i.e. while listening or reading). Studies of speaking are rarer, because methods for measuring brain activity are very sensitive to movement. Even if we don't realize it, we move a lot when we speak (especially our jaw and head, but also the rest of the body, like our hands). It is also harder to run experiments in which participants speak, because it is almost impossible to maintain experimental control over what participants will say without telling them exactly what to say (for example, by having them read aloud). In my PhD, I tried to partly bridge this gap in our understanding of brain activity during speaking by running several studies that systematically compared brain activity during speaking and listening.
Can you explain the (theoretical) background a bit more?
When we speak, we know our brain lights up to allow us to move our mouth and vocal cords. For a long time, the relationship between brain structure and function (e.g. language processing) could only be investigated after a stroke. The advent of functional magnetic resonance imaging (fMRI) three decades ago made it possible to study brain activity during language processing in healthy participants. Since then, many studies have linked brain activity to specific linguistic processes. However, as mentioned above, most studies of language processing have focused on comprehension.
In my PhD thesis, I compared brain activity during speaking and listening to very similar sentences. We know there will be differences: for example, when hearing someone speak, our auditory cortex (near the ears) will light up. Similarly, when speaking, our motor cortex, the part of the brain controlling movement, is very active. But language is not just about making or hearing sounds. I asked whether the same parts of the brain deal with how people put words together into sentences and with what those sentences mean, regardless of whether they are the ones speaking or listening.
Why is it important to answer this question?
It’s important to have a better understanding of brain activity during speaking because language is exchanged in this form as well: generally, if you’re hearing something, it means someone spoke it. By having a better understanding of the similarities and differences in brain activity between speaking and listening, we can also better understand how language is generated in our brains. Eventually, a good understanding of the relationship between brain and language could help with fixing the system when it breaks down, such as after a stroke.
Can you tell us about one particular project?
In my first PhD project, I focused on delineating the sets of brain regions that are active when we produce or listen to connected strings of words of increasing syntactic complexity, syntax being the rules that govern how we combine words into sentences. Concretely, we compared brain activity while saying or hearing “think, jump, the boy, the girl” with brain activity for “the girl thinks that the boy jumps”. These sequences contain the same words, but arranged according to different (i.e. less and more complex) grammar rules. We reasoned that the brain would have to work harder to process strings of words with more complex grammar. We found that the same network of brain regions was involved in processing the more complex sentences (relative to the easier ones) in both speaking and listening. In addition, some regions responded only in speaking or only in listening, such as the regions involved in movement during speaking and in processing sounds during listening. This study therefore showed that putting words together into sentences may happen in the same brain regions in speaking and listening.
Can you share a moment of significant challenge or failure during your PhD journey and how you overcame it?
All my life I’ve felt uncomfortable being the centre of attention in groups. In the context of my PhD, this meant that I always struggled with presenting in front of colleagues, or even just speaking up in a meeting with more than two people. Of course, this was not ideal in a job where learning to present and speak in front of large crowds is a big part of the work. Surprisingly, I mostly managed to overcome this fear simply through experience. The lockdown period partly helped, because giving talks on Zoom is easier: you don’t have 25 pairs of eyes looking at you in person, just 25 small screens. I think that really helped reduce the automatic stress response that always kicked in with in-person presentations, and I felt more in control. With experience, I became more confident and started almost enjoying the rush of giving a talk. I’m still nervous and I still find it hard, but now I also manage to appreciate it.
What was the most rewarding or memorable moment during your PhD journey?
One of the most rewarding experiences during my PhD was organizing a conference for other PhD students from across Europe and the world. We tried to keep it inclusive by offering both in-person and online options. Organizing the conference, from deciding on topics and inviting keynote speakers to arranging the venue and handling abstract submissions, was a great learning experience! During the conference, we also had a great time, as we had the opportunity to meet many other PhD students doing similar research and going through similar challenges at work. We were happy to share and spend time together after two years of COVID. It was a lot of work, but definitely worth it!
What do you want to do next?
After the PhD, I am continuing to investigate the similarities and differences in brain activity during speaking and listening, using different experiments and analytical approaches. It’s a never-ending journey!
In the future, I also plan to study the relationship between language and the brain in people who have had a stroke and now have difficulties producing or understanding language, a condition called aphasia. I hope that by studying language in the damaged brain, we can gain a different perspective on the differences between speaking and listening, and perhaps one day improve recovery outcomes for stroke patients.