path: root/transcripts/hplus-summit-2010/d1s3-lauren-silbert-greg-stephens-uri-hasson

Hi, I am going to talk about some work in collaboration with Greg and Uri. Okay. 

So, communication is naturally an interactive process, the goal of which is to transfer information from one brain to another. We all understand the importance of communication in our society, and the smaller the world gets, the more important it is to understand how communication works. Yet we know very little about the underlying neural mechanisms of communication, for two reasons. First, neurolinguistic studies have been constrained to the boundaries of one brain, looking at the comprehension or the production of speech in isolation. Second, communication occurs in a complex environment, while as scientists we have mostly used simple stimuli, such as single words isolated from their context, to study the process. In reality we are always interacting with other brains, and communication is a really complex thing. In order to assess it, we need to develop new experimental paradigms.

We did this first by having a speaker in an fMRI machine (me) tell a story, as naturally as possible, about my past, while measuring my brain responses. We used an orthogonal optical microphone that cancels out the intense fMRI scanner noise in real time. We then play the recording back to 11 listeners in the fMRI machine, so that we can capture the neural dynamics on both sides of the communication. Then we build a model in which we take the speaker's brain dynamics in one specific voxel and use them to predict the responses in the corresponding voxel of the listener's brain, and we do this across the entire brain. We are using the speaker's brain as a predictor.
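At its core, the voxel-to-voxel model described here amounts to a regression: the speaker's time course in one voxel serves as a linear predictor of the listener's time course in the matching voxel. A minimal sketch, with made-up toy numbers standing in for the fMRI signals (illustrative only, not the authors' analysis code):

```python
# Toy sketch of the voxel-wise coupling model: the speaker's time course in
# one voxel is used as a linear predictor of the listener's time course in
# the corresponding voxel. All data below is invented for illustration.

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def fit_voxel_model(speaker, listener):
    """Least-squares fit: listener ~= beta * speaker + intercept."""
    n = len(speaker)
    mx, my = sum(speaker) / n, sum(listener) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(speaker, listener)) \
           / sum((a - mx) ** 2 for a in speaker)
    intercept = my - beta * mx
    return beta, intercept

# A listener voxel that perfectly tracks the speaker voxel (toy case).
speaker_voxel = [0.0, 1.0, 0.5, -0.5, 1.5, 0.2]
listener_voxel = [2 * s + 1 for s in speaker_voxel]
beta, intercept = fit_voxel_model(speaker_voxel, listener_voxel)
```

In the study this fit is repeated independently for every voxel pair across the whole brain, which is what lets the coupling be mapped anatomically.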

The benefit of this model is that we bypass any need to commit to an a priori processing model, so we can look directly at the interaction between speaker and listener. Further, communication unfolds over a time course: as a speaker, you have to think about what you are going to say, form a motor plan, and execute that motor plan; as a listener, you analyze the sound and extract meaning from what you are hearing. So in order to capture what happens during communication, we have to add this temporal dynamic to the model.

We take the response from the speaker and shift it back in time, up to 6 seconds; these lags represent moments where the speaker's brain activity precedes the listener's. We also shift it forward, which captures an anticipatory effect, where the listener predicts what the speaker is about to say. Using this new model and this new methodology, we can ask: what really happens between two people during communication?
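The temporal shifting can be sketched as computing the speaker–listener correlation over a range of lags and finding where it peaks; under the convention below, a positive peak lag means the speaker's responses lead the listener's, and a negative one means the listener anticipates. Again a toy illustration with invented data, not the study's actual regression model:

```python
# Toy sketch of the lagged coupling model: shift the speaker's time course
# relative to the listener's and compute the correlation at each lag.
# lag > 0 means the speaker leads; lag < 0 means the listener anticipates.
# All data below is invented for illustration.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def lagged_correlation(speaker, listener, max_lag):
    """Map each lag to a correlation; lag > 0 means the speaker leads."""
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            s, l = speaker[:len(speaker) - lag], listener[lag:]
        else:
            s, l = speaker[-lag:], listener[:len(listener) + lag]
        corrs[lag] = pearson_r(s, l)
    return corrs

# Toy data: the "listener" signal is the "speaker" signal delayed 2 samples,
# so the correlation should peak at lag = +2 (speaker leads).
speaker = [0, 1, 2, 0, 3, 1, 4, 0, 2, 5]
listener = [0, 0] + speaker[:-2]
corrs = lagged_correlation(speaker, listener, max_lag=3)
best_lag = max(corrs, key=corrs.get)
```

The study's model uses these shifted copies as simultaneous regressors rather than picking a single best lag, but the peak-lag picture is the intuition behind "speaker precedes" versus "listener anticipates".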

So, for example, how similar are two brains? Is it possible that in order to comprehend something, you also have to produce it? And if this kind of coupling between speaker and listener indeed underlies communication, you should be able to predict the success of the communication from the responses of the speaker and the listener. We're going to skip to the most interesting results.

What we find is that a speaker's brain responses are indeed coupled to the listener's responses. Here's a map of the speaker and listener brains when they are doing the same thing, and a simple illustration of a raw time course registered in the brain of a listener; the raw time-course responses are correlated. This coupling between listener and speaker shows up in early auditory cortex (A1) and in Broca's and Wernicke's areas, but also in areas of the brain more often implicated in higher-level cognition, like the dorsolateral and medial prefrontal cortex and the temporoparietal junction. What this suggests is that the speaker's brain during production and the listener's brain during comprehension are coupled in the same brain areas.

So then we answered some basic questions. First of all, is the coupling really tied to the content, or is it a low-level processing effect? We had a Russian speaker do the same thing, with non-Russian listeners, and there is no coupling with the Russian speaker. Second, is it possible that the result arises simply because the speaker is listening to himself? Each regressor in this model captures a different moment in the interaction between speaker and listener: some lags capture the listener following the speaker, and others, on the other side, capture anticipation and prediction. In listener-listener coupling, all of these moments are time-locked to the moment of vocalization. In speaker-listener coupling, the speaker's brain responses precede the listener's responses by about 1 to 3 seconds, which suggests that the speaker is inducing similar brain responses in the listener, not simply listening to himself. When we look at the spatial distribution of these beta weights, we see that in early auditory areas the speaker-listener coupling is time-locked to the moment of vocalization; in posterior areas, in blue here, the listener's responses follow the speaker's; while in the frontal cortex the listener actually precedes the speaker.

So, next we asked: what can we do with this information? What's the practical use? If this coupling is indeed the basis of communication, we should be able to predict the success of the communication from the speaker-listener coupling. We measured the extent to which each listener understood the story with a basic comprehension assessment, and ranked the individual listeners. Indeed, we find that the degree of neural coupling predicts the success of the communication. The areas in red, the areas where the listener's brain precedes the speaker's, show the strongest correlation with success, so prediction plays a very important role in communication. In sum, we see that over the course of communication the speaker's and listener's brains become coupled, and the ability to invoke similar brain patterns in another individual may underlie communication and understanding.
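This final step, relating each listener's coupling strength to an independent comprehension score, can be sketched as a rank correlation across listeners. The per-listener numbers below are invented for illustration; the actual study used whole-brain coupling measures and a behavioral comprehension assessment:

```python
# Toy sketch of the brain-behavior analysis: rank-correlate each listener's
# neural coupling strength with their comprehension score. All numbers are
# invented for illustration.

def rankdata(values):
    """1-based ranks of a list of numbers, with ties averaged."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r computed on the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-listener values: coupling strength vs. comprehension score.
coupling = [0.12, 0.30, 0.22, 0.45, 0.08]
comprehension = [55, 78, 70, 90, 40]
rho = spearman_rho(coupling, comprehension)
```

A rank correlation is a natural choice here because the talk describes ranking the individual listeners by comprehension rather than assuming the scores are on a linear scale.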

By recording an interaction between a speaker and a listener, this methodology can be used to assess verbal and non-verbal communication in any model system. I think that studying people as they really interact, as opposed to an isolated brain, is a really exciting next step toward understanding how societies interact and form.

Thank you!