0:04

I'm happy to introduce our next guest lecturer.

That's Eric Shea-Brown of the Department of Applied Mathematics here at the University of Washington. Eric did his PhD at Princeton, with Jonathan Cohen and with Phil Holmes, on the neurodynamics of cognitive control, and after that he did a postdoc at NYU under the mentorship of John Rinzel. Eric and his wife are avid skiers and hikers, they're great cooks, and they have an adorable one-year-old son. Eric's work, and that of his group, concerns the relationship between neural dynamics and coding. They're particularly interested in issues like decision making, chaotic dynamics in neural circuits, and also correlations. And correlations is the topic he'll be talking to you about today. Thank you, Eric.

>> Thank you very much for the introduction. So I'm going to talk to you about the representation of information in large neural populations.

The title is Correlations and Synchrony. So when we think about the representation of a signal, say something that's in the sensory environment, in the spike responses of single cells, a picture like the one you see on the screen here comes to mind. This is from famous Nobel-prize-winning work in the '60s. The idea is there's something in the sensory environment, again, here, the orientation of a visual signal. You can see that changing there on the left. You record from a single cell, here a cartoon of what you might see from the visual cortex. And as that feature of the sensory environment changes, something about the way the spikes you see on the right changes. What is that something? Famously, that's the rate at which the spikes are produced, the simplest statistic, perhaps, that you could imagine. And there's been enormous progress in the field from quantifying this change in rates via something called a tuning curve. So here, at the bottom, you can see firing rates as a function of the sensory variable itself, the angle of this particular visual stimulus, varying in some systematic way.

Okay, so that's the first statistic that matters in terms of how cells respond, in terms of representing information. We gave an example from visual neuroscience, but we see these types of tuning curves, and the covariation of firing rates with, again, something that you might imagine the nervous system wanting to encode information about, in a wide variety of different settings: one almost 100 years old in terms of proprioception, motor neuroscience, and other examples listed at the bottom of the screen.

So here we go, we're off. We're talking about statistics of responses of cells, and again, how those represent signals. What other statistics beyond the rate might we be interested in? Well, first, if we give some sort of stimulus which elicits, on average, say, a 10 hertz response or so, and we count spikes in half a second, on one trial we might indeed see the average occurring, 5 spikes, but on other trials we'll see very different responses. This is not a clockwork type of system. It looks a lot more like popcorn or a Geiger counter: variability, or variance, as represented, say, in a Poisson process.
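To see this trial-to-trial variability concretely, here is a minimal simulation, not from the lecture, of Poisson spike counts at the 10 hertz rate and half-second window just mentioned (the Poisson model is the standard idealization being described):

```python
import numpy as np

rng = np.random.default_rng(0)

rate_hz = 10.0   # mean firing rate elicited by the stimulus
window_s = 0.5   # counting window, half a second

# For a Poisson process, the spike count in a window of length T
# is Poisson-distributed with mean rate * T (here, 5 spikes).
counts = rng.poisson(rate_hz * window_s, size=10_000)

print(counts[:10])     # individual trials scatter around 5
print(counts.mean())   # close to 5
print(counts.var())    # for a Poisson process, variance ~ mean (Fano factor ~ 1)
```

The last two lines are the point: across repeated trials the mean count is 5, but the variance is about 5 as well, so single trials are far from clockwork.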

So, what do we have so far? We've got a bunch of neurons, and we look at one of them. We have a tuning curve, that's the mean response as a sensory variable changes. We also have variability, or variance, around that mean. Two statistics so far, on our journey towards describing population responses. We can do that for one cell, and we can do it for another. Let's grab this blue one over here, and we can see that we can quantify similar statistics, firing rates and variance, as a function of the stimulus.

Is that all there is to it? Can we just repeat this procedure, going one cell at a time, describing possibly different properties of mean and variance in their responses? Or is there something more there, in the neural population, beyond that which we could divine by looking at one cell at a time? Well, the simplest way to get at that question is to look at pairs of cells at once and ask whether, again, in these paired responses, there's more there than you see in just one cell at a time.

We quantify it as follows: put down some time window of length T, measure from a couple of cells, the upper cell here, and another cell, grab it, produce the response down there. Close that window, and simultaneously measure how many spikes these two cells produce, right? So here, in our window of time, the first cell produced two spikes, the second cell three. Slide that window of time along and see what happened, okay? Label these spike counts n1 and n2 for the two cells and ask, well, again, is there more there in those two cells' responses than I could have seen one cell at a time?

How do I do that? Measure something called the correlation coefficient, or the Pearson correlation coefficient. That's just the covariance of these two spike counts, divided by the product of their standard deviations. And you ask, well, are these cells covarying or not? What is this number, is it zero or something nonzero?
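As a concrete sketch of this measurement, here is a small simulation, not from the lecture; the shared-drive model and all of its numbers are made up purely for illustration, but the estimator is the Pearson coefficient just defined:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: spike counts n1, n2 for two cells over many time
# windows, with a shared "common input" term that makes the counts covary.
n_windows = 20_000
common = rng.poisson(2.0, size=n_windows)        # shared drive (made up)
n1 = common + rng.poisson(3.0, size=n_windows)   # cell 1 counts
n2 = common + rng.poisson(3.0, size=n_windows)   # cell 2 counts

# Pearson correlation coefficient: covariance of the two spike counts
# divided by the product of their standard deviations.
cov = np.cov(n1, n2)[0, 1]
rho = cov / (n1.std() * n2.std())

print(rho)   # significantly nonzero: the cells covary via the shared input
```

(`np.corrcoef(n1, n2)` packages essentially the same calculation.) For this toy model the expected value is cov = 2 and variance = 5 per cell, so rho should come out near 0.4.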

By now we have many studies which indicate that this correlation coefficient is significantly nonzero. Now, there are some interesting cases in which these correlation coefficients do seem to be 0, but by and large, again, there are a large number of examples, all the way from the input end of the nervous system to the output, where we see significant departures from independence of the cells, again, quantified by nonzero correlation coefficients rho.

Okay, so it looks like we need to keep going in our effort to describe what neural populations do. We can't just look at one cell at a time; there's more in the joint, or covarying, activity of these two cells than could be discovered by looking at one at a time. But what we haven't yet established is whether that's just some factoid about the way cells fire. Fine, they happen to go at the same time, spike at the same time, with a little bit more prevalence than you might expect by chance, but does that actually matter for the way that they encode information? Encode, for example, the type of simple sensory variables that we have been looking at so far? That's the question, "who cares?", at the bottom of the screen.

Â 6:46

So, this has been studied, and also reviewed; you see a review paper here, Averbeck et al., Nature Reviews Neuroscience '06, and it's been studied and reviewed in the context of neuroscience by a large number of authors. And I want to give you a sense of what the type of results that have been established seem to be pointing to. So, let's look again at the responses of two of our cells, our friends, the blue and green neurons from before, in response to a particular sensory variable, so a drifting grating, say, with an orientation that you see is diagonal, as indicated by my lollipop at the bottom of the screen.

Okay, so let's talk about the mean responses that these two cells produce. They're both firing at some reasonable rate, the blue and the green cell together. And if we make some sort of plot where on one axis we have the spike count coming out of cell one, and on the other the spike count coming out of cell two, we would get some sort of a point, on average, in this two-dimensional space for the mean responses of these two cells.

Now if, on top of that, we were to make the assumption (we know it's probably wrong) that these cells are statistically independent of one another, then their variability would be spread around that mean in some roughly circular way, okay? Fine, so this would be a picture of the cloud of responses that I get out of these two cells, under the assumption that they are independent of one another, not correlated, okay?

Now, I present another stimulus, so my lollipop has moved over by a little bit, and my stimulus is now a little bit more horizontal. What's going on? Both of these cells respond with a slightly higher firing rate, right? So the mean of the distribution has moved up in this two-dimensional space. But we're still assuming they respond in a roughly independent way, so the response distribution, or cloud, is still roughly circular, okay? So those are my two clouds of responses elicited by stimulus one and stimulus two in my pair of neurons, under the assumption that they spike in an independent way.

Hm, now, what if these cells were not independent of one another, and they tended to be correlated in a positive way? In other words, their responses tended to covary. They're still variable, this cloud is extended, okay? But what happens to occur is that both cells tend to fluctuate together, or tend to do about the same thing. They tend to have similar correlated noise, or correlated variability. That means that these responses cluster towards the diagonal, again, where cell 1 and cell 2 are doing approximately the same thing, okay? So, under that correlation assumption, right, my response distribution has gone from a European football to an American football. It is more concentrated, more elongated. My response distribution is going to look something like this for stimulus 1, and for stimulus 2, same thing, right? The mean, once again, shifted up along our axes in both directions, but we maintain our correlation, so that the response distribution again is extended, or elongated, like an American football.

Fine, so that's my picture. Do I care? Well, let's take the organism's perspective, as the saying goes in the research literature, and think about trying to look at the responses of these two neurons and determine, or decode, which sensory stimulus was given. Was it the more diagonal one or the flatter one? Well, you can certainly tell that that task is going to be much more difficult in the presence of these correlations, because these two response distributions overlap more. The conclusion? Correlations can degrade the encoding of neural signals, okay?
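The overlap argument can be checked numerically. The sketch below is not from the lecture: it approximates the two spike-count clouds as bivariate Gaussians with hypothetical means, and decodes by the nearest-mean rule, which for these symmetric clouds reduces to thresholding the summed count at the midpoint:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(rho, n_trials=20_000):
    """Decoding accuracy for two stimuli when both cells' means move together."""
    mean_s1 = np.array([5.0, 5.0])   # hypothetical mean counts, stimulus 1
    mean_s2 = np.array([7.0, 7.0])   # both cells fire more for stimulus 2
    cov = np.array([[1.0, rho], [rho, 1.0]])   # unit variance, correlation rho
    r1 = rng.multivariate_normal(mean_s1, cov, size=n_trials)
    r2 = rng.multivariate_normal(mean_s2, cov, size=n_trials)
    # Nearest-mean decoding: the boundary is the line n1 + n2 = 12.
    acc1 = (r1.sum(axis=1) < 12.0).mean()
    acc2 = (r2.sum(axis=1) > 12.0).mean()
    return (acc1 + acc2) / 2

print(simulate(rho=0.0))   # independent noise: higher accuracy
print(simulate(rho=0.8))   # noise correlated along the diagonal: clouds overlap more
```

The informative direction here is the diagonal n1 + n2, which is exactly the direction the positive correlation stretches, so accuracy drops as rho grows.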

So, we saw this result for two cells. Now, it turns out this is not just a finding that holds for pairs of neurons. If I look at large groups of cells, say, M cells with identical tuning curves, there's a famous argument, from a paper of Zohary, Shadlen, and Newsome, that makes the following point. Let's compute the signal-to-noise ratio of the output of all M cells at once. What's that? That's just the mean response divided by the variance of that response. Okay, so this signal-to-noise ratio is going to grow with M as I include more and more cells in the population.

Let's be careful there: should it be the mean divided by the variance, or the mean divided by the standard deviation? If we have M independent cells, then the mean will grow with M, and the variance will also grow with M. So the quantity that grows with the number of neurons you include in the population will be the mean divided by the standard deviation.

So there's a typo on the slide. Okay, anyway, we have some measure of the signal-to-noise ratio. This is growing with M: include more cells in the population, and the signal-to-noise ratio grows. Does this make sense? Absolutely this makes sense. It's just like doing an experiment over and over again, or flipping a coin, even, over and over again. The more times you do this, if you take a look at the aggregate response, it will have a smaller ratio of the size of the fluctuations to, again, the aggregate response, the mean response. Repeat an experiment many times, aggregate the data, and you get a more accurate result, okay? So this is the type of thing you see if all of the cells are statistically independent of one another. Do more, include more, get more information out.

But what do you see as you include correlation among these variables? So, here's our friend the correlation coefficient rho again. Before, it was zero, all these cells were independent of one another. Now we increase this correlation coefficient, it goes from 0 up to 0.1. And you see something quite interesting happening to this signal-to-noise ratio. It looks like it saturates, even with a relatively wimpy correlation coefficient of one part in ten. So this is the same picture. This is co-fluctuation, or commonality, in the responses of these cells, giving us a noise term that cannot be averaged away as I include more and more cells in the population. The consequence of this is a limitation on the signal-to-noise ratio, a reinforcement of our overall point that we already saw in these perhaps easier-to-understand bubble pictures up at the top. Positive correlation giving you more overlapping responses, giving you less information: a bad news story.

Now, some in the audience are probably already thinking about this option: is this bad news story the only one we can ever read? And the answer is no.
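The saturation follows directly from the variance of a sum of correlated variables: with M cells of mean mu, variance sigma squared, and pairwise count correlation rho, the summed response has mean M·mu and variance M·sigma² + M(M−1)·rho·sigma². A small numerical sketch (mu and sigma are hypothetical):

```python
import numpy as np

def snr(M, rho, mu=5.0, sigma=2.0):
    """Signal-to-noise ratio (mean / standard deviation) of the summed
    response of M cells with pairwise spike-count correlation rho."""
    mean_sum = M * mu
    var_sum = M * sigma**2 + M * (M - 1) * rho * sigma**2
    return mean_sum / np.sqrt(var_sum)

for M in (1, 10, 100, 1_000, 10_000):
    print(M, snr(M, rho=0.0), snr(M, rho=0.1))

# With rho = 0, the SNR grows like sqrt(M) without bound.
# With rho = 0.1, it saturates near mu / (sigma * sqrt(rho)):
# the shared noise term cannot be averaged away.
```

Taking M to infinity with rho fixed gives the limit mu / (sigma · sqrt(rho)), which is why even a correlation of one part in ten caps the achievable signal-to-noise ratio.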

What if I have my friends, the blue and the green cells, arranged as follows? Still the same two stimuli are presented, but now these cells have less similar tuning curves. So that, notice please, when you go from stimulus one to stimulus two, the green cell displays a lower firing rate, but when you go from stimulus one to stimulus two, the blue cell displays a higher firing rate. Well, what are my clouds of response distributions going to look like? Well, in this case, one of the cells has a higher firing rate, but the other cell has a lower firing rate, as I go from one stimulus to the next. And my two response distributions will be arranged across the main diagonal like this.

Now, you can guess what's going to happen when you introduce positive noise correlations. There we go, these two responses become more elliptical, exactly as before. But in becoming more elliptical, they now become less overlapping, or easier to discriminate. The conclusion's in the box here: correlation can have a good news effect as well.

So if we sum up what we learned here, right? These are the two examples, when we were trying to answer the question of "who cares?" about the fact that I see positive correlation, or nonzero correlation, I should say, in many places in the nervous system. We saw that there were a number of different options. There was this bad news story, right, as highlighted by this famous paper, talking about a large group of M cells, or in our simple ellipse picture here, a decrease in information when cells tend to be more homogeneous, or have similar response properties in their means. And a good news story, where if the cells are sufficiently heterogeneous with respect to one another, the presence of these correlations could increase the detectability of the two different signals, the discriminability of the two different signals.
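The good-news geometry can be checked with the same kind of sketch as before (again not from the lecture; Gaussian approximation and hypothetical means), now with the two cells tuned in opposite directions:

```python
import numpy as np

rng = np.random.default_rng(3)

def accuracy(rho, n_trials=20_000):
    """Oppositely tuned pair: stimulus 2 raises cell 1's rate, lowers cell 2's."""
    mean_s1 = np.array([6.0, 6.0])   # hypothetical mean counts, stimulus 1
    mean_s2 = np.array([8.0, 4.0])   # cell 1 up, cell 2 down, for stimulus 2
    cov = np.array([[1.0, rho], [rho, 1.0]])
    r1 = rng.multivariate_normal(mean_s1, cov, size=n_trials)
    r2 = rng.multivariate_normal(mean_s2, cov, size=n_trials)
    # The informative direction is now the difference n1 - n2, which lies
    # ACROSS the diagonal that positive correlation stretches the noise along.
    d1 = r1[:, 0] - r1[:, 1]
    d2 = r2[:, 0] - r2[:, 1]
    return ((d1 < 2.0).mean() + (d2 > 2.0).mean()) / 2

print(accuracy(rho=0.0))   # independent noise
print(accuracy(rho=0.8))   # positive correlation now shrinks noise along n1 - n2
```

Here var(n1 − n2) = 2(1 − rho), so the same positive correlation that hurt before now concentrates the noise away from the informative direction and accuracy goes up.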

Â 18:00

Well, hang on, that's just thinking about 2 cells at once. What happens when I think about 3 cells at once? Is there something really different there? Is there an analog of the American football being different from the European, or international, football, that occurs when you go from two cells to three? And how about from three cells to four? And when is this story ever going to stop?

Now, this is a question that's been around in neuroscience for a long time. Here are some of the references, and the ideas go back, not surprisingly, even before that. But these types of questions have really come to the fore even more strongly with an increase in the scope and scale of array-type recordings. Here's a famous paper from the research group of E.J. Chichilnisky at the Salk Institute, in which a ballpark of 100 or so cells are recorded simultaneously. We really have to think about the statistical scale at which we would describe those cell populations when we're faced with these types of data. An exciting question.

How are you going to do it? How are you going to describe the response of this entire ensemble? How are you going to build up the probability distribution, not just over n1 and n2, as we had in my blue and green pair example before, but over all N cells? And this is more than just an academic question, as these authors at the bottom of the page, for example, have emphasized, when we think about simply the practical process of trying to build up this probability distribution over N cells. Think about it: how many different firing rates, or first-order statistics, how many different tuning curves, would I have to describe? Well, that's going to be N, right? Because we've got N cells. But how many different pairwise combinations are there? Of order N squared. Again, these are really the arguments of these authors down here, Schneidman and Shlens and their colleagues. How many triplet interactions are there? Well, of order N cubed, quadruplet, on and on and on. And if N is reasonably large, this is the appropriate time to make some sort of a galactic metaphor, but you get the picture. We need an intelligent way of doing this, of thinking about these population-wide statistics, which is not a brute-force enumeration of all the joint statistics. There are too many of them to write down, let alone estimate, among the other problems that come about with thinking about such a complicated probability distribution.
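The counting argument is easy to make exact: the numbers of distinct pairs, triplets, and quadruplets are binomial coefficients, and for the roughly-100-cell recordings just mentioned they grow very quickly:

```python
from math import comb

N = 100  # roughly the scale of the array recordings mentioned above

print(N)            # first-order statistics: one firing rate per cell -> 100
print(comb(N, 2))   # distinct pairs      -> 4,950
print(comb(N, 3))   # distinct triplets   -> 161,700
print(comb(N, 4))   # distinct quadruplets -> 3,921,225
print(2 ** N)       # states of the full joint distribution over N binary cells
```

That last number, 2 to the 100, is where the galactic metaphor belongs: the full joint distribution over binary spike words has astronomically more states than any experiment could ever sample.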

How are you going to do this? There is one, there are many different approaches, I should say; there's just one I want to close with. This is my very last slide. It's this, here's an approach. Let's say this full probability distribution over N cells is what actually happens. What if I tried, based on this complete description of all of these cells, to build up my best possible estimate of what happened in all of those N cells by pretending that I could only observe pairs of cells at once, right? So I'm saying, look, I went through that whole first part of that talk, I got what this guy was saying about these pairwise correlations and these ellipses; let's just try to extend this type of description to the population as a whole. What I would get is some model which we'll call P2. Again, the best possible description based on just looking at pairs of cells at a time, okay?

So, again, if I look at just, at most, two cells at once, all I know is how quickly they fire and how correlated all of those individual pairs are. Mm, okay? And then I minimize any further assumptions about the way those cells are interacting with one another. That's equivalent to something called maximizing the entropy. This is absolutely not my idea; this goes back to Jaynes and perhaps even further, and it has been advanced in the neuroscience literature by all of these other authors who you see listed at the bottom of the page, as well as many others. But one idea is, again, to build up this P2 under that assumption, under this minimal-assumptions model. That leads to a particular probability distribution across the whole ensemble, a P2 that looks like this; it has the following special form.
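For reference, in the standard notation of the pairwise maximum-entropy literature (Jaynes; Schneidman et al.; Shlens et al.), that special form is the Ising-like distribution over binary responses x_i ∈ {0, 1}:

```latex
P_2(x_1, \dots, x_N) \;=\; \frac{1}{Z}\,
\exp\!\Big( \sum_{i=1}^{N} h_i x_i \;+\; \sum_{i<j} J_{ij}\, x_i x_j \Big)
```

with the parameters h_i and J_ij fit so that the model reproduces the measured firing rates and pairwise correlations, and Z a normalizing constant.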

We're obviously not going to derive that. The references are here; it's reasonably doable, but also a more advanced topic. But the bottom line is, there's something concrete to compare with in answering the question: is there more there than what is present at the level of pairs? The answers in the research community are mixed, and interesting. This is a contemporary area on the frontier, and I'm looking forward to seeing what we all learn as the field moves in these and other complementary directions in the future. We'll stop there.
