0:01

Hi. In this part of the lecture, we're moving

beyond Hodgkin-Huxley to think about simplified models.

So, can one build simple models that capture the behavior of true neurons,

but that are either analytically tractable, so that one can do some analysis on them

and understand, maybe, how different ion channels contribute to their interesting

dynamics, or else usable in a large-scale

simulation, with models that are as simple as possible, that is,

that involve as little computational time as possible, and yet capture the

relevant and interesting dynamics of real neurons?

So here are a few different examples of firing patterns from real neurons being

driven by a noisy input. On the top, you see a cortical neuron early

in development, and then here thalamic neurons that have been recorded under

different depolarizations. So here, in particular, you can see a

very characteristic bursting pattern, where a bunch of spikes are generated in

a clump. And at a different depolarization, those

bursts almost disappear, and you get single spikes, more like in

the case of the cortical neuron. And finally, here's a motor neuron.

So in this case, you see very regular firing.

Motor neurons tend to fire very regularly, and the noise leads only to

small deviations in the regular timing of spikes.

1:18

So, we see that neurons can have a wide range of firing patterns, which come

about partly because of the nature of their dynamics, and partly because of the

nature of their inputs. Let's look at some potential examples of

firing patterns. Imagine that a neuron fired regularly

like this. And to a second input, it also fires

regularly, but with a different spiking interval.

So one might feel comfortable thinking about this neuron's behavior as

expressing a rate code. The spike frequency signals the input.

What, though, if we now had this case? Here, the mean frequency is the same, but

now the firing times of spikes are shifted slightly.

So, we might imagine that these little changes in local frequency encode

stimulus information, much like frequency-modulated, or FM, signals.

In the next case here, the mean firing rate might still be important.

But there's so much variability in timing that it suggests that precise spike

times might mean something distinct about the input.

And what about this final case? Here now you see that there are perhaps

two distinct symbols in the code. This looks like the bursting that we saw

in the thalamic neuron. Are these single spikes signaling something different

than these groups of spikes, these bursts?

So, neurons are capable of firing with these many different kinds of outputs.

And if we're trying to come up with a reduced model, we'd like to aim for one

that would allow us to represent these different behaviors.

So try to keep this range of different behaviors in mind as we go through

different ideas about how to make reduced or simplified model neurons.

2:53

Let's start with the simplest case. Let's just try to write down an equation

for V that does something like what a neuron does.

So we have a differential equation that looks like this, and our task is to

find a good function f of V that makes the model do what we want it to.

So as we observe, the behavior of the neuron can be quite close to linear as

long as it's not near spiking. So how bad would it be to assume that we

simply have a linear neuron? That is, an equation such as we found

for the passive membrane. Note that, from now on, I'll set the

capacitance equal to 1, so we don't have to carry constants around.

So I'm drawing this case above. So here, f of V is simply minus a times V minus V

naught. So how do the dynamics of such a neuron

look? So here's our equation for the voltage.

Let's, for now, leave aside our input. So f of V is minus a, V minus V naught.

We have a fixed point where dV dt equals zero.

That is, at V equals V naught. Now, how do the dynamics look above and

below that fixed point? If you have a voltage which is on this

side of the fixed point, then dV dt for that value of the

voltage is positive. So the voltage increases, and that's true

everywhere along this part of the line. On this side of the fixed point, however,

dV dt is negative. And so anything that's out here moves

back toward V naught. That's what makes V naught a stable fixed

point. But how do we get a neuron like this to

fire a spike? We need to add in a couple of things.

So for one thing we need to say that there's some threshold.

So as I move around in V, although I'm always being drawn back to this fixed

point, if I happen, because of the addition of some input, to be pushed up

to some threshold voltage, what I'm going to do is, set this equal to the

time of a spike, and just jump myself up to a maximum.

And so, if we just plot what that looks like in time, we have some voltage that's

varying along. It hits this value of the threshold, V

thres, and instantaneously we're going to set that equal to the maximum of the

spike. And the next thing we're going to do is

take that voltage and reset it. We're going to take it back to some V

reset out here, and now the input will continue to push it around, but starting

at that new reset value. And so you can see that this mimics

pretty well what spike trains look like. Let's have a look at that directly.

So this is like the passive membrane. Remember the equation that we wrote

down earlier for the passive membrane. This captures that linear behavior.

It has the additional rule that when V reaches the threshold, a spike is fired.

And then, it's reset. And V naught is just the resting

potential of the cell. So here's an example of how the

integrate-and-fire model works in response to a particular input.

It might be hard to distinguish that from a real spike train.
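The threshold-and-reset scheme just described can be sketched in a few lines of code. This is a minimal illustration, not a fitted model: all parameter values here are made up for the example.

```python
import numpy as np

def simulate_lif(I, dt=0.1, a=0.1, v0=-65.0, v_thresh=-50.0,
                 v_reset=-70.0, v_max=30.0):
    """Leaky integrate-and-fire: dV/dt = -a*(V - v0) + I(t), with C = 1.

    When V crosses v_thresh, we record a spike time, paste on a peak at
    v_max, and reset V to v_reset.  All parameter values are illustrative.
    """
    V = np.empty(len(I))
    v = v0
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (-a * (v - v0) + i_t)   # Euler step of the linear dynamics
        if v >= v_thresh:                 # threshold rule: spike and reset
            spikes.append(t * dt)
            V[t] = v_max                  # pasted-on spike peak
            v = v_reset
        else:
            V[t] = v
    return V, spikes
```

Driving this with a noisy input trace gives voltage traces much like the recordings shown earlier, with the caveat that the spike shape itself is entirely artificial.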

So while the integrate-and-fire model has a lot of advantages and certainly captures

some basic properties of neurons, one can come a lot closer to the true dynamics of

neurons. So, in the integrate-and-fire model, we had

to paste on the spike to make it excitable.

How can we make this model intrinsically excitable?

6:18

So what we need to do is to add some more structure to our f of V.

What we need to do is to give f of V a range where the voltage can, in fact,

increase. So now, what have we done here?

We've added in another fixed point. So we still have here a stable fixed point.

Now, what's up with this new fixed point? So remember that here, the voltage heads

toward the stable fixed point. What's going to happen as we cross this

new fixed point? So now, with voltages larger than

this value, you can see that this dV dt is now positive and we're going to start

heading out to larger and larger values. And so with dynamics like this,

what happens is that one crosses this effective threshold.

So now, if you have some input that takes you above this value, the

voltage is just going to increase on its own.

So that means we still need a couple of extra pieces, as we needed for the

integrate-and-fire neuron. We're going to add a maximal voltage. Not

a threshold; now the threshold is determined intrinsically by the crossing

of this unstable fixed point of f of V. But we need some maximal voltage

beyond which the spike cannot continue to increase.

And when we reach that voltage, we're going to reset again back to some reset

value. One example of a form of f of V that

works quite well is simply a quadratic function.
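A minimal sketch of that quadratic choice: writing f of V as a product of two factors puts the stable fixed point at the rest voltage and the unstable one at the threshold voltage. The parameter values here are hypothetical, chosen only to make the example run.

```python
import numpy as np

def simulate_qif(I, dt=0.01, a=1.0, v_rest=-65.0, v_thresh=-55.0,
                 v_max=30.0, v_reset=-70.0):
    """Quadratic integrate-and-fire (illustrative parameter values).

    f(V) = a*(V - v_rest)*(V - v_thresh) has a stable fixed point at
    v_rest and an unstable one at v_thresh; above v_thresh the voltage
    runs away on its own, so the spike upstroke is intrinsic.  We still
    need the maximum v_max and the reset.
    """
    V = np.empty(len(I))
    v = v_rest
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (a * (v - v_rest) * (v - v_thresh) + i_t)
        if v >= v_max:          # cut the intrinsic runaway off here...
            spikes.append(t * dt)
            v = v_reset         # ...and reset
        V[t] = v
    return V, spikes
```

Note that, unlike the linear model above, no explicit threshold test appears in the dynamics: the unstable fixed point does that job by itself.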

So another example of a choice of f of V that's been shown to fit cortical

neurons very well is the exponential integrate-and-fire neuron.

Now here, we choose f of V so that it has an exponential piece,

so that the subthreshold part of the dynamics is linear, plus an

exponentially increasing part that mimics the rapid rise of the spike.

And again we have to add a maximum and reset.

So this model has an important parameter, delta, which governs how sharply

increasing the nonlinearity is. So here's a strongly related example of a

one dimensional model that gets a lot of use.

This is called the theta neuron. And in the theta neuron, the voltage is

thought of as a phase, theta. When the phase reaches pi, here, we call

that a spike. So what's neat about using a phase

instead of a continuous variable, like voltage as before, is that as soon as you

pass through pi, you wrap around to minus pi and that gives you a built-in reset,

so you don't need to add that extra part into the dynamics.

So the dynamics are given by this equation here.

This has been shown to actually be equivalent to the one dimensional voltage

model with a quadratic nonlinearity. This model also has a fixed point, V

rest, and an unstable point, V thresh, which acts like a threshold.

Now, this model fires regularly, even without input. So now, let's imagine

that I is zero. You can see that these dynamics are still

firing regularly. Because they'll continue to oscillate, the theta

neuron is often used to model periodically firing neurons.
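One standard form of the theta neuron dynamics is d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * I. Here is a small sketch using that form; the initial phase and input values are illustrative.

```python
import numpy as np

def simulate_theta(I, dt=0.001, theta0=-3.0):
    """Theta neuron: d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * I.

    The phase theta lives on the circle; passing through pi counts as a
    spike, and the wrap-around to minus pi is the built-in reset, so no
    extra threshold or reset rule is needed.
    """
    theta = theta0
    spikes = []
    for t, i_t in enumerate(I):
        theta += dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * i_t)
        if theta >= np.pi:          # crossed pi: call it a spike...
            spikes.append(t * dt)
            theta -= 2.0 * np.pi    # ...and wrap around to minus pi
    return spikes
```

With any constant positive input this fires periodically, which is exactly why the model is popular for periodically firing neurons; with sufficiently negative input the phase settles at a stable fixed point and no spikes occur.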

So aesthetically, let's say, we're still a little pained by this construction of the

maximum and the reset, or even the reset on the, on the phase variable.

Is there anything else we can do to improve this simple model?

How might we prevent our spike from increasing to infinity, apart from

putting some maximum on it? So, let's try the following.

So what does that do? Now, there's another fixed point, here. So we still have our

stable fixed point. We have an unstable fixed point, which acts as our threshold.

And now, we have another fixed point. Is it stable or unstable? Let's just check it.

So here we're increasing. There we're decreasing. Here we're increasing. And here

we're moving back toward that fixed point, so this is a stable fixed point.

Hopefully it will be sort of intuitive by now that you can tell whether a fixed

point in this one-dimensional representation is stable or unstable just by looking

at the slope of f of V at that point. Whenever the slope is negative, that's a

stable fixed point. And if the slope is positive, it's an unstable fixed point.

So now we have this third fixed point. What are the dynamics? Once we get above our

threshold, we increase. And instead of increasing without bound, we go to this

fixed point. So that's great. However, the problem is that the voltage stays there.

The system is called bistable. In order to allow the dynamics to come back from

that stable fixed point, let's remember what happened in Hodgkin-Huxley.

Actually, two separate mechanisms helped to restore the voltage back to rest.
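The slope rule for one-dimensional fixed points can be checked numerically. Here is a small sketch using a hypothetical cubic f of V with fixed points at 0, 0.5, and 1, standing in for rest, threshold, and the upper spike-top fixed point; the function and values are made up for illustration.

```python
def classify_fixed_points(f, roots, eps=1e-6):
    """Classify each fixed point of dV/dt = f(V) by the slope rule:
    f'(V*) < 0 means stable, f'(V*) > 0 means unstable.
    The slope is estimated with a central finite difference.
    """
    out = {}
    for v in roots:
        slope = (f(v + eps) - f(v - eps)) / (2 * eps)
        out[v] = "stable" if slope < 0 else "unstable"
    return out

# An illustrative cubic f(V): stable rest, unstable threshold,
# and a second stable fixed point at the top (the bistable case).
f = lambda V: -V * (V - 0.5) * (V - 1.0)
print(classify_fixed_points(f, [0.0, 0.5, 1.0]))
# → {0.0: 'stable', 0.5: 'unstable', 1.0: 'stable'}
```

The upper stable fixed point is exactly the problem described above: once the voltage gets there, it stays.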

10:53

One was the inactivation of the sodium channel, switching off the drive toward

the sodium equilibrium potential. And the other was that the potassium

channel activated, pulling the voltage back toward the potassium equilibrium

potential. So here we need to do something similar

to pull the voltage back toward rest. And that is to include a second variable

to take care of inactivation. So that's done here by including this

second variable, u. So u here decays linearly, but it also

has a coupling with V. So this function of voltage means that

when the voltage gets large, u is also driven to be large.

Then we couple this inactivation variable into the voltage equation.

One would want the function G(u) to be negative, so that a large u pulls V down

again. So this leads us to the consideration of

models that have two dynamical variables. Now, instead of drawing my f of V against

V, we need a new kind of plot, called a phase plane diagram.

The phase plane is just the plane defined by the dynamical variables V and u.

Now, our understanding of how the model behaves is organized not just by

identifying the fixed points as we were doing so far, but looking at the entire

line of points where either one or the other variable has zero derivative.

So, we can define these nullclines. Here's the V nullcline, the line

on which dV dt equals zero. So we set this equation equal to zero.

That's going to give us a function of u with respect to V, if we solve this

equation. And here it is.

Similarly, there's a u nullcline, at which du dt equals zero, and that defines

this other curve. For most neural models, nullclines have

shapes something like I've drawn here. In this particular case, there's one

fixed point that is a true fixed point, where both dV dt equals zero and du dt

equals zero. And that's here.

This is the resting state. So now we can think about what happens if

we start out at some particular value of V and u.

We're going to head out in a trajectory whose velocity has a component

in the V direction, given by dV dt evaluated at V and u,

and a component in the u direction, given by du dt

at V and u. The nice thing about these nullclines is

they give us a sense of how trajectories in this two-dimensional plane will work.

So this green curve divides parts of the plane in which the voltage is either

increasing, down here on this side of the green curve and decreasing here.

Whereas the red curve divides regions of the plane in which u is either increasing

or decreasing, on this side of the red curve.

So if we start near rest, with an input that now takes us out into some larger

voltage range, the nonlinearity in voltage now says that we start to move

quickly in V. We're now going to undergo what will look

like a spike. And now that we've crossed that green

line, remember that the direction of the voltage changes, so we're

going to come backward. We wrap around in this direction and move that

way, but we still need to increase in u in this half of the plane.

Now we've crossed that red line. Now we start to decrease and we come back

this way. And so we have a spiking trajectory.

If we now plot the voltage as a function of time, it starts small and rapidly

increases. Then, depending on how quickly it moves

along this part of the nullcline, it will gradually come back again.
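The structure just described, a cubic V-nullcline plus a slow linear u-nullcline, is essentially the FitzHugh-Nagumo model, so it makes a compact concrete example. The parameter values below are the commonly used textbook ones; the starting point and inputs are illustrative.

```python
import numpy as np

def simulate_fhn(I, dt=0.01, eps=0.08, a=0.7, b=0.8):
    """FitzHugh-Nagumo, a standard two-variable model:
        dV/dt = V - V**3/3 - u + I      (cubic V-nullcline)
        du/dt = eps * (V + a - b*u)     (linear u-nullcline, slow)
    The slow recovery variable u plays the role of inactivation,
    pulling V back down after the upstroke.  No pasted-on maximum or
    reset is needed: the trajectory loops through the phase plane.
    """
    n = len(I)
    V = np.empty(n)
    U = np.empty(n)
    v, u = -1.2, -0.6                  # start near the resting fixed point
    for t, i_t in enumerate(I):
        dv = v - v**3 / 3.0 - u + i_t
        du = eps * (v + a - b * u)
        v += dt * dv
        u += dt * du
        V[t], U[t] = v, u
    return V, U
```

With zero input the trajectory sits at rest; with a sustained input of, say, 0.5, the resting fixed point loses stability and the trajectory circulates around the loop described above, producing repetitive spiking.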

So there's an enormous amount of richness and fun to be had by analyzing

neuronal dynamics like these in the phase plane. This is a very

simple example; there can be multiple fixed points, limit cycles, all different kinds

of bifurcations and dynamics as the input changes.

Since there's no way we can do justice to this in this course, I'm not going to go

into any of this. It would actually be a great point from which to branch out an

entire second course on neuronal dynamics and phase plane

analysis. So luckily for you, there's a great book

available by Eugene Izhikevich if you want to explore this direction.

The reference is posted on the website. There are also a lot of resources online

by scholars like Wulfram Gerstner and Bard Ermentrout, our white knight of the

previous slide. And also generally worth perusing is

Scholarpedia. What I will do, however, is introduce you

to one final model that's inspired by all this richness.

And that's the so-called simple model. Izhikevich and others have noted that if

you zoom in here, to this part of the phase plane, you can pick off the

important dynamics that generate a lot of the nice behavior of real neurons.

16:11

So what this does is, as V gets larger, that's going to drive u to get

large. The coupling here is now going to

decrease the voltage as u gets large. So that forms the basic role of

inactivation. So this reduced model is certainly not

complete. So like in the cases that we've just

left, we've thrown away the higher order dynamics in voltage that allow it to

restore itself from a spike. So we have to go back to putting in a

maximum and a reset. So these are parameters of the model.

The u variable also needs a reset. So one is left with one, two, three, four

parameters, and these four parameters determine the decay rate of u,

the sensitivity of u to changes in V, and the

reset of V and u. So here's a range of very different kinds

of firing patterns from very different kinds of neurons, sort of being generated

by different choices of these four parameters.
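For reference, the published form of the simple model (Izhikevich, 2003) can be sketched as below; the default parameter values are the regular-spiking ones from that paper, while the input values used here are illustrative.

```python
import numpy as np

def simulate_izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich's simple model:
        dv/dt = 0.04*v**2 + 5*v + 140 - u + I
        du/dt = a*(b*v - u)
    with the rule: if v >= 30 mV, then v <- c and u <- u + d.
    The four parameters set the decay rate of u (a), its sensitivity
    to v (b), and the post-spike resets of v (c) and u (d).
    """
    n = len(I)
    V = np.empty(n)
    v, u = c, b * c                     # start at the resting values
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike peak reached:
            spikes.append(t * dt)
            V[t] = 30.0                 # record the pasted-on peak...
            v, u = c, u + d             # ...then reset v and bump u
        else:
            V[t] = v
    return V, spikes
```

Swapping in other published (a, b, c, d) quadruples reproduces the other firing patterns discussed below, such as intrinsic bursting or fast spiking.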

So these are all model fits to different kinds of real neurons, and they've

been fit using just those four parameters. So you can see that you can

get simple regular spiking dynamics, like you get from an integrate-and-fire

neuron. You can also get neurons that do

intrinsic bursting that have these very rapid sequences of spikes.

You see bursts that are punctuated by these long inactive periods.

You see fast spiking, low, low threshold spiking.

You see spike frequency adaptation, where the firing of this neuron starts off

rapid and gets slower and slower. This [UNKNOWN] cortical neuron has a

burst of spikes and then no firing. And here you see something nice that you

actually can't get from an integrate-and-fire-like neuron.

You see subthreshold resonance. That is the propensity of the neuron to

oscillate in response to an input. This is something that can only be

achieved with two-variable systems, because the two variables can play off

against each other. This can't be captured by a regular

integrate-and-fire neuron. That was a painfully brief and partial

overview of the world of simplified models.

As you can imagine, this is something of a mathematician's playground.

So there's a lot to be found out there if you'd like to do more reading.

We're going to continue in the next part of today's lecture to go in the other

direction, to look at the gory reality of neurons and try to understand

how one can model complicated dendritic arbors.