0:02

In this final part of the lecture, we're going to continue to go beyond Hodgkin-Huxley, but this time in a different direction. We're going back to biophysical reality and addressing the issue of geometry: how do the complexities of neuronal shape and structure affect computation? In the first lecture we invested heavily in understanding the spike generation process in a patch of membrane, here at the easy end. But it's a little embarrassing to zoom out and look at real neurons, which have a truly extraordinary range of beauty and complexity in the geometry of their dendritic arbors. So if we're moving toward building biophysically realistic models of neural processing, it would be good to know how these structures can contribute to the processing of information.

0:46

So, here's what we learned to model, a compact cell, and here's what real neurons really look like; this is even quite a simple example. As with the complexities of ion channel dynamics, we can ask: what is the appropriate level of description of a single neuron that's necessary to understand brain operation? Because we don't yet know the answer, and there probably is not one answer, it's important to pursue models with many different approaches. Here I'll be introducing you to the techniques that one can use to handle dendrites, and some ideas about what they may contribute to computation. So let's start by looking at the extent to which dendrites feel what's going on at the soma.

So here, an impulse input is being put in at the soma. Let's see what that input looks like when it reaches the dendrite. You can see that it's delayed, reduced in amplitude, and broadened. Similarly, if we put an input out in the dendrite and look at what happens at the soma in response, you also see that it's much reduced in size and broadened out. Furthermore, how thin the dendrite is affects how big a voltage change you can make with a given amount of current input: the thinner the dendrite, the larger the voltage change, but generally the further away the input, the more it gets filtered and attenuated. This tells us that inputs arriving at different parts of the dendrite can have very different effects and very different influence on firing at the soma. As you can imagine, this can have a tremendous impact on the information that is integrated and represented by the receiving neuron.

The theoretical basis for understanding voltage propagation in dendrites and axons is cable theory, which was developed by Kelvin in quite a different context. The voltage, V, is now a function of both space and time, which means that we're now dealing with partial, rather than ordinary, differential equations. So here's the setup.

We now think about a tube of membrane whose sides have the same properties as our membrane patch: they have both capacitance and resistance. So we see little elements that look a lot like our previous patch model, but now they're distributed along a cable. There's additionally the resistance of the cable interior: current can flow along the cable as well as through it. Generally we're not going to worry about the external medium here; we'll just take it to be infinitely conducting, with a resistance of zero.

The cable equation for a passive membrane (we're not going to deal with ion channels for now) is derived by considering the changes in current as a function of space. The current down the cable is driven by steps in voltage as a function of x: if we have a voltage difference between two points along the membrane, that's going to drive a current down the cable. Of course, current is also passing out through the membrane; that's the i_m that we modeled previously. When one puts those two things together, the way current flows out through the membrane and the way it flows down the cable, one obtains an equation that is second order in space, that is, it has a second derivative with respect to space. This half of the equation you'll recognize from the passive membrane; now we have an additional term that includes a spatial derivative.

Some of you will recognize that this equation is not unlike the equation that describes diffusion or heat propagation: this part looks like diffusion, but there is an additional term that's linear in V. You might remember that when we rewrote the RC circuit equation for the passive membrane, we found the time constant of the membrane, which gave us the fundamental time scale of its dynamics. There's something very similar in this case too: we can rewrite the equation in a form where we bring together all the dimensional quantities. This allows us to read off the natural scales. Going with the time derivative there's a time constant, our tau_m, and when we look at the spatial derivative, which has units of one over space squared, there's a space constant, lambda, out front that carries the typical spatial scale from the coefficient of the space derivative. That's given by the square root of the ratio of the membrane resistance divided by the internal resistance.

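The dynamics just described can be explored numerically. Here's a minimal sketch (not from the lecture; parameters in arbitrary units are made up) that integrates the passive cable equation tau * dV/dt = lambda^2 * d2V/dx2 - V with an explicit finite-difference scheme:

```python
import numpy as np

# Sketch: explicit finite-difference integration of the passive cable
# equation   tau * dV/dt = lam**2 * d2V/dx2 - V
# with an impulse of input at x = 0, t = 0 (illustrative units).
tau, lam = 1.0, 1.0            # membrane time constant and space constant
dx, dt = 0.1, 0.001            # dt < tau*dx**2 / (2*lam**2) for stability
x = np.arange(-5.0, 5.0 + dx, dx)
V = np.zeros_like(x)
V[len(x) // 2] = 1.0 / dx      # unit-area approximation of a delta pulse

def step(V):
    d2V = (np.roll(V, -1) - 2.0 * V + np.roll(V, 1)) / dx**2
    d2V[0] = d2V[-1] = 0.0     # ends just leak; the pulse never reaches them here
    return V + (dt / tau) * (lam**2 * d2V - V)

n_half = int(0.5 / dt)
for _ in range(n_half):
    V = step(V)
V_half = V.copy()              # profile at t = 0.5 * tau
for _ in range(n_half):
    V = step(V)                # continue to t = tau

# The profile spreads like diffusion, but its total area decays as
# exp(-t/tau) because of the leak term; pure diffusion would conserve it.
area = V.sum() * dx
assert V.max() < V_half.max()  # amplitude keeps dropping
print(round(area, 3))          # close to exp(-1) ~ 0.368
```

The assertion at the end checks exactly the point made above: unlike perfume diffusing in a room, the voltage signal loses total "mass" over time.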
6:28

Now let's put in a brief pulse and see how it behaves: at t = 0 we put in a spike of input at x = 0. Here's what happens to it: the input gets broader, spreads out spatially, and also decays in time. This is a lot like diffusion. If you sprayed a pulse of perfume somewhere in a room and were able to watch what happened to it, it would do something similar: it would spread out with the same spatial profile. But for the perfume, there's always the same amount of perfume; if you were to add up all the molecules of perfume in the smeared-out blob, it would be the same as you started with. For the voltage, that's not the case. The total voltage signal is decaying away in time, because of the first-order part of the differential equation. That is, the total voltage decays exponentially, just as it did in the RC circuit.

So what are we going to see if we sit some distance away and observe the change in voltage? That's what's shown in this figure. These curves plot the voltage observed at different locations, x = 0, x = 0.5, x = 1, all in units of the space constant of the membrane, at different times. Sitting at x = 0, we see an exponential decay. As we sit a little further away, we see the pulse of voltage change first rise, then slowly decay away again. You can see how rapidly the signal decays as a function of space: at one space constant you still see a reasonably large deflection in voltage caused by that pulse, but at two space constants away, the size of the pulse is down to five percent of the original.

8:55

For those of you who like to see general solutions, here's what the voltage response to a pulse of input at time t = 0 and position x = 0 looks like at position x and time t. We can see that this solution is made up of two parts. There's the diffusive spread with a Gaussian profile, which is very similar to the way things spread diffusively, and multiplying that, there's a prefactor with an exponential decay. So, knowing this solution, one could take some arbitrary pattern of inputs, decompose it into pulses at different central locations and times, say x prime and t prime, and add together a weighted sum of this solution form, centered at those different locations and times. So now we know how to find solutions for a very long passive cable with a fixed radius.

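That two-part structure, a diffusive Gaussian times an exponential decay, is easy to write down directly. Below is a sketch of the Green's function and its superposition (amplitude factors such as input charge and capacitance per unit length are dropped, so only the shape is meaningful):

```python
import math

# Sketch of the closed-form response to a delta input at x = 0, t = 0:
# a diffusive Gaussian profile multiplied by an exponential decay.
def cable_green(x, t, tau=1.0, lam=1.0):
    if t <= 0:
        return 0.0
    D = lam**2 / tau                      # effective diffusion constant
    gauss = math.exp(-x**2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)
    return gauss * math.exp(-t / tau)     # the leak makes the total area decay

# Superposition: an arbitrary input pattern, decomposed into pulses at
# (x_prime, t_prime) with weights w, is a weighted sum of shifted copies.
def response(x, t, pulses):
    return sum(w * cable_green(x - xp, t - tp) for (xp, tp, w) in pulses)

# Peak deflection seen at a distance: search over time at one and two
# space constants from a single pulse at the origin.
def peak(x, dt=0.001, t_max=5.0):
    return max(cable_green(x, n * dt) for n in range(1, int(t_max / dt)))

p1, p2 = peak(1.0), peak(2.0)
print(p1, p2)   # the peak at 2*lambda is a small fraction of that at 1*lambda
```

This reproduces the qualitative point from the figure: the pulse seen further out rises later, is smaller, and is broader.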
In fact, this doesn't get us very far in dealing with real neurons, because of two things: the intricate branching of dendrites, and the fact that many dendrites are not passive but active, that is, they have ion channels in them. So it quickly becomes very difficult to solve anything analytically. The path forward is to divide the dendritic arbor into what are called compartments. Here's an example: one can approximate the dendritic arbor as a coupled set of compartments. We're going to break the dendrite into little subregions in which the radius and the ion channel density are taken to be constant. Each compartment will then have an equation that depends only on the time derivative of the voltage, and not on x; the spatial dependence is incorporated by coupling the compartments together. A method that Rall devised is a helpful way to approximate complex dendritic trees.

Let's consider a branch that divides into two daughters. It turns out that if the diameters of these two branches have a particular relationship to the diameter of their mother, namely, if the diameter of one of them raised to the three-halves power, plus the diameter of the other also raised to the three-halves, equals the diameter of the mother raised to the three-halves, then the two daughter branches are impedance matched to the mother branch, and one can simply combine them into one long branch of the same diameter as the mother. One thus needs to compute the effective electrotonic length of the two daughter branch elements and extend the original cable by that much. You can see that one can continue to do this iteratively: if the same property holds here, one can extend that branch out to an effective branch of this diameter, d2, and then one can amalgamate those two together, until one eventually has a single cable coming out of the soma.

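The three-halves rule is simple enough to sketch directly. Here's a short illustration (the lambda-from-diameter formula assumes a uniform cylinder; resistivity values are placeholders):

```python
# Sketch of Rall's 3/2 rule: two daughter branches can be collapsed into
# an extension of the mother cable when
#     d1**1.5 + d2**1.5 == d_mother**1.5
# (the impedance-matching condition described above).
def equivalent_diameter(d1, d2):
    # diameter of the single cylinder equivalent to the two daughters
    return (d1**1.5 + d2**1.5) ** (2.0 / 3.0)

def rall_matched(d_mother, d1, d2, tol=0.05):
    lhs = d1**1.5 + d2**1.5
    rhs = d_mother**1.5
    return abs(lhs - rhs) / rhs < tol

def electrotonic_length(length, d, R_m=1.0, R_i=1.0):
    # lambda for a cylinder scales as sqrt(d): lambda = sqrt(R_m * d / (4 * R_i)),
    # so a daughter's physical length must be rescaled before extending the cable.
    lam = (R_m * d / (4.0 * R_i)) ** 0.5
    return length / lam

# Two equal daughters of diameter d match a mother of diameter d * 2**(2/3):
d = 1.0
d_mother = 2.0 ** (2.0 / 3.0) * d
print(rall_matched(d_mother, d, d))          # True
print(round(equivalent_diameter(d, d), 3))   # ~1.587
```

Note that because lambda grows with the square root of diameter, a thin daughter branch is electrotonically "longer" than its physical length suggests.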
It turns out that this condition on the diameters is approximately satisfied by real dendrites, and even when it's not exact, the resulting approximation is often quite accurate. So Rall models are very useful for passive membranes, but they don't address the issue of ion channels, which make the problem nonlinear. Furthermore, ion channel densities often vary along dendrites, which can lead to a lot of interesting effects that one might like to explore. So here's the full approach: given the geometry and the ion channel density of the dendritic tree, one can divide it into compartments where the properties are approximately constant.

12:36

One can then write down equations for the membrane potential in each compartment individually. Say for compartment 1, which we represent here in terms of a circuit model similar to the one we saw for the passive membrane, we give it the voltage V1 and write down an equation for V1. We can do the same for compartments 2 and 3, here. These equations will be similar to the passive membrane equations we looked at for Hodgkin-Huxley, but with the individual ion conductances, membrane resistance, and capacitance set appropriately for each piece of the cable. Furthermore, there will also be two terms that couple each compartment with its neighbors: the current input from the neighboring compartments, which depends on the voltage difference between the two compartments and a fixed coupling conductance. So here, for example, let me write down an equation for V2: we need to include a current coming from compartment 1, which goes like g_{1,2} multiplied by V1 minus V2.

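A minimal sketch of this kind of coupled system, with made-up parameters (not from the lecture), might look like this for three passive compartments:

```python
import numpy as np

# Sketch: three passive compartments obeying
#     C_i dV_i/dt = -V_i / R_i + sum_j g[j, i] * (V_j - V_i) + I_ext_i
# Note that g[1, 2] and g[2, 1] need not be equal, as noted in the lecture.
C = np.array([1.0, 1.0, 1.0])          # capacitances (illustrative units)
R = np.array([1.0, 1.0, 1.0])          # membrane resistances
# g[j, i]: coupling conductance for current flowing from compartment j into i
g = np.array([[0.0, 0.5, 0.0],
              [0.4, 0.0, 0.5],
              [0.0, 0.4, 0.0]])

def dVdt(V, I_ext):
    leak = -V / R
    coupling = np.array([
        sum(g[j, i] * (V[j] - V[i]) for j in range(3)) for i in range(3)
    ])
    return (leak + coupling + I_ext) / C

# Inject steady current into compartment 0 and relax to steady state.
V = np.zeros(3)
I_ext = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(5000):                  # ~50 membrane time constants
    V = V + dt * dVdt(V, I_ext)

# Voltage attenuates with distance from the injection site.
print(np.round(V, 3))
assert V[0] > V[1] > V[2] > 0
```

Adding Hodgkin-Huxley-style conductances to each compartment's leak term turns this into a full active compartmental model; the coupling structure stays the same.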
And similarly, there'll be a current that comes into compartment 1 from compartment 2, which has a different coupling conductance and the opposite voltage difference. These fixed terms, the coupling conductances, depend on the area of the connection and on whether or not the compartments straddle a branching point. Notice that, in general, the coupling conductances are not necessarily symmetric, which is why there are two values at each of these connections. Many models like these have been built from microscopic reconstructions of single neurons, and a great many have been made publicly available on the ModelDB site maintained at Yale. So if you're in the mood to go explore a dendritic forest, there's plenty out there; don't forget your adventure hat.

So what do dendrites add to neuronal computation? There are many proposals for ways in which the filtering and active properties of dendrites can work to shift the way in which incoming information is processed. Clearly, where an input arrives on the tree can influence the effective strength of that input, because of the passive properties.

Interestingly, it's been found that in the hippocampus, neuronal dendrites have solved this problem, so that when inputs arrive at the soma, they have a very similar shape no matter where they come in. This amazing property is known as synaptic scaling.

14:58

Filtering through the dendritic tree on the way to the soma also determines whether a sequence of successive inputs integrates to build up enough to drive a spike or not. Where two different inputs enter the dendritic tree can also make a huge difference in how they interact with each other. For instance, if two inputs come in on separate branches, they contribute independently, while if they are on the same branch, they can sum either sublinearly or superlinearly, the latter leading to amplification. Another very important property is that, thanks to their ion channels, dendrites can generate spikes, generally calcium spikes. This leads to the possibility that one could use the coincidence of inputs driving spikes in the dendrites, along with spikes back-propagating from the soma into the dendrites, to drive synaptic plasticity. This is a topic you'll be hearing much more about in the next lectures. So, let's close out by looking at two ideas for how dendrites might perform a computational role. The experimental evidence supporting these mechanisms is somewhat mixed, but the fundamental ideas stand as interesting conceptual models.

16:05

First, here's a wonderful example where the propagation of signals through cables is thought to help in carrying out a computation. Nuclei in the auditory brainstem are responsible for sound localization, the ability we all have to locate where a sound is coming from. The cue that's thought to be used is the difference in the arrival time of a sound at our two ears. The sound arrives at the two ears at slightly different times, and the signal thus travels through the left and right auditory pathways at slightly different times. Imagine that these two signals are piped into a population of neurons whose thresholds are set such that they can only fire when coincident signals from the two different inputs arrive at the same time. Each neuron receives the two inputs with a delay caused by traveling different distances along the dendrites. The neuron that fires the most is the one for which the relative timing delay, due to the timing difference between the two ears, is compensated for by the dendritic delay. This mechanism turns a tiny timing difference into a place code, whereby the label of the neuron that fires indicates the timing delay, which can then be translated into the spatial location of the sound.

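The delay-line idea above can be sketched in a few lines. All numbers here are made up for illustration; real interaural time differences are on the order of hundreds of microseconds:

```python
import numpy as np

# Sketch of dendritic-delay-line coincidence detection (a place code):
# each neuron adds a different net dendritic delay to one input relative
# to the other, and fires most when that delay cancels the interaural
# time difference (ITD).
dendritic_delays = np.linspace(-0.5, 0.5, 11)   # ms, one value per neuron
itd = 0.2                                        # ms, sound arrives at one ear first

def coincidence_score(neuron_delay, itd, sigma=0.05):
    # a threshold-like unit fires strongly only when its two inputs
    # arrive together, i.e. when neuron_delay compensates the ITD
    mismatch = neuron_delay - itd
    return np.exp(-mismatch**2 / (2 * sigma**2))

scores = np.array([coincidence_score(d, itd) for d in dendritic_delays])
best = dendritic_delays[np.argmax(scores)]
print(best)   # the label of the winning neuron encodes the ITD (~0.2 ms)
```

The Gaussian score is just a stand-in for "fires only on near-coincidence"; the essential point is that the identity of the most active neuron, not a rate, carries the timing information.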
17:19

Here's a final example. Neurons in the retina show direction-selective activity: that is, they respond when a stimulus moves in one direction, and are suppressed when it moves in the other. So how might such direction selectivity be constructed at the single-neuron level? Imagine that inputs from different spatial locations come in at locations along the dendrite, arranged as they are in space. As the dendrite receives a sequence of activations, you can see that if the first input arrives at the far end of the dendrite and begins to travel toward the soma, and then another input comes in closer to the soma, the influence of these two inputs can sum and build up as more and more inputs arrive, so that the net input crosses threshold. On the other hand, if the nearby location is stimulated first, then the next one, and then the next, the first inputs will have died away by the time the later ones arrive. And you can see how that voltage signal at the soma might behave.

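A toy version of this sequence effect can be sketched with distance-dependent delay and attenuation (all parameters invented for illustration):

```python
import numpy as np

# Sketch of Rall's sequence-selectivity idea: each synapse's effect
# reaches the soma attenuated by exp(-d/lam) and delayed in proportion
# to its distance d along the dendrite.
lam = 1.0                                     # space constant
v = 1.0                                       # effective propagation speed
positions = np.array([4.0, 3.0, 2.0, 1.0])    # synapse distances from soma
t = np.arange(0.0, 20.0, 0.01)

def soma_kernel(t, width=1.0):
    # crude postsynaptic waveform as seen at the soma
    return np.where(t > 0, (t / width) * np.exp(-t / width), 0.0)

def soma_response(activation_order):
    # synapses activated one per unit time, in the given spatial order
    V = np.zeros_like(t)
    for k, d in enumerate(activation_order):
        onset = k * 1.0 + d / v               # activation time + dendritic delay
        V += np.exp(-d / lam) * soma_kernel(t - onset)
    return V

preferred = soma_response(positions)          # far end first: delays compensate
null = soma_response(positions[::-1])         # near end first: arrivals spread out
print(preferred.max(), null.max())
assert preferred.max() > null.max()           # same inputs, different order
```

In the preferred direction every contribution arrives at the soma at the same moment, so the peak depolarization is larger even though the total synaptic input is identical in both cases.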
This idea was first proposed by Rall. While it may not fully explain direction selectivity in retinal ganglion cells, the general idea does predict that the firing probability of a neuron should be sensitive to the sequence of inputs along the dendrite. Say these inputs occur in some sequence; one could then scramble the order and see whether the firing of the neuron can distinguish those sequences. Michael Hausser's lab carried out this experiment by stimulating sequences of synaptic inputs onto single neurons, and found that indeed different input sequences were discriminable.

Okay, so here's where we wrap up my section of the course. I hope you've enjoyed this brief introduction to the electrical basis of neural signaling in the brain. Rajesh is going to take it from here, to guide you through the ways in which these basic cellular components get wired up together to produce the amazing variety of behaviors that we know neural systems are capable of, through experience and through learning. Have fun.
