0:00

Now, there's nothing new in what we've seen so far.

We've just rearranged terms in finite sums.

But here comes the trick that defines the leaky integrator filter.

When M is very large, we assume that the average over M-1 points or

over M points is going to be pretty much the same.

Not much is going to change if we add a single data point to a very long average.

And also, when M is very large, lambda,

which is defined as M -1 over M, is going to be very close to 1.

So, we can rewrite the previous equation as: the current estimate for

the moving average is equal to lambda times the previous estimate for

the moving average, plus 1 minus lambda times the current input.

And this defines a filter that we can use to process our signal.

Now the filter is recursive because we are using output values,

previous output values to compute the current value of the output.

So there is a feedback path that goes from the output back into the filter.
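
As a quick sketch (not part of the lecture), the recursion y[n] = lambda * y[n-1] + (1 - lambda) * x[n] can be written in a few lines of Python; the function name and the zero initial condition (system at rest) are my own assumptions:

```python
def leaky_integrator(x, lam=0.95):
    """Filter the sequence x with y[n] = lam * y[n-1] + (1 - lam) * x[n]."""
    y = []
    prev = 0.0  # assume the system is initially at rest
    for sample in x:
        # feedback path: the previous output re-enters the filter
        prev = lam * prev + (1 - lam) * sample
        y.append(prev)
    return y
```

The single state variable `prev` is exactly the feedback path described above: the current output depends only on the previous output and the current input.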

1:01

Before we even try to analyze this let's see if it works.

So we take our smooth signal corrupted by noise and

we filter it with a leaky integrator for varying values of lambda.

So if we start with lambda rather small, we don't really see much happening.

But as we increase the value of lambda towards one,

which is really the operating assumption for the derivation of the filter anyway,

we see that we start to smooth the signal as we expected.

And when lambda is very close to one, the smoothing power is comparable to that of

a moving average filter with a very high number of taps.

But now the difference is that each output value only requires three operations.

Because remember, the value is simply one multiplication,

then another multiplication, and one sum.

And this is independent of the value of lambda,

so high smoothing power at a fixed price.
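
To illustrate the smoothing effect numerically, here is a small experiment (a sketch under assumed parameters: zero-mean Gaussian noise and a few lambda values of my choosing); the variance of the filtered noise shrinks as lambda approaches one:

```python
import random

def leaky_integrator(x, lam):
    # y[n] = lam * y[n-1] + (1 - lam) * x[n], starting at rest
    y, prev = [], 0.0
    for s in x:
        prev = lam * prev + (1 - lam) * s
        y.append(prev)
    return y

def variance(v):
    m = sum(v) / len(v)
    return sum((s - m) ** 2 for s in v) / len(v)

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]

# Higher lambda -> stronger smoothing -> smaller output variance.
for lam in (0.5, 0.9, 0.99):
    print(lam, variance(leaky_integrator(noise, lam)))
```

Regardless of lambda, each output sample still costs the same two multiplications and one addition.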

Of course the natural question when we have a filter is what

the impulse response is going to be.

And to compute the impulse response of the leaky integrator, all we need to do is

plug the delta function into the recursive equation and see what happens.

So when n is less than 0, since the delta function is 0 from

minus infinity to 0, nothing nonzero has ever happened inside this equation.

And so it's very safe to assume that the output will be 0 for

all values of n less than zero.

Things start to happen when n reaches zero, at

which point the delta function kicks in and assumes a value of one.

So if we compute the value of the output for n equal to 0, we have lambda times

the previous value of the output, which we know to be zero, so this is zero.

Plus 1 minus lambda times the value of delta at 0, which is 1.

And so the output would be 1 minus lambda.

At the next step, y of 1 is equal to lambda times the previous value of the output,

which now is no longer 0; it's 1 minus lambda, as we computed before.

Plus 1 minus lambda times delta of 1, which we know to be 0, and so this is cancelled.

And the final output is lambda times 1 minus lambda.

At the next step, once again the recursive equation kicks in and we have lambda

times the previous value, which we know to be lambda times 1 minus lambda.

Plus 1 minus lambda times delta of 2.

But again delta from now on is going to be 0, so

this term we don't even have to look at.

And so we will have lambda squared times 1 minus lambda.

3:34

The next step is going to be the same, and

by now you have pretty much understood what's going on here.

If we plot the impulse response, we have an exponentially decaying function,

where the exponential base is lambda,

the whole curve is weighted by one minus lambda, and it is of course

multiplied by the unit step, because everything is zero for n less than zero.
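
We can sanity-check this closed form, h[n] = (1 - lambda) * lambda^n for n >= 0, against the recursion directly (a quick numeric check; the value lambda = 0.9 and the length 10 are arbitrary choices):

```python
def leaky_integrator(x, lam):
    # y[n] = lam * y[n-1] + (1 - lam) * x[n], starting at rest
    y, prev = [], 0.0
    for s in x:
        prev = lam * prev + (1 - lam) * s
        y.append(prev)
    return y

lam = 0.9
delta = [1.0] + [0.0] * 9  # discrete delta function
h = leaky_integrator(delta, lam)  # impulse response via the recursion
closed_form = [(1 - lam) * lam ** n for n in range(10)]

# The two sequences agree up to floating-point rounding.
print(all(abs(a - b) < 1e-12 for a, b in zip(h, closed_form)))  # True
```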

You might wonder why we call this filter a leaky integrator.

The reason is that if you consider, for instance, this very simple sum:

it's the sum of all input samples from minus infinity to the current time.

This is really the discrete-time approximation to an integral,

because you're really accumulating all past values up to now.

And we can rewrite this summation here as such.

So at each step we get what we have accumulated up to the previous step and

we sum the current input.

So you see it's very close to the equation for

the leaky integrator, except that in the leaky integrator what we do is the following.

We scale the previous accumulated value by lambda, which means,

remember, lambda is close to 1, that we keep pretty much all of what

we've accumulated before, but we forget a little bit; we leak a part of it.

If lambda is equal to 0.9, then that means that 10% of what we accumulated is lost.

And we replace what we've lost with a fraction of the input,

and this fraction is 1 minus lambda.

So what we've lost from the accumulator,

we've replaced with an equal percentage of the input.

The idea is that by picking lambda < 1, we are forgetting, we're leaking the past,

and the accumulation will never blow up.
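
As a small illustration of the leak (the constant input and function names are my own choices, not from the lecture): for a constant input, the plain accumulator grows without bound, while the leaky integrator settles at the input level:

```python
def running_sum(x):
    # Plain accumulator: y[n] = y[n-1] + x[n]
    total, y = 0.0, []
    for s in x:
        total += s
        y.append(total)
    return y

def leaky_integrator(x, lam):
    # Leaky accumulator: y[n] = lam * y[n-1] + (1 - lam) * x[n]
    prev, y = 0.0, []
    for s in x:
        prev = lam * prev + (1 - lam) * s
        y.append(prev)
    return y

const = [1.0] * 500
print(running_sum(const)[-1])            # 500.0: grows without bound
print(leaky_integrator(const, 0.9)[-1])  # ~1.0: leaks 10% per step, so it settles
```

Because 10% of the accumulated value leaks away at every step while only 10% of the input is added, the two balance out at the input level and the output can never blow up.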