In this segment, I want to continue our discussion of tail-recursion and show you

the common pattern of how we create tail-recursive functions and the role

that accumulators play in that process. So, what I'm trying to emphasize is that

when we have tail recursion, we do have significantly more efficient functions.

So, if it's reasonably elegant, if it's simple, and if performance is important,

it can be worth rewriting your function to be tail-recursive.

So a tail-recursive function is just one where all the recursive calls are

themselves tail calls. So we have this situation where when we

do the recursion, there is no more work for the caller to do after the recursive

call is done. And it turns out there is a methodology

that can often guide this transformation, a way for you to take your code and make

it tail-recursive. And that is to create a helper function

that takes an accumulator, something that combines answers as you go along to hold

the answers so far. And what typically happens is that the old

base case from your recursion becomes your initial accumulator.

And, in the accumulator-style recursive

function, your final result is just your accumulator. And it turns out that this

works any time you can combine your results in essentially any order

and get the same answer. So, in our previous example where we had

factorial, that's indeed the case. It doesn't matter to us what order we

multiply all these numbers, three times two times one or one times two times

three, and therefore, this transformation worked

just fine. So indeed, if you think of factorial in

your head as multiply n by the recursive factorial of n minus one, that's the

traditional version. But the tail-recursive version, which we

have here, uses our methodology.
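Reconstructed here for reference (the identifiers like aux follow the lecture, though the slide's exact formatting may differ slightly):

```sml
(* traditional version: after the recursive call returns,
   the caller still has to multiply by n *)
fun fact n =
    if n = 0
    then 1
    else n * fact (n - 1)

(* tail-recursive version: the helper aux carries an accumulator *)
fun fact n =
    let
        fun aux (n, acc) =
            if n = 0
            then acc                (* old base case: return the accumulator *)
            else aux (n - 1, acc * n)
    in
        aux (n, 1)                  (* initial accumulator 1 was the old base case *)
    end
```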

We have a local helper function aux that takes an extra argument, the accumulator.

The initial accumulator in this call here, between the in and the end, passes in one,

which used to be our base case. And then, in the tail-recursive helper

function, our base case when n equals zero is to just return the accumulator.

And so, we are still recursively calling aux each time with n minus one, but we're

multiplying numbers as we go so when we get to zero, the accumulator is our

answer and that's why this worked out. So let me show you two more examples,

the second one will actually be more interesting.

The first example is just summing all the elements in the list.

It turns out that at the top, we've written functions like this many times,

this is not a tail-recursive function. In the recursive call here, where we call

sum with xs', the caller still has work to do after

this call finishes. We need to add x, to that result.
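The version at the top is roughly this (a reconstruction; the slide's exact formatting may differ):

```sml
(* not tail-recursive: after sum xs' returns, we still have to add x *)
fun sum xs =
    case xs of
        [] => 0
      | x :: xs' => x + sum xs'
```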

But our methodology will work here, because we don't care what order we sum

numbers up, and so, we can create a helper function

aux. It can take an additional argument, an

accumulator. Its base case can just return that

accumulator. When we have our recursive call now, we

make sure it's tail-recursive. There will be no more work to do by the

caller after we complete this call. And we pass in, for our new accumulator,

x + acc, so that we're adding things as we go.
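Putting those pieces together, the tail-recursive version looks roughly like this (reconstructed; details may differ slightly from the slide):

```sml
fun sum xs =
    let
        fun aux (xs, acc) =
            case xs of
                [] => acc                       (* base case: return the accumulator *)
              | x :: xs' => aux (xs', x + acc)  (* tail call: add as we go *)
    in
        aux (xs, 0)                             (* 0 was the old base case *)
    end
```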

And then, our initial accumulator, 0, is what our base case was above when we had

our traditional version of sum. So that was also a fairly easy example,

factorial and sum are fairly similar. Let's do a more interesting example.

This is something I haven't shown you before.

Let's take a list and reverse it. Okay? So if we had the list one,

two, three, we want to come up with a list three, two, one. And it turns out,

in this case, the traditional version is not very good and we're going to look at

why. So it takes a list, xs.

The way you reverse the empty list is you return the empty list. No problem there.

Otherwise, aha, we do want to recursively reverse xs'.

Sorry, that should be xs' right there.

And that's fine, but then what we need to do is put x at the end.

Alright? And it won't work to write cons x, because

cons can't have a list in the first position and an element in the second

position. So what we need to do is make

a list holding x and then append those two lists together.
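So the traditional version ends up looking roughly like this (a reconstruction; @ is SML's append operator, and this rev shadows the one in the standard library):

```sml
(* not tail-recursive, and worse: @ copies its first argument every time *)
fun rev xs =
    case xs of
        [] => []
      | x :: xs' => rev xs' @ [x]   (* append a one-element list holding x *)
```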

And it turns out that's remarkably inefficient as what we'll talk about on

the next slide, but if we just do our tail-recursive version, everything will

work out normally. So here's the tail-recursive version.
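Reconstructed, it looks roughly like this (identifiers follow the lecture; the slide's formatting may differ):

```sml
fun rev xs =
    let
        fun aux (xs, acc) =
            case xs of
                [] => acc                        (* the accumulator is the reversed list *)
              | x :: xs' => aux (xs', x :: acc)  (* cons x onto the reversal so far *)
    in
        aux (xs, [])                             (* start with the empty list *)
    end
```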

Create a local helper function that takes in the accumulator.

Let's start that accumulator with the empty list. Let's pass in xs for the

argument. And here, if xs is empty, then just return the accumulator.

But now what we should do in the recursive case is, yes, pass in xs'

for xs, and then this accumulator, which is the reversal of the list so far, we

just want to put x on the front of that. Okay?

So if we were to reverse one, two, three, the initial accumulator would be empty,

then the next accumulator would be one cons on empty, then two cons on the one

on the empty. And indeed, our natural accumulator

pattern reverses the list quite simply with a simple tail-recursive

function that is as efficient as a loop: it takes a list and produces a new

list that is the reverse of the list you started with.
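Tracing that on the list one, two, three makes the accumulator's evolution concrete:

```sml
(* rev [1,2,3] makes these calls to aux:
     aux ([1,2,3], [])
     aux ([2,3],   [1])       i.e. 1 :: []
     aux ([3],     [2,1])     i.e. 2 :: [1]
     aux ([],      [3,2,1])   i.e. 3 :: [2,1]
   and the final accumulator [3,2,1] is the answer *)
```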

Okay. So as promised, let's dig a little deeper here

on what's so bad, so inefficient about the traditional version.

Okay? So it turns out that it's not just that

we're going to build up a call stack every time here.

It's actually that this append operator always copies the first list.

It has to; that's how append works. The same thing happens in the version of

append we would write whether append is tail recursive or not.
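For reference, the version of append we would write is roughly this sketch, and you can see it rebuilds a cons cell for every element of its first argument (the standard library's infix @ behaves the same way):

```sml
fun append (xs, ys) =
    case xs of
        [] => ys
      | x :: xs' => x :: append (xs', ys)  (* copies every cell of xs *)
```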

So it turns out that with this code, if you were reversing a list of length

proportional to some number, let's call it k,

the amount of work it is doing is proportional to k squared.

It's growing quadratically with the length of the list.

That was not true for factorial. It is not true for sum.

It's because, when we have these k recursive calls, one of them is going to

copy a list of length one, one a list of length two, one a list of length three,

all the way up to one a list of length k minus one, or maybe even k.

And the sum of the numbers from one to k is approximately k squared over two.

It is about half as big as k squared.
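Concretely, for a small k you can count the copies:

```sml
(* reversing a 4-element list with @ copies lists of length 1, 2, and 3:
     1 + 2 + 3 = 6 = (4 * 3) div 2
   in general, 1 + 2 + ... + (k-1) = (k * (k-1)) div 2, about k^2 / 2 *)
```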

And that is a lot more work than our tail-recursive function.

Our tail recursive function did not use this append, right?

It only used cons. The total amount of work this bottom

version is doing is proportional to the length of the list,

whereas the top version is doing a total amount of work proportional to the length

of the list squared. So this is an interesting point.

The moral here is don't just obsess about whether you're tail-recursive or not.

You should also beware of appending things:

if you are recursively appending at every step, your algorithm is sometimes

significantly less efficient than you might expect it to be.