Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.


From the course by Johns Hopkins University

Mathematical Biostatistics Boot Camp 2



From the lesson

Two Binomials

In this module we'll be covering some methods for looking at two binomials. This includes the odds ratio, relative risk, and risk difference. We'll discuss mostly confidence intervals in this module and will develop the delta method, the tool used to create these confidence intervals. After you've watched the videos and tried the homework, take a crack at the quiz!

- Brian Caffo, PhD, Professor, Biostatistics

Bloomberg School of Public Health

Okay. So let's go through an example.

So, let's consider an instance where theta is p1.

We're only going to consider X at this point.

Our theta hat is p1 hat, which is X over n1. The estimated standard error

for X over n1 is the square root of p1 hat times 1 minus p1 hat, over n1.

And let's assume that the f we want to estimate is the log.

So we're interested in log p1, for example.

So f of x in this case is log x,

and then f prime of x is 1 over x. Okay.

And then we know that theta hat minus theta, over its standard

error, tends to a standard normal by the ordinary central limit theorem.

In this case, theta hat is a simple average.

Okay. And we're just

subtracting off its mean and dividing by a consistent standard error, so

it converges to a normal (0, 1) by the ordinary central limit theorem.

So then the delta method is saying that the estimated

standard error of the log

of the sample proportion is f prime of theta hat times the standard error of theta hat.

So let's go through that calculation.

f prime of theta hat in this case: f prime is one over its argument,

and in this case theta hat

is p1 hat.

So it's 1 over p1 hat, times the standard error, which

is the square root of p1 hat times 1 minus p1 hat, over n1.

Rearrange terms and you get the square root of 1 minus

p1 hat, divided by p1 hat times n1.

And so, what this is saying is that log p1 hat minus log p1, divided by this

standard error, the square root of 1 minus p1 hat over p1 hat times n1,

tends to N(0, 1).

So, if I want a confidence interval for log p1,

what I can do is take log p1 hat and add and

subtract a standard normal quantile, let's say 1.96 for a 95%

interval, times this standard error, the square root of 1 minus p1

hat over p1 hat times n1, and that would be an interval.

And so, that works out to be very convenient.

The only

complexity in this whole calculation, and it

was very mild complexity, was calculating the derivative

of the function that we're interested in; the rest of it was simple arithmetic.

And that is why the delta method is so powerful.
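As a quick sketch of that calculation in code (my own illustration, not from the lecture; the `log_prop_ci` helper name and the 40-out-of-100 data are made up for the example):

```python
import math

def log_prop_ci(x, n, z=1.96):
    """Delta-method confidence interval for log p, given x successes in n trials.

    p_hat = x / n, and
    SE(log p_hat) = f'(p_hat) * SE(p_hat)
                  = (1 / p_hat) * sqrt(p_hat * (1 - p_hat) / n)
                  = sqrt((1 - p_hat) / (p_hat * n))
    """
    p_hat = x / n
    se = math.sqrt((1 - p_hat) / (p_hat * n))
    center = math.log(p_hat)
    return center - z * se, center + z * se

lo, hi = log_prop_ci(40, 100)            # interval for log p1
p_lo, p_hi = math.exp(lo), math.exp(hi)  # exponentiate for an interval on p1
```

Exponentiating the endpoints turns the interval for log p1 back into an interval for p1 itself.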

Okay.

So that actually doesn't give us a standard

error for the log relative risk.

And honestly, to do that in its full glory, you need a multivariate

version of the delta method, which exists,

but we don't really cover it in this class.

So, let's look at the asymptotic standard error for the log relative risk.

Let's just do it heuristically.

So the variance of the log relative risk is

the variance of log of p1 hat divided by p2 hat.

And that is the variance of log p1 hat plus the

variance of log p2 hat, because we're assuming that X and Y are independent.

That is, the group-one and group-two binomial counts are independent.

And so, going from the first of these statements,

this variance, to this second line here:

these are all exact equalities, we haven't approximated anything yet.

Now, if we use our delta method estimate

of the variance for each of these terms, which

we calculated on the previous page, then we get

1 minus p1 hat over p1 hat times n1, plus 1 minus p2 hat over p2 hat times n2.

And if we take the square root of that, that is exactly the standard error estimate for

the log relative risk that we gave at the beginning of the class.

Okay?
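That standard error is easy to sketch directly; here is a minimal illustration with made-up counts (the `log_rr_se` function name and the 30/100 versus 20/100 data are my own, not from the lecture):

```python
import math

def log_rr_se(x1, n1, x2, n2):
    """Delta-method standard error of the log relative risk
    log(p1_hat / p2_hat), assuming independent binomial counts:
    sqrt((1 - p1_hat)/(p1_hat * n1) + (1 - p2_hat)/(p2_hat * n2))."""
    p1, p2 = x1 / n1, x2 / n2
    return math.sqrt((1 - p1) / (p1 * n1) + (1 - p2) / (p2 * n2))

# example: 30/100 events in group one, 20/100 in group two
rr_hat = (30 / 100) / (20 / 100)
se = log_rr_se(30, 100, 20, 100)
ci = (math.log(rr_hat) - 1.96 * se, math.log(rr_hat) + 1.96 * se)
```

As with the single proportion, exponentiating the endpoints of `ci` gives an interval for the relative risk itself.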

And so, that's where it comes from, and you

could do exactly the same thing for the odds ratio.

You may have to do a little bit of arithmetic to show that it works

out to be 1 over n11 plus 1 over n12, and so on.

But the same exact rule applies, and it's relatively easy to do.
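The lecture only writes out the first terms; the usual full form of that expression is the standard (Woolf) log odds ratio standard error, sqrt(1/n11 + 1/n12 + 1/n21 + 1/n22). A minimal sketch, with a hypothetical 2x2 table of counts of my own:

```python
import math

def log_or_se(n11, n12, n21, n22):
    """Delta-method (Woolf) standard error of the log odds ratio
    from a 2x2 table of counts: sqrt(1/n11 + 1/n12 + 1/n21 + 1/n22)."""
    return math.sqrt(1 / n11 + 1 / n12 + 1 / n21 + 1 / n22)

# hypothetical table: rows are groups, columns are event yes/no
or_hat = (30 * 80) / (70 * 20)
se = log_or_se(30, 70, 20, 80)
ci = (math.log(or_hat) - 1.96 * se, math.log(or_hat) + 1.96 * se)
```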

Now, there's one small bit of fudging that we're doing here, in that we're saying

that delta method variances add in the same way that random variable

variances add; and that is the case.

If you work out the multivariate delta method,

the delta method estimated variance of a sum of independent things works out to be the

sum of their delta method estimated variances, each worked out separately.

But for the purposes of the class, this is not an issue.

What I hope you can follow is that the variance of the log relative

risk works out to be the sum of the variances of the two component

parts, log p1 hat and log p2 hat,

and that we can calculate those variances with the delta method.

Now note, the delta method doesn't just give you a standard

error and the variance calculation; it also gives you the asymptotic normality.

So not only do you

get the variance estimate, but you get the inference too.

You get the actual confidence interval that you want to create

as well, or the hypothesis test that you'd like to perform.

So the delta method isn't just the standard error and variance calculation;

that's just one neat part of it.

The delta method also tells you that

you can put the whole thing together as a confidence interval estimate.

So, for my final thought on the delta method,

I thought I'd just show you quickly why it works.

And it's very easy.

It's surprisingly easy to prove. Now, we're going to do a heuristic here,

but the actual full proof is just not that different.

So, assume you have a large sample size; the delta

method is an asymptotic technique, so we can assume that.

And assume theta hat is close

to theta. Alright?

Then f of theta hat minus f of theta, over theta hat minus theta,

should be approximately

f prime of theta hat.

Now why is that?

Well, on the left-hand side here, as theta hat approaches

theta, this

is just the definition of the derivative of f.

Okay?

If you're not familiar with

it, just look up the definition of a derivative.

It's the change in the function divided by the change in

the argument, as the change in the argument goes to zero.

So this is exactly the derivative of f.

Okay?

And here we're assuming that theta hat is

close to theta because theta hat converges to theta;

it's a consistent estimator.

Okay.

So this first line is true. Now multiply both sides

by theta hat minus theta and divide

both sides by f prime of theta hat, and you get

that f of theta hat minus f of theta, over f prime of theta hat, is approximately theta hat minus theta.

And then divide both sides by the standard error.

The estimated standard error.

And then what you get is that the

left-hand side is approximately equal to the right-hand side.

And since the right-hand side converges to a normal (0, 1),

the left-hand side should similarly converge to a normal (0, 1).

And it also gives you the heuristic of why it works.

If we assume f is a smooth enough function,

right,

and we appropriately divide by the derivative,

then we're really estimating the same thing as theta hat minus theta,

provided theta hat is close to theta.

And that's ultimately why it works; it's nothing other than

an instance of applying the definition of differentiation.
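A quick way to see this heuristic in action is a small simulation (my own check, not part of the lecture): standardize log p hat by its delta-method standard error, and the resulting statistic should look standard normal.

```python
import math
import random

# Simulate binomial counts, form (log p_hat - log p) / SE_hat with the
# delta-method standard error, and check that the statistic has mean
# near 0 and variance near 1, as the asymptotics predict.
random.seed(0)
n, p = 500, 0.3
zs = []
for _ in range(2000):
    x = sum(random.random() < p for _ in range(n))
    p_hat = x / n
    se = math.sqrt((1 - p_hat) / (p_hat * n))  # delta-method SE of log p_hat
    zs.append((math.log(p_hat) - math.log(p)) / se)

mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / (len(zs) - 1)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```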

Now, if you want the formal proof, if you really want that kind of detail, there is one.

Anyway, incidentally, by the way, my office is on a

medical campus, and I'm right near the emergency

room, which is probably why you constantly hear the sirens going by.

So those are the ambulances going to

the emergency room, in case you were wondering.

Â so any way, if you want to improve, prove the mean value theorem identically.

Â Right? if you want to prove the delta method

Â exactly, what you have to use is, is a thing called the mean

Â value theorem, and then you, you get a very formal proof of it.

Â this heuristic is just based on the definition of, of a derivative, but if

Â you use the exact mean value theorem, then, then you get the full proof.

And that's it. So in case you're wondering how in the world

people pull out these crazy

standard errors for the odds ratio and the

relative risk, where do these formulas come from?

It's a surprisingly easy little argument involving this

concept called the delta method.

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.