A practical, example-filled tour of simple and multiple regression techniques (linear, logistic, and Cox PH) for estimation, adjustment, and prediction.


From the course by Johns Hopkins University

Statistical Reasoning for Public Health 2: Regression Methods



From the lesson

Module 3A: Multiple Regression Methods

This module extends linear and logistic methods to allow for the inclusion of multiple predictors in a single regression model.

- John McGready, PhD, MS, Associate Scientist, Biostatistics

Bloomberg School of Public Health

Greetings and welcome to Section B.

In this section we're going to start with multiple linear regression, and look at some examples, making specific comparisons to what we talked about in the general context in the previous section.

So, hopefully by the end of this section you'll be able to interpret the intercept and slope estimates from multiple linear regression models in a substantive context, and then compare the results from simple, or unadjusted, associations via simple linear regression with the results from multiple linear regression models to assess confounding.

So let's first talk about predictors of arm circumference.

We'll start with height and we'll go back to what we looked at in lecture 1.

The simple linear regression relating arm circumference to height.

We're using a random sample of 150 Nepalese children less than a year old.

The unadjusted association we estimated with the simple regression (and it's unadjusted because we're only considering height as the one predictor for arm circumference) was such that we'd estimate the mean arm circumference for a group of children of a given height by taking the intercept 2.7 and adding 0.16 times the height value in centimeters for the group.
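Plugging into that fitted equation is just arithmetic; here's a minimal sketch using the lecture's estimates (the example height of 60 cm is made up for illustration):

```python
# Minimal sketch: applying the fitted simple regression equation
# y-hat = 2.7 + 0.16 * height (the lecture's estimates).
def mean_arm_circumference(height_cm):
    """Estimated mean arm circumference (cm) for a group of children of a given height."""
    return 2.7 + 0.16 * height_cm

# e.g., for a group of children 60 cm tall:
print(round(mean_arm_circumference(60), 1))  # 12.3
```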

So, we saw visual evidence of an association that seemed well described by a line, but we noted that the R squared was, quote unquote, only 46%. And again, it's hard to put a value judgment on the size of an R squared.

But for a physical phenomenon, like growth in children, we'd expect there to

be relatively strong correlations between the anthropometric measures.

Now 46% is reasonably strong.

But, because of the way growth works, this implies that other predictors,

such as weight or sex, may be able to predict some of the variability in arm

circumference that wasn't explained by height.

We also showed in lecture 4, evidence that this crude association was

confounded by weight differences across the different height values.

So let's tie that all together now.

So first, let's just do a simple regression to estimate the unadjusted association between arm circumference and weight. In lecture 4, we saw a scatter plot that showed that this was reasonably well described by a line. The unadjusted association via this regression is such that our slope for weight is 0.8, and the intercept, which describes or estimates the mean arm circumference for children that weigh zero kilograms (not a group in our sample, but a necessary placeholder), is 7.8.

The 95% confidence interval for that slope of 0.8 is 0.72 to 0.89, so it does not include zero. And in fact the p-value, testing the null of no association between arm circumference and weight at the population level, was predictably rather small.

The R squared here is notably high, 70%: we estimate that 70% of the variability in the arm circumference values, in the sample at least, was explained by variability in weight between the children.

So now let's put the two together though.

Let's see if we can do a better job of predicting mean arm circumference, and also see whether the relationships between arm circumference and both height and weight change when we consider the other in the same model.

So if I use the computer to estimate this, I'd get a multiple regression model where I have one predictor for height and one for weight, such that the estimated mean arm circumference is equal to an intercept of 14, plus negative 0.16 times the height of the group of children we're looking at (and that number may sound familiar from lecture 4), plus 1.40 times the weight for the group of children we're looking at.

We're going to parse this closely in a second and compare it to the unadjusted results, but let me just note that both the adjusted associations of height and weight with arm circumference in this model were statistically significant.

So both contributed, from a statistical perspective,

information to the outcome of arm circumference above and beyond the other.

And the R squared here is 0.77, which is higher than either R squared we saw for height or weight alone, although not much higher than what we saw for weight alone.
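To make the mechanics concrete, here's a sketch of fitting a two-predictor linear model by least squares with numpy. The data are synthetic stand-ins generated to roughly mimic the lecture's estimates, not the actual Nepalese children data:

```python
import numpy as np

# Synthetic stand-in data (NOT the Nepalese sample), built so the true
# coefficients roughly match the lecture's estimates.
rng = np.random.default_rng(0)
n = 150
height = rng.normal(65, 5, n)                        # cm
weight = 0.3 * height - 12 + rng.normal(0, 0.8, n)   # kg, correlated with height
arm = 14 - 0.16 * height + 1.40 * weight + rng.normal(0, 0.6, n)

# Design matrix: an intercept column plus the two predictors.
X = np.column_stack([np.ones(n), height, weight])
beta, *_ = np.linalg.lstsq(X, arm, rcond=None)

resid = arm - X @ beta
r_squared = 1 - resid.var() / arm.var()
print(beta)       # slopes should land near -0.16 (height) and 1.40 (weight)
print(r_squared)  # fraction of arm-circumference variability explained
```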

So before we talk about potential confounding, assessing it by comparing the unadjusted and adjusted results, and before we talk about why this R squared wasn't much larger than what we saw with weight alone, let's try and interpret these slopes for height and weight in terms of the comparison being made by each.

So the estimated slope for height is beta 1 hat equals negative 0.16.

So this is still the estimated mean difference in arm circumference between

two groups of children who differ by one centimeter in height but

it's more specific because it's comparing groups who differ by one centimeter in

height but are of the same weight.

This is a different interpretation than the one we had in the simple linear regression with height, because that regression didn't consider weight at all.

Here we've adjusted this comparison for weight such that we're

comparing groups who are comparable in terms of weight but differ by height.

And we saw in lecture 4 that we got a similar result, or rather we presented the same result, but we didn't talk about where it came from per se.

But this is the weight adjusted association between arm

circumference and height.

Now think about this, does it make sense that the relationship between average arm

circumference and height is negative when we're comparing groups of the same weight?

Well, think about it.

If they're the same weight, sort of the same mass in terms of heft,

increasing in height is associated with a more lanky, if you will,

or thinner body type which would translate into narrower arm circumference.

So, it does make some sense that after we adjust for weight, and are comparing groups that are comparable in terms of weight, height is negatively associated with arm circumference.

So, this result is an adjusted mean difference.

Another way to say this is that this result, this negative 0.16, estimates that groups of children who differ by one centimeter in height, but are the same weight, will differ in arm circumference on average by negative 0.16 centimeters, taller to shorter.

So the taller group will have lesser average arm circumference.

And that 95% confidence interval suggests that this difference is real on average, and could be anywhere from negative 0.21 to negative 0.11 centimeters difference in arm circumference per additional centimeter in height.

The slope estimate for weight is 1.40.

This is still an estimated mean difference in arm circumference between two

groups who differ by one kilogram in weight, but here the comparison is more

specific than the simple regression because this is adjusted for height.

So, this compares children who differ by one unit in weight, but

are of the same height.

This is called the height-adjusted association between arm circumference and weight. Another way to state this result is that it estimates that groups of children who differ by one kilogram in weight, but are the same height, will differ in arm circumference on average by 1.4 centimeters, heavier to lighter.

So increased weight is associated with increased arm circumference among children of the same height.

And think about this, this also makes some sense biologically.

And then the 95% confidence interval for the true height-adjusted association between arm circumference and weight is 1.2 centimeters per kilogram of weight, up to 1.6 centimeters per kilogram of weight.

That estimates a range for the size of the association at the population level,

accounting for sampling variability.
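One way to see what each adjusted slope is comparing is to plug two groups into the fitted equation, holding the other predictor fixed. A sketch using the lecture's estimates (the specific heights and weights below are made up):

```python
# The lecture's multiple regression estimates: intercept 14,
# height slope -0.16, weight slope 1.40.
b0, b_height, b_weight = 14.0, -0.16, 1.40

def predicted_mean(height_cm, weight_kg):
    return b0 + b_height * height_cm + b_weight * weight_kg

# Two groups of the SAME weight (6 kg) differing by 1 cm in height:
print(round(predicted_mean(61, 6) - predicted_mean(60, 6), 2))  # -0.16

# Two groups of the SAME height (60 cm) differing by 1 kg in weight:
print(round(predicted_mean(60, 7) - predicted_mean(60, 6), 2))  # 1.4
```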

So how could we present these findings?

Well, in research articles, and

we'll look at some results from research articles in similar tables in section D.

Frequently a single table of unadjusted and

adjusted associations will be presented, especially for non-randomized studies

where adjustment is necessary to at least assess whether there's any confounding.

So if we were to present this in an article, it might look like this.

We have a table.

It's titled linear regression results for predictors of arm circumference, and at this point we've only considered two.

And we have one column that says Unadjusted, and this implies that the results here are slopes from simple linear regression models.

So this is from the simple regression model of

arm circumference on height alone.

And this is from the simple regression model of arm circumference on weight.

And, then the adjusted estimates, and

the implication is that each has been adjusted for the other.

Or all other things in the table, of which there's only one: if we're looking at height, the only other thing is weight; if we're looking at weight, the only other thing is height.

These are from the multiple regression model that included height and weight as predictors.

So we can immediately see that the association with height changed not only in magnitude but in direction after we adjusted for weight, so we get a very clear numerical demonstration of the confounding in these data.
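That numerical check, comparing the unadjusted slope to the adjusted one, can be stated in a couple of lines (the slopes are the lecture's estimates):

```python
# Unadjusted vs. weight-adjusted height slope from the lecture.
unadjusted_height_slope = 0.16   # simple regression on height alone
adjusted_height_slope = -0.16    # from the height + weight model

# A sign flip after adjustment is a clear numerical sign of confounding.
flipped = (unadjusted_height_slope > 0) != (adjusted_height_slope > 0)
print(flipped)  # True
```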

We also see that the association between weight and arm circumference, while positive and statistically significant when unadjusted, is larger in magnitude, still positive, and again statistically significant when adjusted.

But if you look at the confidence intervals for the unadjusted and

adjusted, they don't overlap.

So, this implies that there was some confounding here as well.

We were underestimating the association when we didn't adjust for height differences between the weight groups, and that estimate is larger, both in terms of the point estimate and in terms of the uncertainty interval, after we adjust for height.

Sometimes, the results from several models will be presented, so for

example if we also wanted to consider sex as a predictor,

we might do something like this.

We have the unadjusted associations here, this is the again, the two unadjusted

associations between arm circumference and height and arm circumference and weight.

Here is the unadjusted association with sex, and one way to present this, to indicate that this number compares females to males, is to list female and male, and for males, put "ref", for reference, to indicate that this is the reference group for the sex comparison.

And then by female we put negative 0.13, which indicates that this is the unadjusted difference in mean arm circumference for females compared to the reference of males.

And this is not statistically significant.

Here in Model 2, what's filled out in the table is the height and weight information, but no sex.

So this implies that the results from this model include height and

weight as predictors in a multiple model.

It also gives the intercept so that somebody could read the whole equation and

use it for

prediction purposes which we'll talk about in the next section if they wanted to.

If we only gave the slopes, they could make comparisons in terms of

mean differences but they couldn't estimate the mean arm circumference for

any specific single group, given the weight and height info.

And then in Model 3, the implication here is that it has all three predictors.

So this is from a multiple regression model of height, weight, and

sex all taken together.

And so we'll notice that the associations between height and

weight don't change much when they're additionally adjusted for sex above and

beyond being adjusted for each other.

The estimated slopes are of similar magnitude and

the confidence intervals overlap.

Interestingly enough, though, the association for sex, the difference between females and males, goes from negative and not statistically significant when it was unadjusted, or crude, to positive and statistically significant when it's adjusted for height and weight.

So this 0.30 here estimates the mean difference in

arm circumference between females and males of the same weight and height.

And once we control for those dimensions of mass, we see that females who are comparable to males in terms of weight and height have greater average arm circumference, and the difference is statistically significant.

Let's look at another example.

These were data from the National Hospital Ambulatory Medical Care Survey, NHAMCS.

The sample was a random sample of people who visited emergency departments in 2010, in a random sample of US hospitals. The potential predictors include, and there's more than these, things like the sex of the individuals; race, categorized as Black, White, or Other; the insurance payer type, whether it's public, private, or other; and age, which is categorized in the data into four quartiles.

So I'll just jump to the table that you might see in an article looking at predictors of emergency department waiting times.

And so what this shows is one set of unadjusted estimates and

one set of adjusted estimates from a multiple linear regression.

So let's just consider the unadjusted estimates for a minute.

Sometimes when we have binary predictors, the table won't specify the reference group with its own row, but will just name the predictor for the group that is not the reference group.

So male here implies that this is a sex comparison, and we're comparing males.

And the hidden group, the one that's not shown, females, is the reference.

So this negative 2.5 says that males, on average, had waiting times 2.5 minutes less than females.

Accounting for sampling variability, this reduction in average waiting time could be anywhere from 4.4 to 0.7 minutes, but it is statistically significant, indicating that at the population level males had shorter waiting times on average, because that interval does not include zero.

But this does not take into account any other characteristics of the patients.

Here are the racial comparisons, not considering any other factors. White actually appears on the table to indicate that it's the reference group.

For Black, this slope here of 19.3 is the average difference in waiting times for Black patients compared to White patients.

No other factors considered and it was 19.3 minutes on average and

statistically significantly so.

And this 2.6 compares the mean difference in waiting times for

those who identify as Other, not Black or White compared to White.

And this difference is not statistically significant.

See this p-value up here? This is what I was alluding to in the first section.

The null here, since White is the reference group, is that the slopes, the mean differences between Black and White and between Other and White, are equal to each other and equal to 0. Meaning, if you play this out, not only are the differences between each of these groups and White 0, but the difference between Other and Black, which would be the difference of those two things, is also 0. So the null is that there's no difference in waiting times on average between the three racial groups.

And the alternative is that at least one group has a different average waiting time.

And we can see that this p-value is less than 0.001. This is an ANOVA; analysis of variance is what this p-value is testing, namely whether there are any mean differences here.
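A sketch of that one-way ANOVA idea, using scipy on synthetic waiting times (the group means and spreads below are invented for illustration, with a built-in shift for one group):

```python
import numpy as np
from scipy import stats

# Invented waiting times (minutes) for three groups, with one group's
# mean shifted upward by construction.
rng = np.random.default_rng(1)
white = rng.normal(50, 15, 200)
black = rng.normal(69, 15, 200)   # ~19-minute built-in shift
other = rng.normal(52, 15, 200)

# One-way ANOVA: the null is that all three group means are equal.
f_stat, p_value = stats.f_oneway(white, black, other)
print(p_value < 0.001)  # True: at least one mean differs
```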

And the answer is, these results are statistically significant, so at least one group was statistically significantly different from the others. We don't have a confidence interval for the Black-to-Other comparison, but we can certainly see that the difference between Black and White is statistically significant.

For age, we can see that older age is associated with longer waiting times relative to the under-20 reference group, until you get into the 55-and-older group, where the average is slightly lower than the reference, but not statistically significantly so.

But overall there is an association between age and

waiting times as that p value for testing for any differences is less than 0.001.

And similarly, those on Public insurance have longer waiting times by about 3.5 minutes on average compared to those on Private, and statistically significantly so. Those in the Other category, which includes those who self-pay, who have no insurance, have average waiting times 11 minutes longer than those on Private.

And that is also statistically significant; and as we can already tell from seeing some of these differences, on the whole there is an association.

So let's look at the adjusted estimates.

So this estimate for males now compares males to females: the difference in average waiting time among those of the same race, age category (at least), and payer type.

And this is very similar.

It's still negative,

males have lesser time than comparable females by 2.1 minutes.

And it's statistically significant.

So the comparison here is the average difference in waiting times for

males to females of the same race, age, and

payer is such that males have waiting times 2.1 minutes lower on average.

So from a confounding perspective, at least in terms of race, age, and payer, these don't seem to confound the overall unadjusted association much: it's still negative after adjustment, similar in value, and statistically significant.

For race, we might ask whether some of the original difference by race had to do with sex, age, or payer type differences between the racial groups.

Now, let's compare Black to White after adjustment for these other things. This 18-minute difference here is the average difference between Black and White patients after adjusting for, or, another way to say it, among those of the same age, sex, and payer type.

And this difference remains: slightly smaller, but substantively equivalent, more than 15 minutes, more than a quarter of an hour, and statistically significant. And the difference between Other and White stays almost exactly the same. So it doesn't appear that those original differences by race are explained by differences in sex, age, or payer type between these racial categories as they affect waiting time.

And this p-value here is testing the null that there's no association between waiting times and race, after accounting for sex, age, and payer type.

If we go down to age, I'll let you look at the details, but there doesn't appear to be much confounding; the associations are similar in terms of their magnitude and significance after adjustment.

This is the p-value from testing the null that the slopes for the three age categories are all equal to each other and to 0, from the model that has already accounted for sex, race, and payer type. So after adjusting for sex, race, and payer type, the null here is that there is no association between waiting times and age, and that null is rejected.

So this says that age gives us information about waiting times above and beyond those other three predictors.

And the story with payer type remains very similar as well.

So if we were to write this model out, just to have some fun and show what it looks like, here's the intercept: this adjusted model looks like this.

We have our average waiting time: y-hat equals the intercept, 46.5, plus negative 2.1 times x1, where x1 is 1 for males and 0 for females. Plus, for the race part, 18.0 times x2 plus 2.6 times x3, where x2 is 1 if the subject identifies as Black, 0 if not, and x3 is 1 if they identify as Other, 0 if not. Then we have the age component, in four categories: plus 5.2 times x4, plus 4.9 times x5, plus negative 0.1 times x6, where x4 is 1 if the person is 20-34 years old and 0 if not, and x5 and x6 represent the other two categories. And we'd have payer type: plus 3.3 times x7 plus 10.0 times x8.

So there are eight x's in this model, but only four predictors: sex, race, age, and payer type; some of these predictors require multiple x's. That's just to give you a sense: these models can be big, even bigger than this.
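The whole equation can be packaged as an intercept plus a sum of 0/1 indicators times their slopes. A sketch using the slide's coefficients (the indicator names, and which two age categories x5 and x6 stand for, are my own illustrative labeling):

```python
# Slide's adjusted coefficients; the names are illustrative labels.
COEFS = {
    "intercept": 46.5,
    "male": -2.1,                       # vs. female (reference)
    "black": 18.0, "other_race": 2.6,   # vs. white (reference)
    "age_20_34": 5.2, "age_mid": 4.9, "age_55_plus": -0.1,  # vs. youngest
    "payer_public": 3.3, "payer_other": 10.0,               # vs. private
}

def predicted_wait(**indicators):
    """Intercept plus each supplied 0/1 indicator times its slope."""
    return COEFS["intercept"] + sum(
        COEFS[name] * value for name, value in indicators.items()
    )

# e.g., a Black male, age 20-34, on public insurance:
print(round(predicted_wait(male=1, black=1, age_20_34=1, payer_public=1), 1))  # 70.9
```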

But nicely, in a paper context, we see a table like this instead of the entire equation written out.

And this is a more concise presentation because we get the estimate and

the confidence interval.

Where do these confidence intervals come from? Well, like I say, it was business as usual. Just to give you an example, though, here are the slopes we presented for the age categories and their estimated standard errors.

So the confidence interval given in the previous table for the mean difference, adjusted for sex, race, and payer type, between those that are 20-34 years old and the youngest group would be the estimate of 5.2 plus or minus two standard errors.

And this looks slightly different than what I showed on the previous slide because I rounded the numbers here, whereas I got the previous ones from the computer; but this is 4.9 plus or minus 2(1.4).

Which gives us the confidence interval of 2.1 to 7.7.

Very similar to what was on the previous slide.

And this is negative 0.1 plus or minus 2(1.5), which gives, approximately, the confidence interval we showed before of negative 2.9 to 2.8; so it's the same old business as usual for confidence intervals.
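That estimate-plus-or-minus-two-standard-errors calculation takes a couple of lines, using the slide's age-category slopes and standard errors:

```python
# "Business as usual": approximate 95% CI is estimate +/- 2 standard errors.
def approx_ci(estimate, se):
    return (estimate - 2 * se, estimate + 2 * se)

print(approx_ci(5.2, 1.5))   # roughly (2.2, 8.2)
print(approx_ci(4.9, 1.4))   # roughly (2.1, 7.7)
```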

Okay, so in summary, we've given a couple of examples here of multiple regression, and we've shown how to interpret the resulting slopes.

What would the estimated intercept describe in these models that have multiple predictors? Well, for example, in the model with height, weight, and sex as predictors of arm circumference, the intercept estimates the mean arm circumference for male children, because male was the reference for sex, with zero height and zero weight.

So it's a completely fictional group that doesn't describe anybody in our sample.

So the intercepts in multiple regression frequently don't have any relevance to the population from which our sample is drawn, but they're necessary to specify the entire equation.

But we've seen in this section how to interpret the slopes from multiple

regression models in terms of the comparisons being made, and the mean

differences, and how they're specifically adjusted for other things in the model.

And we've looked at how to assess confounding through two examples, and

we'll look at some more in Section D, by comparing the results from

simple regressions to the adjusted results from multiple linear regressions.

In the next section, Section C,

we'll show how to use these models to predict outcome values for

different predictor values, and how to compare groups and get mean differences

between groups who differ by more than one predictor in a multiple regression model.
