A practical, example-filled tour of simple and multiple regression techniques (linear, logistic, and Cox PH) for estimation, adjustment, and prediction.


From the course by Johns Hopkins University

Statistical Reasoning for Public Health 2: Regression Methods


From the lesson

Module 3B: More Multiple Regression Methods

This set of lectures extends the techniques debuted in lecture set 3 to allow for multiple predictors of a time-to-event outcome using a single, multivariable regression model.

- John McGready, PhD, MS, Associate Scientist, Biostatistics

Bloomberg School of Public Health

Greetings. In Section C we're going to talk a little bit about handling non-linear relationships with a continuous predictor in regression, and about the potential advantages, in certain situations, of doing something above and beyond categorizing the continuous predictor. We're going to talk about something called the spline approach.

So in this section, you'll get a brief overview of another method for handling non-linearity in regression, one that allows for a piecewise approach to estimating the relationship between an outcome and a continuous predictor.

To get this started, we're going to look at yet another arm circumference example, this time on a sample of 1,000 Nepalese children between 0 and 60 months old. So we have a pretty wide age range, from birth to five years. We're going to try and quantify the relationship between arm circumference and weight using regression.

And here's a scatterplot that shows the arm circumference and weight values for the 1,000 children in the sample. Take a look at this for a moment. What do you think the nature of the relationship is between the average arm circumference and a child's weight?

Well, you might first think: let's try fitting a linear association. The outcome is continuous, the predictor is continuous, so let's try linear regression, put the line on our scatterplot, and see how we do.

And for the most part, at least at the upper weights, it looks like we're fitting the data pretty well. But down at or below around five kilograms, it looks like we're missing a key characteristic of the relationship, a potential characteristic anyway, by fitting one line to this overall cloud of points. We may do pretty well with one line, but if we're really trying to better understand growth rates, in terms of arm circumference as a function of weight, we may miss some of the story down in the lower weights.

We could categorize weight into groups. For example, I categorized weight into four quartiles and estimated the mean arm circumference for each of the four weight quartiles; those are the squares on the graph. I put the squares there so we could see what the means look like. And you can see that'll do nicely; it captures the essence of the story. We see a slightly larger jump from the first quartile to the second than from the second to the third, or the third to the fourth. And this might, for most purposes, be sufficient.

But if we're interested in the rate of change, if you will, the amount of change in arm circumference per kilogram change in weight across this entire range of weights, then this categorization approach is not going to help us do that. It's going to tell us, on average, how the mean arm circumference differs across the four weight groups, but it won't tell us, within each of those weight groups, what the relationship between arm circumference and weight is.

So, looking at this picture, you may have thought that the relationship looks somewhat curvilinear. Maybe we could fit a curve to these data, and we could; we're not limited in regression to linear terms. We could actually put in a term for weight and a squared version of it and fit a curve like this. And this might be good if our only interest were predicting arm circumference given somebody's weight.

But the tricky thing is, if we want to use these results to quantify the association between arm circumference and weight in the form of a slope or slopes, we're not going to be able to do it if we fit a curve. A curve is structured such that the slope, the change in average arm circumference per unit change in weight, is different at each weight. So there aren't one or two numerical summaries we can pull out of this to say: here's the association between arm circumference and weight in this weight range, and here it is in another weight range. For quantification purposes, then, this is not necessarily a very useful approach.

So there's one more approach we could take. Maybe we could still estimate the relationship between arm circumference and weight, taking weight as continuous, but allow the relationship to change across the weight range, with each piece represented by a line. That's the idea of the spline: think of "spl" from split and "line" for line; put them together and you get spline.

So what I'm showing here are the results of a regression, which I'll detail in a minute, where I allow two different line slopes in one regression model estimating the relationship between arm circumference and weight. Based on this scatterplot, I picked five kilograms as the change point where the relationship is allowed to change.

So, before we detail this, let's talk generally about the linear spline approach. This allows non-linearity to be investigated by fitting lines with differing slopes across the continuous predictor range. The researcher, or whoever is analyzing the data, can pick the points where the line slope is allowed to change. There are also methods for letting the computer choose those points, but we won't get into the details of that here. The slope can be allowed to change at multiple points across the range of the x variable. We only chose one change point for these data, but we'll show another example with more.
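As a sketch of how such pieces can be built, here is a hypothetical helper (not from the course's software) that creates one truncated term per chosen change point; each column is zero below its knot and grows linearly at or above it:

```python
import numpy as np

def spline_terms(x, knots):
    """Return one (x - k)+ column per knot: zero below the knot, x - k at or above it."""
    return np.column_stack([np.maximum(x - k, 0) for k in knots])

# Example: change points at weights of 5 and 8 kg
cols = spline_terms(np.array([3.0, 5.0, 7.0, 10.0]), [5, 8])
print(cols.tolist())  # [[0.0, 0.0], [0.0, 0.0], [2.0, 0.0], [5.0, 2.0]]
```

Appending these columns to the design matrix alongside x itself is all that's needed to let the fitted slope change at each knot.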

And I like to think of this as a non-linear form of effect modification. If you think about it, non-linearity occurs when an outcome/predictor relationship is different for different ranges of the predictor. For example, the relationship between arm circumference and weight may depend on weight: maybe the trajectory relating arm circumference to weight is different at lower weights than at higher weights. So you can think of this as an outcome/predictor relationship being modified by the predictor itself.

And the way we're going to handle this is very analogous to how we dealt with interaction, or effect modification, in a regression context with an interaction term. We're going to create something similar to that to estimate these changing slopes.

So, again, what we're going to do now is use a technique that allows us not only to estimate differing slopes relating arm circumference to weight for different weight ranges, but also to test whether the change in slope is statistically significant, in other words, whether the data support a change at the population level.

So let's look at how this is set up. It's going to look a little messy at first, but we'll parse it. To estimate the association between arm circumference and weight while allowing for a spline, a changing slope, at five kilograms, this is how I'm going to do it; these are the results I get from the computer.

I'm going to estimate a line that includes a slope for x1, where x1 is weight. And then, and this may be reminiscent of interaction terms, the model includes another copy of x1, but with the change point subtracted off. The reason we subtract off that point is so that the two segments connect: if we didn't subtract it off, there could be a jump between the two lines, and we want to estimate a smooth function here.

But this piece here is very much like an interaction term. We sometimes write a plus sign as a superscript above it to indicate that this extra term, the spline term, is not activated, that is, it equals zero, when we're looking at the relationship between y and x1 for x1 values less than the change point. It only gets turned on, as a copy of x1 minus 5, weight minus 5, when we're looking at the association between y and x1 at or beyond the cut point, the change point of 5. In symbols: estimated mean of y = b0 + b1*x1 + b2*(x1 - 5)+, where (x1 - 5)+ equals 0 when x1 < 5 and x1 - 5 otherwise.

So, let's see why this works out. If we're looking at the relationship between our outcome and our predictor, arm circumference and weight, for children whose weights are less than five kilograms, then the (x1 - 5)+ piece is turned off; it's zero. So for children whose weights are less than five, our equation is pretty straightforward and simple: the estimated relationship between mean arm circumference and weight has an intercept of 6.25 and a slope of 1.17.

So, on average, a one kilogram difference in weight is associated with a 1.17 centimeter difference in arm circumference for children whose weights are less than five kilograms.

What happens at or beyond the cut point? Well, we get this term back. At or beyond the cut point, the spline term is just equal to what's in the parentheses: x1, or weight, minus 5 kilograms. If we do a little algebra and multiply this out, you'll see that, just as with interaction terms, we get another copy of our predictor, and we also have to multiply the spline slope by the negative 5. If we regroup, pulling together the terms that don't involve x1 and separating those that do, it looks like this.

When all the dust settles, we have a different intercept of 10.75, and our slope for weight is the sum of the slope we had for children less than 5 kilograms plus the extra coefficient for the spline term, which sums to 0.31.

So, what we're estimating here is that, for children who are less than five kilograms, a one kilogram difference in weight is associated with an average difference in arm circumference of 1.17 centimeters. For children who are at or above five kilograms, the trajectory shifts down substantially. There's still a positive association between arm circumference and weight, but now, when we compare two groups of children who differ by one kilogram, where both groups are at or above five kilograms, the average difference in arm circumference is 0.31 centimeters.

So, we see that the growth relationship, if you will, slows down after five kilograms. Just to show what we've got on this graphic: the slope for the first segment is that 1.17, and as soon as we hit five, the trajectory changes and the slope is 0.31.

If we were to extend the first segment all the way down to the y-axis, we'd hit that intercept of 6.25. If we were to extend the second segment down to the y-axis, we'd get, and I didn't draw it to scale here, the intercept created by adding the original intercept to the extra piece that comes from the spline term: 10.75.

You can see that we're now able to quantify the change in average arm circumference per one kilogram difference in weight separately and differently for these two weight ranges, still using linear regression.

This is really nice because, beyond the estimate based on our data, we can also test whether, at the population level of all such children, zero to five years old, there is a real, statistically significant change in the relationship between arm circumference and weight after five kilograms.

And that's the key to the testing: the spline coefficient is our estimate of that change piece. We test whether it is zero or not to assess whether these data show evidence of a change, at five kilograms, in the relationship between arm circumference and weight at the population level.

If we were to do this, the p-value is very low, and so these data show a statistically significant change in the association at that weight.

We would use the computer for this; certainly, if you had the necessary pieces of information, you could do it by hand as well, but the standard errors for these quantities are harder to get. The computer will also give us confidence intervals for each of these slopes.

So for the first slope, the estimate was 1.17 and the confidence interval went from 1.02 to 1.32. The slope after five kilograms, which is the combination of 1.17 plus that negative 0.86, the coefficient quantifying the change in slope from before five to after five, is 0.31, with a confidence interval of 0.29 to 0.32. The reason this interval is much narrower than the first is that there are more data points in the after-five-kilogram range than before it.

What we can see with these data is that the relationship between arm circumference and weight is positive and statistically significant for both groups of children, those less than five kilograms and those at or above five kilograms. But not only are the estimates different, the confidence intervals don't overlap, and that's consistent with the test result that the change was real and statistically significant.
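To make the mechanics concrete, here is a minimal sketch of the whole fit on simulated data whose intercept, slopes, and knot are chosen to mimic the lecture's numbers (the actual Nepalese-children dataset is not reproduced here, so the data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated weights (kg) and arm circumferences (cm) loosely mimicking
# the lecture's estimates: intercept 6.25, slope 1.17 below 5 kg,
# change of -0.86 at the 5 kg knot
weight = rng.uniform(2, 12, 1000)
spline = np.maximum(weight - 5, 0)        # (weight - 5)+, zero below 5 kg
arm = 6.25 + 1.17 * weight - 0.86 * spline + rng.normal(0, 0.5, weight.size)

# Fit y = b0 + b1*weight + b2*(weight - 5)+ by ordinary least squares
X = np.column_stack([np.ones_like(weight), weight, spline])
b0, b1, b2 = np.linalg.lstsq(X, arm, rcond=None)[0]

print(round(b1, 2))       # slope below 5 kg, close to 1.17
print(round(b1 + b2, 2))  # slope at/above 5 kg, close to 0.31
```

A full regression routine (for example, statsmodels' OLS summary) would also report the standard error and p-value for b2, which is exactly the test of whether the slope really changes at 5 kg.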

Let's look at another example. We go back to the NHANES data and look at the relationship between obesity and age. We had previously controlled for age in other analyses, and categorized age, because there was some evidence from our LOWESS plot that at least the unadjusted association between the log odds of being obese and age was not linear.

And that's fine. We created age quartiles, estimated the log odds for each of the four age quartiles, and then used the results to get odds ratios for quartiles two through four compared to the reference, quartile one.

But suppose we were actually interested in the change in the log odds, and hence the change in the odds ratio, per year of age across this entire age range, and some preliminary analysis showed it may not be purely linear on the regression scale. What could we do?

Well, we could fit a model that allows for changes. Based on this graphic, I'm going to estimate that the changes occur at 40 years old and 60 years old. That's a subjective decision; you might make a slightly different one and could model it similarly to what I'm doing here.

What we're now doing with this approach is estimating potentially three different associations between the log odds of obesity and age, three different lines. The way to handle this is an extension of what we did before: we put in our main predictor of age, that's x1, and then we create spline terms at 40 and 60.

So, just as a refresher, the first spline term is not activated until we hit 40 years, at which point it takes on the value of age minus 40. And the second is not activated until we're dealing with people 60 years and older, at which point it takes on the value of x1, or age, minus 60.

And we could also control for other things, as we had done previously, like HDL and sex, things that may differ across the age groups and also be related to obesity.

So splines can be used in a simple regression context, as we saw before, but also in a multiple regression context.

I want you to focus on the big picture, but some of you are more interested in the mechanics than others, and if that's you, see if you can reproduce my results here.

What this does is give us different slopes estimating the relationship between the log odds of obesity and age, depending on the age range.

For the less-than-40 group, it's just the main slope for age, that generically placed beta one hat. For the 40-to-60 group, the overall association is the starting slope, the one for the less-than-40 group, plus the coefficient for the spline at 40. The second spline coefficient doesn't come into play until we actually hit 60.

At 60 and beyond, the relationship between the log odds of obesity and age is described by the sum of the slope for the first group, plus the coefficient for the spline at 40, plus the coefficient for the spline at 60. So you can see these pieces add together cumulatively.

These are log odds ratios, and we can exponentiate them to get the respective odds ratios of being obese associated with a one-year difference in age for each of these age ranges.
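To make that arithmetic concrete, here is a small sketch with hypothetical log-odds coefficients. The lecture reports only the resulting odds ratios (roughly 1.04, 1.00, and 0.99), so the betas below are illustrative values chosen to reproduce them, not the actual fitted coefficients:

```python
import math

# Hypothetical coefficients for the two-knot age spline (illustrative only)
b_age      = 0.039   # slope for age while under 40
b_spline40 = -0.039  # change in the age slope at 40
b_spline60 = -0.010  # further change in the slope at 60

# The log-odds slope in each age range is the cumulative sum of the pieces
slopes = {
    "<40":   b_age,
    "40-60": b_age + b_spline40,
    ">=60":  b_age + b_spline40 + b_spline60,
}

# Exponentiate each summed log odds ratio to get the per-year odds ratio
odds_ratios = {rng: round(math.exp(s), 2) for rng, s in slopes.items()}
print(odds_ratios)  # {'<40': 1.04, '40-60': 1.0, '>=60': 0.99}
```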

So, let's see what the results look like; this might be the way we'd present it in a table. I'm going to show you the results before we adjusted for HDL levels and sex, and after.

And you can see that, numerically, when you first look at these odds ratios, they are pretty similar, although the odds ratios for the 40-to-60-year-old and at-or-above-60-year-old groups get closer to one after adjustment.

But let's think about what each of these is telling us, looking at the adjusted results. The odds ratio of 1.04 is the relative odds of being obese for two groups of persons who differ by one year in age, where both groups are less than 40 years old: 30-year-olds versus 29-year-olds, 35-year-olds versus 34-year-olds, et cetera. Each additional year of age is associated with a 4% increase in the relative odds of obesity.

For 40 to 60 years, I've already combined the slopes and exponentiated them, so we don't need to do anything; we can take this odds ratio at face value. There is no association between increasing age and obesity: the relative odds is constant. The odds ratio for a one-year difference in age for those between 40 and 60 is one, indicating no difference in the odds, and it is not statistically significant.

At 60 years and beyond, there is a decreased odds of being obese associated with increased age, a decrease of 1% per additional year of age for people 60 and older, but this is not statistically significant.

So the big picture, and probably what we're seeing evidence of, is that there's a shift at or about 40 years old, after which age is no longer associated with the odds of being obese, while in younger years it is.

And so if I were actually doing the analysis, and not trying to replicate results someone else got on a separate population, I might go back and re-run this with a single spline term at 40 years, to get a better combined estimate of the change, or lack of change, if you will, after 40 years.

Let me show you an example from the literature. This is from an article on soda consumption and physical education classes from the American Journal of Public Health. I'll read you the abstract, because they bring in this idea of non-linearity.

They say: we examine the association of adolescents' beverage consumption with physical activity, and study how the school beverage environment influences the association. We use nationally representative data from the 2007 Early Childhood Longitudinal Study, Kindergarten Cohort, and examine non-linear associations of eighth graders' self-reported beverage consumption, milk, 100% juice, or soft drinks, with moderate-to-vigorous physical activity and physical education participation, using piecewise linear regression models. That's a synonym for splines.

In their results, they say: we found a non-linear association of participation in physical education (PE) class with beverage consumption, especially in schools with vending machines and those selling soft drinks. For students participating in physical education less than three days per week, beverage consumption was not significantly associated with frequency of participation in PE class.

For students participating in physical education three to five days per week, one more day of participation in PE class was associated with 0.43 more times per week of soft drink consumption and 0.41 fewer glasses per week of milk consumption.

So, what they're saying is that the per-unit change in the number of soft drinks consumed on average, or the number of glasses of milk consumed on average, as a function of the number of days of physical education, is different for those who get less physical education, zero to two days per week, versus those who get more, three to five days per week.

They actually show the results of these regressions for three outcomes: soft drinks, or sodas; milk; and juice.

And they share the results of several different models, but you'll notice that in models three and four they have two different entries under participation in PE class, labeled "spline, zero to two days" and "three to five days."

So these are not actual categorical indicators. When they put the word spline, they're saying that the slope is the per-unit association with the outcome, average soft drinks consumed per number of days of phys ed, and they estimate separate slopes for zero to two days and three to five days.

They also do this in model four, where they adjust for the amount of moderate-to-vigorous physical activity. Everything in this model is also, as they note at the bottom, adjusted for the adolescent's gender, age, race/ethnicity, et cetera.

And they explain how to interpret the results from these piecewise regression lines, indicating that some of the p-values are testing for the difference between the two slopes, zero to two days versus three to five days of participation in physical education class, in the piecewise linear regression. So they're just explaining what they did.

Let's look at what they did for soft drinks. These are the results for soft drinks, and I'm going to detail the results of model four, where they included all the adjustment factors they mentioned, age, et cetera, but also highlighted the association with moderate-to-vigorous physical activity and with participation in physical education class.

Here's what the model they ran looks like. They had a slope of negative 0.26 for the number of days of moderate-to-vigorous physical activity, so an increased number of days is associated with a decrease in the average number of soft drinks consumed, adjusting for the amount of physical education and the other things in the model.

And then they had these two pieces for the number of days participating in the phys ed class: x2, the number of days participating, and the spline at three days, x2 minus 3. The spline term is only turned on, activated, for children who participated in three, four, or five days of physical education.

So if you look at the results for adolescents who participated for less than three days, the spline piece disappears, it's not activated, and the sole slope for number of days of physical education is the negative 0.18 they reported. It was not statistically significant: even though there was a negative association between average sodas consumed and the number of days of physical education, it was not statistically significant, and that's what they referred to in the abstract.

For the group who had physical education three or more days, three, four, or five days, we have to turn on the spline term. If you activate it and do some algebra and regrouping, when all the dust settles, the combined slope for number of days of physical education is the original slope plus the coefficient for the spline term. Adding these together gives the 0.43 they were quoting in the abstract and showing in that table: for three or more days of physical education, the association between average sodas consumed and the amount of physical education is positive and statistically significant, even after adjustment for moderate-to-vigorous physical activity and the other adjustment factors.

They give confidence intervals for each of these pieces back in the table I showed you before. They didn't show the change piece itself; they kindly added the two together, so we didn't have to do the adding, and gave us the before and after slopes. But they noted that the change was statistically significant.
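As a quick arithmetic check, the change coefficient the authors omitted is implied by the two slopes they did report; recovering it is my inference from the numbers in the abstract, not something printed in the paper:

```python
# Reported slopes for average soft drinks per week vs. days of PE participation
slope_below_3 = -0.18       # 0-2 days/week (not statistically significant)
slope_at_or_above_3 = 0.43  # 3-5 days/week (statistically significant)

# Combined slope = original slope + spline (change) coefficient, so the
# implied change coefficient is their difference
spline_coef = slope_at_or_above_3 - slope_below_3
print(round(spline_coef, 2))  # 0.61
```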

So, in summary, linear splines offer an alternative to categorizing a continuous predictor when investigating and/or handling potential non-linearity in an outcome/exposure association estimated with regression, whether simple or multiple.

And this approach is very useful when the per-unit change in a measure of association, the mean difference in arm circumference per one kilogram of weight, the change in the relative odds of obesity per year of age, et cetera, is of scientific interest but the association is not necessarily linear on the regression scale.

So this allows us to fit several lines to describe the relationship on the regression scale. For linear regression, that's the whole story; for logistic or Cox proportional hazards regression, we'd have to exponentiate the results to get the corresponding ratio results. But this is a nice approach when that change per unit is of interest.

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.