A practical, example-filled tour of simple and multiple regression techniques (linear, logistic, and Cox PH) for estimation, adjustment, and prediction.

From the course by Johns Hopkins University

Statistical Reasoning for Public Health 2: Regression Methods

From the lesson

Module 2A: Confounding and Effect Modification (Interaction)

This module, along with Module 2B, introduces two key concepts in statistics/epidemiology: confounding and effect modification. A relationship between an outcome and an exposure of interest can be confounded if another variable (or variables) is associated with both the outcome and the exposure. In such cases, the crude outcome/exposure association may over- or under-estimate the association of interest. Confounding is an ever-present threat in non-randomized studies, but results of interest can be adjusted for potential confounders.

- John McGready, PhD, MS, Associate Scientist, Biostatistics

Bloomberg School of Public Health

In this lecture set, we're going to formally define and discuss some ways of dealing with something we've alluded to and spoken of before: the idea of confounding. So in this set of lectures, we will formally define confounding, give some explicit examples of its impact, define the idea of adjustment and adjusted estimates conceptually, and begin a discussion of the analytic approach to adjustment for confounding.

So first, let's give a formal definition and some examples of the impacts, or potential impacts, of confounding. In this lecture set, we're going to formally define confounding; establish conditions which can result in the confounding of an outcome/exposure relationship, or more generally a relationship between two variables; and demonstrate the potential effects of confounding on measured associations via several examples.

So let's just get started with a very over-the-top, fictitious example, just to hammer home the idea, or to start the discussion about the idea, of confounding. Consider the following results from a fictitious study. This study was done to investigate the association between smoking and an outcome of, we'll just say, a certain disease, in a population of adults, both males and females. A random sample was taken, and the subjects were classified as to their smoking status at the time of the study.

There were 210 smokers and 240 non-smokers.

They were then assessed as to whether they had the disease of interest or not. This was not a particularly awful disease, but it was relatively prevalent in this fictitious population.

So, here are the results, broken out by disease and smoking status. If you analyze the results of this two-by-two table and look at the association between smoking and disease, you can compare the proportion of smokers who have the disease to the proportion of non-smokers who do. This relative risk is 0.93, indicating that, at least in this sample, there is a slightly lower risk of disease among the smokers. Now, we haven't accounted for sampling variability, but at least by this estimate, in this sample, smoking appears to be protective against disease by a small amount.

But how can this happen? This goes against everything we understand about smoking and its association with morbidities. So let's look at our data a little further. Let's look at the relationship between smoking and the sex of the person. This is data we collected, but now I'm showing you a representation of the data that compares smoking status by sex. If you look at the smokers, the proportion who are male is about 76%, so the majority of the smokers in the sample, by a fair amount, are male. But among non-smokers, the proportion who are male is about 16%. So right off the bat, we see pretty clearly that there's a strong association between sex and smoking status in these data.

We can also look at the association between sex and the disease outcome. Among the persons who have this disease, the proportion who are male is 28%, while the proportion of males amongst those without the disease is 50%. So we can see that disease is associated with sex as well, and females are more prevalent amongst the diseased. So what's going on here? We want to associate disease with smoking, but there's this third variable, sex, which seems to be related to both disease and smoking. Because of that, it may be explaining some of the association, or lack of association, that we find when we look directly at the association between disease and smoking and ignore this information about sex.

So let's think about this comparison. The comparison of the disease risk between the smokers and non-smokers is potentially distorted, nullified, or lessened, if you will, by the disproportionate percentage of males among the smokers. When we make a comparison of smokers to non-smokers on the relative risk scale, remember that the percentage of males among smokers is 76%, roughly eight out of ten. So among the smokers, for every ten we look at, there would be about eight males and two females. But the percentage of males is only 16% among non-smokers, so, rounding up, we'd expect to see only two in ten males among the non-smokers. This comparison is heavily imbalanced in terms of the sex distribution of the numerator and denominator. And recall, males in the sample are less likely to be diseased than females. So if we take this ratio as is, we're getting something distorted by the fact that the majority of the numerator consists of people who are male, and who are less likely to have the disease. That's why we're seeing this minor negative association between smoking and disease.

So again, the original outcome of interest is disease, and the original exposure of interest is smoking. In this sample, sex is related to both the outcome and the exposure, and that relationship is possibly impacting the overall relationship between disease and smoking. So how can we assess whether, or to what degree, sex is distorting the overall relationship? Well, one approach is to remove the variability in the sex distributions between the smokers and non-smokers by stratifying our data and looking separately at each of the two sex groups.

So let's look at males only. I disentangled this data (you can't do it directly from the way I presented it before, but I have all the data in a database) so that we can look at the relationship between disease and smoking in males only. This comparison is not corrupted by a different distribution of males and females in the smoking and non-smoking groups, because we're only looking at males. The relative risk of disease for smokers versus non-smokers in this group of males turns out to be 1.8: an estimated 80% increase in the risk of disease for smokers compared to non-smokers among the males. If we do the same thing for females, we get a relative risk of disease for female smokers versus non-smokers of 1.5: an elevated risk of 50% in the sample of females for smokers compared to non-smokers.

So, a recap. For the overall (sometimes called the crude, or unadjusted) relationship between smoking and disease, the relative risk was nearly 1 and the risk difference was nearly 0. So at first pass, it didn't appear that there was much of an association between smoking and disease; in the sample, smoking even looked somewhat protective. However, when we looked at the data separately by sex, we saw increased risk of disease for smokers compared to non-smokers in both groups: 80% and 50%, respectively. We're just looking at the estimates for now, so we're not considering statistical significance, but we will shortly.
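The lecture's full two-by-two tables aren't reproduced in this transcript, so the sketch below uses hypothetical counts chosen only to recreate the qualitative pattern just described (a crude relative risk below 1, with both stratum-specific relative risks well above 1); the numbers are illustrative, not the study's actual data:

```python
# Hypothetical counts, NOT the lecture's actual table: each cell is
# (number diseased, number in group) for a smoking group within a sex stratum.
strata = {
    "male":   {"smoker": (29, 160), "nonsmoker": (4, 40)},
    "female": {"smoker": (38, 50),  "nonsmoker": (100, 200)},
}

def relative_risk(exposed, unexposed):
    """Risk ratio comparing two (diseased, total) cells."""
    (d1, n1), (d0, n0) = exposed, unexposed
    return (d1 / n1) / (d0 / n0)

# Stratum-specific relative risks: both come out above 1 (smoking looks harmful)
for sex, cells in strata.items():
    print(sex, round(relative_risk(cells["smoker"], cells["nonsmoker"]), 2))

# Crude relative risk: pool the strata first, then compare; it falls below 1
pooled = {
    grp: (sum(strata[s][grp][0] for s in strata),
          sum(strata[s][grp][1] for s in strata))
    for grp in ("smoker", "nonsmoker")
}
print("crude", round(relative_risk(pooled["smoker"], pooled["nonsmoker"]), 2))
```

Because the smokers in these made-up counts are mostly male, and the males carry the lower disease risk, the pooled comparison drops below 1 even though each stratum's relative risk exceeds it.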

So this is a pretty striking result. The background combination of the increased prevalence of smoking among the males, and the decreased risk of disease among the males, made it look like there was little association between smoking and disease when we compared all smokers to all non-smokers in the sample. The overall association was being heavily influenced by this imbalance in the sex distribution between the exposed and unexposed groups, and sex was related to the risk of disease. So what we were seeing was mainly a negation of the overall smoking and disease relationship because of this sex component. When we removed sex from the story and looked at the association separately for males and females, we saw a positive association between smoking and disease in both sex groups.

This example is pretty explicit and contrived, just to illustrate a point. But it illustrates something sometimes called Simpson's Paradox: the nature of an association can change, reverse direction, or disappear when data from several groups are combined to form a single group. In other words, when we took the entire sample of males and females together, we missed the association between smoking and disease.

So, consider an association between an exposure, X, and a disease or outcome; let's say, more generally, an outcome Y. Even more generally, we could say an association between any two variables X and Y; we don't have to explicitly make one the exposure and one the outcome. This association can be confounded by another variable, sometimes called a lurking or hidden variable, Z, or by multiple hidden or lurking variables, that are associated with both the exposure and the disease. What a confounder, or a set of confounders, does is distort the true relationship between X and Y. This can only happen if our confounder or confounders are related to both X and Y. In the example we just looked at, sex was related to both the smoking status and the disease status of the participants in the study. When this sort of thing is going on, if we look at a Venn diagram description, there might be some crossover in the information relating Y to X; there might be some distortion or crossover because of the relationship of both with this third variable or set of variables.

So what's the solution for confounding? What can we do about it? Well, if we don't know what the potential confounders are, there's not much we can do after the study is completed. Randomization, as a study design, is the best protection against confounding. Randomization essentially eliminates (and we'll look at a pictorial of this in a minute) the potential links between the exposure of interest and the potential confounders Z1 through Z3, through however many confounders we have. The nice thing about randomization is that it breaks the potential links both with confounders we could think of and measure, and with confounders we never considered when planning the study. But in many cases, as we've talked about, we can't randomize our exposure of interest. If you can't randomize, but have some sense of what the potential confounders are, there are statistical methods to help control for confounding; this is called adjusting for confounders. This is a tricky thing, though, because potential confounders must be known in advance and measured as part of the study, and there's always going to be that nagging question: did we measure all potential confounders?

So why does randomization minimize the threat of confounding? Let's look at a situation where we have some outcome, which I'll generically call Y, and some predictor X, and there are variables behind the scenes that may confound this association. In order to confound the association (either distort it or hide it) through behind-the-scenes relationships, these potential confounders have to be related to both the outcome and the exposure. So what happens when we randomize? Suppose we're looking at a drug trial, studying some drug and its impact on a condition that's been shown to be related to age and sex. There's nothing we can do to change the relationship in nature between the condition and age and sex. But if we're doing a randomized trial, randomizing subjects to receive either the drug or a placebo, then by randomizing them to the two groups we can eliminate any systematic links between age, sex, and other characteristics, and which treatment group subjects are in. Randomization eliminates this potential systematic link between the exposure groups and the potential confounders. And remember, in order to confound an association, a variable has to be related to both the exposure and the outcome. So by getting rid of that link, we're minimizing the threat of confounding.
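As a quick sanity check of that claim, the toy simulation below (my own illustration, not from the lecture) flips a fair coin to assign each subject to a treatment arm and confirms that a background trait such as age ends up nearly identically distributed in the two arms:

```python
import random

random.seed(1)
n = 10_000

# A background trait that, in nature, relates to the outcome (e.g., age)
ages = [random.gauss(50, 10) for _ in range(n)]

# Randomized assignment: a coin flip that ignores age entirely
treated = [random.random() < 0.5 for _ in range(n)]

mean_treated = sum(a for a, t in zip(ages, treated) if t) / sum(treated)
mean_control = sum(a for a, t in zip(ages, treated) if not t) / (n - sum(treated))

# With random assignment there is no systematic age difference between arms,
# so the confounder-to-exposure link needed for confounding is broken.
print(round(mean_treated - mean_control, 2))
```

The age difference between arms is only chance variation, shrinking toward zero as the trial grows, which is exactly the "no systematic link" property described above.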

So, let's look at another study: an observational study to estimate the association between arm circumference and height in Nepali children. We've looked at these data several times before. Suppose we have 150 randomly selected children between 0 and 12 months old, who had their arm circumference, weight, and height measured. In fact, we looked at this recently in the unit on linear regression. This study is clearly observational; it's not possible to randomize subjects to height groups. In these data, the arm circumference range is 7.3 to 15.6 centimeters, the height range is as stated here, and the weight range is 1.6 to 9.9 kilograms.

As we saw back in the unit on linear regression, if we fit a linear regression to estimate the association, we found a positive, and statistically significant, association between arm circumference and height. But notice, perhaps not surprisingly, that arm circumference is strongly, positively associated with the weight of the child, and height is positively and strongly associated with the weight of the child as well. The lines here are the respective regression lines of arm circumference on weight, and of height on weight.

Here's what we get if we re-estimate the relationship between arm circumference and height (we'll talk about adjustment in the next section) but remove the behind-the-scenes relationship between arm circumference, weight, and height; in other words, if we adjust for weight differences across the different height groups. The association we now get between arm circumference and height is negative. In other words, when we compare children who differ by height, we've made it such that the comparison is amongst children of the same weight. We'll talk about this more in the next section, but essentially, if we compare children of similar weight who differ by height, then amongst groups of children who are similar in weight, the relationship between arm circumference and height is negative. The estimated regression slope for height, after pulling out the behind-the-scenes association of these two variables with weight, is negative: negative 0.16. Just consider that for a moment, and think about why that may be.
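A small simulation can make the sign flip concrete. The data below are synthetic (the ranges and coefficients are my assumptions, loosely mimicking this example, not the actual Nepali dataset): weight tracks height closely, arm circumference rises with weight but falls slightly with height, and the unadjusted regression on height alone picks up the wrong sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150

# Synthetic children: weight (kg) increases with height (cm), and arm
# circumference (cm) depends positively on weight but negatively on height.
height = rng.uniform(45.0, 80.0, n)
weight = -6.0 + 0.20 * height + rng.normal(0.0, 0.5, n)
arm_circ = 12.0 + 1.0 * weight - 0.16 * height + rng.normal(0.0, 0.3, n)

def ols_slopes(y, *predictors):
    """Least-squares coefficients for y on an intercept plus the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slope_1, slope_2, ...]

crude_slope = ols_slopes(arm_circ, height)[1]             # positive: confounded by weight
adjusted_slope = ols_slopes(arm_circ, height, weight)[1]  # near the true -0.16
print(round(crude_slope, 3), round(adjusted_slope, 3))
```

Holding weight fixed, the taller children here tend to be leaner, which is one plausible reading of the negative weight-adjusted slope the lecture reports.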

Here's another study we can look at; this is a pretty interesting example. This was a longitudinal study done in South Africa: a birth cohort followed for five years after birth. What the researchers did was collect information on this birth cohort at birth and follow the subjects up at the five-year mark. There was a fair amount of dropout in this study, understandably, if subjects were only measured once every five years. What the researchers wanted to see, for the design of future studies, was whether there was any information they could use to predict who would drop out and who wouldn't, so that they could perhaps customize their follow-up intensity depending on these characteristics.

So they looked at whether or not the subjects, or the families, who were initially selected participated in the follow-up, by their medical aid status: whether they had public insurance or not. What they found in the overall cohort was that the relative risk of participating in the follow-up visit, for those who received public insurance or medical aid compared to those who didn't, was 0.7. So it looks like medical aid was associated with a reduction in follow-up participation on the order of 30%, based on these data.

But it turns out that if they stratified this by the race of the participants (they were classifying race as black and white) and looked first at the black participants only, the relative risk of follow-up for black participants on medical aid versus those not was equal to 1. There was absolutely no difference in the proportion followed up after five years amongst those on medical aid and those not on medical aid. Similarly, if they looked at the white participants only, comparing those who received public insurance or aid from the government versus those who didn't, the relative risk of follow-up among white participants on medical aid compared to those not was 1.05: a slightly elevated proportion participating in the follow-up visit among those on medical aid, amongst the white participants.

So let's pause for a moment and think about what's going on here. When we looked at everyone together, there was a negative association between participation in the follow-up visit and being on medical aid. But when we stratified and looked separately by race group, there was little to no association between participation in the follow-up visit and medical aid. So what's going on? Well, if we go back and look at some characteristics of the sample, the majority, 91%, were black. And 26% of the black families or subjects completed the follow-up, as opposed to only 9% of the white subjects. However, only 9% of the black subjects had medical aid, compared to 83% of the white subjects.

So what's going on here? In the initial comparison of medical aid versus no medical aid, the numerator was majority white and the denominator was majority black, but whites were much less likely (9% versus 26%) to participate in the follow-up visit. So this comparison was distorted by the disproportionate number of white families receiving medical aid, and by the fact that white families were far less likely to participate in the follow-up.
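The reported marginals are actually enough to recover the crude relative risk arithmetically. The back-of-the-envelope sketch below is my own calculation, not from the lecture materials: it takes the stated figures (91% black; follow-up of 26% versus 9% by race; medical aid coverage of 9% versus 83% by race; within-race relative risks of 1.0 and 1.05) and solves for the pooled comparison.

```python
# Reported marginals from the cohort description
share = {"black": 0.91, "white": 0.09}       # race composition of the cohort
aid = {"black": 0.09, "white": 0.83}         # fraction on medical aid, by race
followup = {"black": 0.26, "white": 0.09}    # overall follow-up rate, by race
stratum_rr = {"black": 1.00, "white": 1.05}  # within-race RR (aid vs. no aid)

# Within each race, solve for the follow-up rate among those on aid from
#   aid_frac * p_aid + (1 - aid_frac) * (p_aid / rr) = overall rate
p_aid = {r: followup[r] / (aid[r] + (1 - aid[r]) / stratum_rr[r]) for r in share}
p_noaid = {r: p_aid[r] / stratum_rr[r] for r in share}

# Pool across races to get crude follow-up rates for aid vs. no aid
crude_aid = (sum(share[r] * aid[r] * p_aid[r] for r in share)
             / sum(share[r] * aid[r] for r in share))
crude_noaid = (sum(share[r] * (1 - aid[r]) * p_noaid[r] for r in share)
               / sum(share[r] * (1 - aid[r]) for r in share))

# The pooled ratio lands right at the reported crude RR of about 0.7
print(round(crude_aid / crude_noaid, 2))
```

The medical-aid group is dominated by white families, who follow up far less often, so the pooled ratio falls well below the near-null within-race ratios, which is the same Simpson's-paradox mechanism as in the smoking example.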

Let's talk about one more example. I'm not going to give a specific example per se, but I want to talk about something that comes up a lot today with genomics, sequencing, and gene expression studies, and that has been discovered to be a big problem that needs to be corrected for both in the design of the study and in the analysis: what are called batch effects in lab-based analyses. Lab-based results can be influenced by the technician, the laboratory used, the time of day, the temperature in the lab, et cetera. So if the goal of a study is to ascertain differences in lab measures between groups (for example, differences in gene expression levels between those with a disease and those without), and the group, like disease or non-disease, is associated with at least some of the above characteristics, then there can be confounding. For example, suppose the majority of the diseased subjects under study were analyzed by technician 1, the majority of the non-diseased subjects were analyzed by technician 2, and the study finds a differential in the measured lab result, on average, between these two groups. That could be because the result differs among those who are diseased and non-diseased, but it could also be heavily influenced by the fact that the technician was correlated, or associated, with both the disease group and the measurements themselves. So it's just something to think about: even in non-population-based studies, in settings that seem to be well controlled in laboratory situations, there can be a threat of confounding.

So, in summary: in non-randomized studies, outcome/exposure relationships, or more generally relationships between two measures of interest, may be confounded by other variables. In order to confound the outcome/exposure relationship, a variable must be related to both the outcome and the exposure. Here's what we'll look at in the next two sections. In the next section, we'll talk about interpreting what's called a confounder-adjusted association: what it means, and what comparisons are being made by that association. We alluded to this in the arm circumference, height, and weight example in this section, but we'll talk about it formally. Then, in section C, we'll give a little intuition behind the mechanics of adjustment. And that will set the stage for our next chapter on multiple regression methods.
