An introduction to the statistics behind the most popular genomic data science projects. This is the sixth course in the Genomic Big Data Science Specialization from Johns Hopkins University.

From the course by Johns Hopkins University

Statistics for Genomic Data Science

101 ratings


Course 7 of 8 in the Specialization Genomic Data Science

From the lesson

Module 2

This week we will cover preprocessing, linear modeling, and batch effects.

- Jeff Leek, PhD, Associate Professor, Biostatistics

Bloomberg School of Public Health

A unique thing about regression modeling in genomics is that you often fit many regression models simultaneously. The reason is that you usually have many measurements, and each of those measurements may be correlated with an outcome that you care about. So here, for example, is a typical genomics data set. You have a large number of features in the rows. That could be tens of thousands or millions of features, whether they're SNPs, methylation measurements at CpG sites, gene expression levels, or transcript expression levels. Then you have some varying conditions, and usually you have some kind of phenotype, like case-control status. You would like to associate each feature with case-control status, and you would like to discover those features that are differentially expressed, or differentially associated with those different conditions.

Â So to do this,

Â you usually end up with a matrix formulation of this same regression model.

Â So you can imagine that, for every single row of this matrix,

Â you'll fit a regression model that has some B coefficients multiplied by some

Â design matrix, multiplied by some variables that you care about,

Â plus some corresponding error term for just that gene.

Â And then you would stack a bunch of these up.

Â So this is a bunch of stacked regressions.

Â I'm showing it here in mathematical notation on the bottom.

Â You write matrix multiplication, to write down these many multiple regressions.

Â And then I'm showing it in block format up here.

Â So you model the data for this gene, Based on these coefficients multiplied by

Â these variables multiplied by, they're adding up this error term right here.

Â And you do this for every single feature that you're modeling.

Â So here's this example where you are looking at gene expression signatures

Â associated with geography in a particular population in Morocco.

Â And so, there's a primary biological variable that you might care about or

Â variable that you care about in this case might be where they actually come from.

Â What's the geography that they come from?

Â And then you might have a bunch of adjustment variables,

Â like are they males or females?

Â What batch they come from?

Â All sorts of other variables that you might have.

Â And so the model actually becomes a little bit more difficult when you're dealing

Â with such a case because there's all sorts of variables that you obviously need to

Â model like the location that the people come from, their sex, the batch.

Â There's also a much more subtle effect.

Â Say, intensity dependent effects in the measurements from the genomic data or

Â dye effects or probe composition effects since this is a microarray.

Â And then many other unknown variables that you might want to model.

Â So when you do this, you actually end up with a slightly more complicated model.

Â Again, this is in colored blocks, the observation version of this.

Â And so again, you might model the measurements for one gene that are in one

Â row as a function of the coefficients in one row times the set of variables that

Â you actually care about, in this case it might be geography.

Â Plus the coefficients in one row for a set of adjustment variables that you might

Â care about plus the random variation for that one row.

Â So now you've got a model that you're fitting many, many regression models.

Â You fit them all the exact same way as you fit a single regression model but

Â now you have to interpret them jointly.

Â And so there's a couple of different things that are difficult.

Â One is that you have hundreds, thousands or millions of model fits

Â at the same time and for each one we have estimates of the variables, the residuals,

Â we have the fitted values and there can be structure in any of those things.

Â There can be structure in the estimates, there can be structure in the noise and

Â there's all sorts of issues that may be due to different values of the covariance

Â and different unmeasured confounders.

Â So the key here is that we need to think of linear models as

Â one tool that can be applied many, many, many times across many different samples.

Â So there's actually a class on regression models that you can look at, but

Â I also highly recommend this paper on linear models for microarray data.

Â While there's talking specifically about microarray data,

Â and obviously there's new technologies that have been developed since then,

Â this is a really nice treatment of all the issues that come when you're doing many,

Â many regression models simultaneously.

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.