An introduction to the statistics behind the most popular genomic data science projects. This is the sixth course in the Genomic Big Data Science Specialization from Johns Hopkins University.


From the course by Johns Hopkins University

Statistics for Genomic Data Science



From the lesson

Module 2

This week we will cover preprocessing, linear modeling, and batch effects.

- Jeff Leek, PhD, Associate Professor, Biostatistics

Bloomberg School of Public Health

One of the critical steps in a genomic analysis is normalizing the samples so that they have common distributions. You particularly want to do this when the distributions are likely being driven by some technical variables. So, I'm going to show one example of one of the most popular ways to normalize data across samples.

So the first thing that I'm going to do is set up the plotting parameters the way I typically use them, and then load the libraries that I need, in this case preprocessCore, the main library that we're going to be using for this example. Then I'm going to load in this data set, a combination of the Montgomery and Pickrell data sets, for doing these examples, and I'm going to basically extract out the phenotype data, the expression data, and the feature data so that they're easier for us to work with when doing these examples. And so, once I've done that, I'm going to log-transform the data and remove the low values. So now I'm left with an expression data set that has about 5,862 genes in it and 129 samples.
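The setup described above might look like the following in R. The data URL, the loaded object name `mp`, and the filtering threshold are assumptions based on the course materials, not stated in the transcript:

```r
# Load the combined Montgomery/Pickrell ExpressionSet
# (assumed location from the course materials; the loaded object is `mp`)
library(Biobase)
con = url("http://bowtie-bio.sourceforge.net/recount/ExpressionSets/montpick_eset.RData")
load(file = con)
close(con)

pdata = pData(mp)                 # phenotype data: one row per sample
edata = as.data.frame(exprs(mp))  # expression counts: genes x samples
fdata = fData(mp)                 # feature (gene-level) annotation

# Log-transform and drop lowly expressed genes
# (the > 3 cutoff is an assumption; the transcript reports
# roughly 5862 genes and 129 samples after filtering)
edata = log2(edata + 1)
edata = edata[rowMeans(edata) > 3, ]
dim(edata)
```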

So the first thing I want to do is show the distribution of each of these different samples. As we talked about in exploratory analysis, one way to do that is to basically make a plot of the density of each of these samples. So here, I'm going to make a plot of the density of the values from the first sample. I'm going to use the first color from this color ramp and plot it here. So here is the distribution for the first sample, and then what I'm going to do is write a loop that loops over each of the other samples, from 2 to 20, because I already did sample one, so that I make a density plot for 20 of the samples. In each iteration I'm going to use lines to overlay another curve from the color ramp on top of that plot. When I do that, I can see that some of those samples have nearly identical distributions and some of them have big distributional differences between the samples. That's likely due to technology and not due to biology.
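A sketch of this plotting loop, using a synthetic log-scale expression matrix as a stand-in for `edata` so it runs on its own (the color-ramp call mirrors what the lecture describes):

```r
set.seed(135)
# Stand-in for the log-transformed expression matrix (genes x samples)
edata = matrix(rnorm(5862 * 129, mean = 8, sd = 2), nrow = 5862)

colramp = colorRampPalette(c(3, "white", 2))(20)

# Density of the first sample, then overlay samples 2 through 20
plot(density(edata[, 1]), col = colramp[1], lwd = 3, ylim = c(0, 0.30))
for (i in 2:20) {
  lines(density(edata[, i]), lwd = 3, col = colramp[i])
}
```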

So one thing that we can do is quantile normalization, like we talked about. That's basically going to force the distributions to be exactly the same. And the way I'm going to do that is using the preprocessCore package: I'm going to convert the data to a matrix and apply the normalize.quantiles function to it. So now I have a new data set; what this returns is a new data set of the same size. So if I look at the dimensions of edata and the dimensions of norm edata, you can see they're exactly the same size, but the values have been quantile normalized.
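The lecture uses `preprocessCore::normalize.quantiles` for this step. As a from-scratch illustration of what that function does, here is a minimal quantile normalization (ties are broken arbitrarily here, which the package handles more carefully):

```r
# Minimal quantile normalization: replace each value by the mean of the
# values at the same rank across all samples
quantile_normalize = function(mat) {
  ranks = apply(mat, 2, rank, ties.method = "first")
  means = rowMeans(apply(mat, 2, sort))  # average quantile across samples
  apply(ranks, 2, function(r) means[r])
}

set.seed(32)
# Two samples with very different distributions
x = cbind(rnorm(1000, mean = 0, sd = 1), rnorm(1000, mean = 2, sd = 3))
qx = quantile_normalize(x)

# After normalization the two columns have identical sorted values,
# i.e. exactly the same distribution
isTRUE(all.equal(sort(qx[, 1]), sort(qx[, 2])))  # TRUE
```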

So now, again, I can make a density plot of the distribution for the normalized data, and it looks like this after normalization. And then I can again loop over the first 20 samples and add lines overlaid on top of that plot. And you see that when I do that, they basically all land right on top of each other. Now, there's a little bit of variability down here on the low end; that's because the quantiles for the very low values are difficult to match up, so often you'll see a little bit of variation in the low values or the really high values after quantile normalization. But for the most part, the distributions lie exactly on top of each other now.

And so the cool thing here is that while this removes the differences in the bulk distributions, it hasn't removed the gene-by-gene variability. So here, what I'm going to do is plot the normalized values for the first gene and color them by study. And you can still see that there's a difference between the two studies. Even though overall the bulk distributions are about the same, in any individual gene you might still have differences between the studies.
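This per-gene view might look like the following sketch. Synthetic stand-ins are used for `norm_edata` and `pdata` so it runs on its own; the study sizes (60 Montgomery, 69 Pickrell) and the injected study effect are illustrative assumptions:

```r
set.seed(7)
# Stand-ins for the normalized matrix and phenotype data, with a
# study effect injected into the first gene for illustration
pdata = data.frame(study = factor(rep(c("Montgomery", "Pickrell"), c(60, 69))))
norm_edata = matrix(rnorm(100 * 129, mean = 8), nrow = 100)
norm_edata[1, pdata$study == "Pickrell"] =
  norm_edata[1, pdata$study == "Pickrell"] + 2

# First gene after normalization, colored by study: the two studies
# still sit at visibly different levels
plot(norm_edata[1, ], col = as.numeric(pdata$study),
     xlab = "Sample", ylab = "Normalized expression")
```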

So to see that more clearly, we can actually do a decomposition. If we do the SVD on the normalized data, we're going to subtract the row means again so that the first pattern of variation isn't just each gene's average level. And then I'm going to plot the first singular vector versus the second singular vector, that usual plot that people make. I can color that by study, and you can see that the samples are still separated by study.
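The SVD step described here can be sketched as follows, again with synthetic stand-ins for `norm_edata` and `pdata` (row means are subtracted so the leading singular vectors capture variation across samples rather than overall gene abundance):

```r
set.seed(7)
# Stand-ins with a study effect in the first 10 genes, for illustration
pdata = data.frame(study = factor(rep(c("Montgomery", "Pickrell"), c(60, 69))))
norm_edata = matrix(rnorm(100 * 129, mean = 8), nrow = 100)
norm_edata[1:10, pdata$study == "Pickrell"] =
  norm_edata[1:10, pdata$study == "Pickrell"] + 2

# Center each gene, then decompose
svd1 = svd(norm_edata - rowMeans(norm_edata))

# First vs. second right singular vector, colored by study:
# the samples separate along the study effect
plot(svd1$v[, 1], svd1$v[, 2], col = as.numeric(pdata$study),
     xlab = "First singular vector", ylab = "Second singular vector")
```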

So even though we've done quantile normalization, and the samples all have sort of the same distribution, we haven't removed the gene-to-gene variability in expression patterns. So that's an important thing to keep in mind: even though you've normalized out the total distribution, you can still have artifacts like batch effects or other types of artifacts in the data set.
