This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1



From the lesson

Week 4


- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

In this module we're going to talk about group analysis, especially about fixed and random effects, which is a pervasive and important issue in neuroimaging analysis.

So let's recap the multi-level model. There are two levels in our typical model. The first level deals with individual subjects and fits a model within one subject. The second level deals with groups of subjects, and constitutes either a one-sample t-test across subjects or an analysis across groups, like patients versus controls, for example.

So this is a schematic of a group analysis, where we have a model on the time series within each subject. That's the first level, and those models are nested within the group. And, again, all inferences here are performed in the massively univariate setting, so we're still dealing with an analysis at one voxel.

Multi-level models have been developed for analyzing hierarchically structured data like this, and they allow different variance components to be introduced. We'll talk about several different variance components soon, but they essentially reflect variance within subjects and variance between subjects, or individual differences. And these provide a framework for conducting mixed effects analyses.
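As a sketch, the hierarchical structure can be written as a two-level GLM. The notation here is generic and assumed for illustration, not necessarily the symbols used in the slides:

```latex
% Level 1: time-series model within subject k
y_k = X_k \beta_k + \varepsilon_k, \qquad \varepsilon_k \sim N(0,\, \sigma_w^2 I)

% Level 2: subject-level parameters vary around the group mean
\beta_k = \beta_g + \eta_k, \qquad \eta_k \sim N(0,\, \sigma_b^2)
```

Here \(\sigma_w^2\) is the within-subject (measurement) variance component and \(\sigma_b^2\) is the between-subject component, the individual differences.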

So, mixed effects models, hierarchical models, and random effects models in neuroimaging all refer to the same concept. They model multiple sources of variation, which are also called variance components, and these models stand in contrast to what we call fixed effects models in neuroimaging, which have only one variance component. And we mean something very specific in neuroimaging when we say fixed effects model or random effects model: really, what we mean is that we model subject, or participant, as a random effect or as a fixed effect. I'll explain that more in the following slides.

So in the mixed effects analysis, let's assume the signal strength varies across sessions and subjects. There are two sources of variation. One is measurement error: that's all the stuff I can't account for with the experimental design. It could be head-movement related; it could be lots of things. The second is random response magnitude: every subject, or every subject in every session, has a random magnitude for their true response. All we're saying here is that all the subjects are different from one another. So those are our two basic sources of variation, or our two basic variance components.

Always, with these models, the population mean is fixed. So we're assuming that there is some fixed population parameter that we're estimating: for activation, let's say, or for famous versus non-famous face differences.
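This setup is easy to simulate. The following is a toy sketch of the two variance components just described; all numbers (subject count, effect sizes, noise levels) are hypothetical and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, n_timepoints = 20, 100
pop_mean = 1.0        # fixed population response magnitude
sigma_between = 0.5   # SD of true response magnitudes across subjects
sigma_within = 1.0    # SD of scan-to-scan measurement error

# Random response magnitude: each subject's true effect is drawn
# around the fixed population mean
true_magnitude = pop_mean + sigma_between * rng.standard_normal(n_subjects)

# Simple on/off block regressor: 10 scans off, 10 on, repeated 5 times
design = np.tile(np.repeat([0.0, 1.0], 10), 5)

# Measurement error: observed data add scanner noise on top of each
# subject's true response
data = (true_magnitude[:, None] * design[None, :]
        + sigma_within * rng.standard_normal((n_subjects, n_timepoints)))
```

The population mean is fixed; only the subject magnitudes and the scan-to-scan noise are random, matching the two components above.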

So, now let's look at a sample fMRI time series with one source of error. This is an animation over replications of one subject's experiment, where the only thing that's varying is the fMRI noise. What we see here is a fixed population effect, and that's the black line, this block on-off pattern. Every time we sample the actual fMRI data, we get the red line: the black line plus error. So here the only source of variation is the measurement error itself. The true response magnitude is fixed; that's the on-off pattern in the black line that we're trying to estimate, and we're estimating it with error around it. So in this case, the significance test would be based on the estimated response relative to the measurement error variance, the variation of the red around the black line. That's only within-subject noise.

Now let's look at the same thing, but with two sources of error. Now the green line is the true response for an individual subject, which is sampled around the black population-mean line. And when we sample the fMRI data, which is the red line, we're sampling with error around the green line. So now there are two sources of variation. One is the measurement error, the scan-to-scan variability in red, sampled around the true individual response, which is the green line. And the green line has true individual differences that vary around the population mean. So only by including both sources of variation in my error term in a statistical model can I generalize to unobserved subjects. And that's what it means to treat subject as a random effect.
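A back-of-the-envelope way to see this, with notation assumed for illustration: write \(\sigma_w^2\) for the within-subject measurement variance and \(\sigma_b^2\) for the between-subject variance. Then, with \(n\) subjects and comparable first-level designs, the group estimate has

```latex
\operatorname{Var}\!\left(\hat{\beta}_g\right) \approx \frac{\sigma_b^2 + \sigma_w^2\, v}{n}
```

where \(v\) reflects the efficiency of each subject's design. A model that treats subject as fixed drops \(\sigma_b^2\) from this error term, so its inferences only cover the subjects actually observed.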

So let's look more deeply at fixed effects and random effects. A fixed effect is always the same from experiment to experiment, and its levels are not assumed to be drawn from a random variable. One example is sex, male or female: there are usually only two alternatives. Another example might be drug type: I might be interested in the effects of Prozac versus control, not the effects of some new, unobserved, randomly selected drug. So in that case, the fixed effects model is appropriate.

Let's look at random effects now. Typical random effects are those whose levels are assumed to be sampled at random from a population. So the quintessential thing that should be modeled as a random effect is subject, or participant. We observe some subjects, but we assume that we selected those subjects at random from the population.

Another example is word, in experiments with verbal materials. Let's say you're studying the effect of positive and negative words, and you choose only one positive word, which is "puppies", and one negative word, which is "murder". And you do a scan where you compare puppies and murder. Well, they differ in positive versus negative, but they also differ in many other features as well. So you might first include a full population of words, many kinds of positive and negative words. And then you might want to model word as a random effect, so that you can generalize to unobserved words as well, drawing from the population of negative or positive words.

One of the key points is that in a mixed effects model, we choose whether to model each effect as fixed or random. So here are the implications of that choice. The variance across the levels of a random effect is included as a source of error in the model. So when I construct a t-statistic, I take the estimate of the effect and divide it by its standard error, and that standard error includes variability from individual to individual, or from level to level of anything I've modeled as a random effect. And this allows us to generalize to unobserved levels. If an effect is treated as fixed, the error terms in the model don't include variability across those levels, so we can't generalize to unobserved levels in that case.

The upshot of this is that if I treat subject as fixed, I cannot generalize to new subjects, which is something that we virtually always want to do in science. It's hard to imagine a case where we don't want to generalize to other people besides the ones we actually included in our study; that's science.

So this is a group analysis using the summary statistics approach, which is a simple kind of random effects model, and it's the one that's used most of the time. On the left, what you see is a first-level analysis, which is a GLM within each person; I take that forward to define a contrast and come up with a contrast image for each person. Then I go to the second subject and repeat that. Now I take the contrast images from all of those individual people and put them into a second-level design matrix. What you see here is an image of what that design matrix looks like in this case, and it looks like a white square because it's a constant: it's all values of 1, and that's just a one-sample t-test. So I conduct that, then I can get a group result, and then I make inferences.
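The pipeline just described can be sketched in a few lines. This is a toy version at a single voxel with simulated data (dimensions and effect sizes are made up), not any package's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_t = 12, 80

# First-level design: intercept plus a hypothetical on/off block regressor
onoff = np.tile(np.repeat([0.0, 1.0], 10), 4)
X = np.column_stack([np.ones(n_t), onoff])
c = np.array([0.0, 1.0])          # contrast picking out the task effect

# First level: fit the GLM within each subject, keep one contrast value each
con_vals = []
for _ in range(n_sub):
    true_slope = 0.8 + 0.3 * rng.standard_normal()   # subject's true response
    y = X @ np.array([10.0, true_slope]) + rng.standard_normal(n_t)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # within-subject GLM fit
    con_vals.append(c @ beta_hat)                    # "contrast image" at one voxel
con_vals = np.array(con_vals)

# Second level: the design matrix is a column of ones, i.e. a one-sample t-test
t_stat = con_vals.mean() / (con_vals.std(ddof=1) / np.sqrt(n_sub))
```

Because the second-level error is the spread of the contrast values across subjects, the resulting t-test treats subject as a random effect.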

Now, this is the most common approach because it has several important advantages. One, it's easy to do. It's easy to add new subjects or participants later and rerun the group analysis, for example. It's optimal if the within-person precisions are all equal for every person, which implies that the design matrices are identical, their efficiency is identical (we'll talk more about that in future lectures), and the errors are all equal. It's fairly robust to violations of some of those assumptions in terms of false positives, but we can lose sensitivity in some cases.

So that's a schematic overview of what the random effects analysis looks like. Here's a schematic of what a fixed effects analysis looks like, which is the wrong approach. This is what I call the grand GLM approach, which was used in the very early days of neuroimaging. It's a GLM on data that are concatenated across subjects. What you see here is an image of the design matrix, where I've got this blocked on-off design modeled separately for each subject, so every subject gets one estimate for their individual slope. I've got the intercepts for each subject, and I've got some nuisance covariates, some high-pass filtering covariates, for each subject. And there are three example subjects here, so I've concatenated all of them.

So I'm making a number of assumptions here. Every subject gets their own slope, but when I calculate the error in that model, I'm going to average over the subjects and compare that to the error within subject, the error on the time series only. So this assumes that the only source of error is within-person scanner noise, and that's not accurate. This tests the mean effect against that within-subject error, and it doesn't account for the individual differences at all. So even though I'm coming up with one estimate of the slope per subject, that doesn't get reflected in the error term, and I'm not making inferences that can be used to generalize to a population.
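The difference between the two error terms shows up in a toy simulation (all numbers hypothetical): per-subject slopes are estimated the same way in both analyses, but the fixed-effects standard error pools only the time-series residuals, while the random-effects standard error uses the spread of the slopes themselves.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub, n_t = 10, 60
sigma_b, sigma_w = 0.5, 1.0   # between- and within-subject SDs (made up)

onoff = np.tile(np.repeat([0.0, 1.0], 10), 3)   # on/off block regressor
subject_effects = 1.0 + sigma_b * rng.standard_normal(n_sub)
Y = (subject_effects[:, None] * onoff
     + sigma_w * rng.standard_normal((n_sub, n_t)))

# Per-subject slope estimates (simple regression on the on/off regressor)
xc = onoff - onoff.mean()
slopes = (Y - Y.mean(axis=1, keepdims=True)) @ xc / (xc @ xc)

# Random-effects SE: variability of the slopes across subjects
se_random = slopes.std(ddof=1) / np.sqrt(n_sub)

# Fixed-effects SE: pooled within-subject residual error only, ignoring
# the subject-to-subject spread of the true slopes
resid = Y - Y.mean(axis=1, keepdims=True) - slopes[:, None] * xc
sigma_w_hat2 = (resid ** 2).sum() / (n_sub * (n_t - 2))
se_fixed = np.sqrt(sigma_w_hat2 / (n_sub * (xc @ xc)))
```

With any appreciable between-subject variability, `se_fixed` understates the uncertainty of the group effect, which is exactly why the grand GLM produces overconfident inferences.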

That's the end of this module. In the next module, Martin's going to talk more about the multi-level GLM from a statistical or structural modeling perspective.

Â Coursera provides universal access to the worldâ€™s best education,
partnering with top universities and organizations to offer courses online.