This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1


From the lesson

Week 3

This week we will discuss the General Linear Model (GLM).

- Martin Lindquist, PhD, MSc, Professor of Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

In this module, we're going to go into more detail on building GLM models.

This is design specification.

First, let's review some key concepts from before.

We talked about the structural model for the GLM, Y = X times beta plus error,

where the betas are the model parameters that need to be estimated.

X is the design matrix that we are going to specify in advance.

That's what we're building today.
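As a refresher, the structural model Y = X beta + error and its least-squares fit can be sketched in a few lines of numpy. Everything here (the block timing, the true betas, the noise level) is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n time points, an intercept column plus one task regressor.
# All names and values are illustrative, not from the course.
n = 100
task = (np.arange(n) // 10) % 2          # alternating off/on blocks
X = np.column_stack([np.ones(n), task])  # design matrix, specified in advance
beta_true = np.array([10.0, 2.0])        # intercept and task amplitude
Y = X @ beta_true + rng.normal(0, 1, n)  # Y = X beta + error

# Ordinary least squares estimate of the betas
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
print(beta_hat)   # close to [10, 2]
```

The second entry of `beta_hat` is the activation parameter estimate for the task regressor, which is what the contrasts later in this module operate on.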

We talked about an overview of the GLM analysis process,

which is a two-level hierarchical model involving design specification (or

model building), estimation, contrast specification, and group analysis.

Then we're ready for anatomical localization and inference.

This is the first-level GLM for

a single-voxel and a single-subject, a basic design matrix.

What we care about here, is the activation parameter estimate, or

the beta, for the task regressor, in a very simple design.

If we go from two conditions of interest to more than two conditions,

we can specify any number of different event types or block types.

Here we've got four, A, B, C, and D; we convolve each of them with an assumed basis

function, and end up with a design matrix.
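The recipe just described, indicator ("stick") functions convolved with an assumed basis function, can be sketched in numpy. The gamma-difference HRF and the onset times below are illustrative stand-ins, not the canonical HRF from any particular package:

```python
import numpy as np
from math import gamma

# Simplified double-gamma HRF (illustrative shape only)
TR = 1.0
t = np.arange(0, 30, TR)

def gamma_pdf(x, shape, scale):
    return x**(shape - 1) * np.exp(-x / scale) / (gamma(shape) * scale**shape)

hrf = gamma_pdf(t, 6, 1) - 0.35 * gamma_pdf(t, 16, 1)
hrf /= hrf.max()

n_scans = 120
# Hypothetical onsets (in scans) for four event types A-D
onsets = {'A': [0, 40, 80], 'B': [10, 50, 90],
          'C': [20, 60, 100], 'D': [30, 70, 110]}

columns = []
for cond in ['A', 'B', 'C', 'D']:
    sticks = np.zeros(n_scans)      # indicator ("stick") function
    sticks[onsets[cond]] = 1
    # Convolve with the assumed basis function, truncate to scan length
    columns.append(np.convolve(sticks, hrf)[:n_scans])

X = np.column_stack([np.ones(n_scans)] + columns)  # intercept + A, B, C, D
print(X.shape)   # (120, 5)
```

Each of the four task columns is one regressor per event type, exactly the one-regressor-per-condition structure used in the rest of this module.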

So let's look now at model building and

we'll look specifically at multiple predictors and at contrasts.

So let's go back to our famous versus non-famous face example.

It's a block design.

What we care about is the difference between famous and non-famous faces.

This is a contrast across those two conditions.

With a block design, one can use a single regressor that captures that difference,

and just build it into a model.

That's what we saw previously.

What happens if we have an event-related design?

Then we have to model each event type separately in the GLM.

Now, we end up with a design matrix that has one regressor for

famous and one regressor for non-famous faces.

In this case, we can flexibly test multiple contrasts on this design matrix.

So we can assess the difference between famous and non-famous faces,

we can test each one separately, or we can assess their average.

These functions are specified by different linear contrasts across those

parameter estimates, or betas.

So what is a contrast?

It's a flexible and powerful tool for testing a hypothesis in a GLM framework.

We'll focus now specifically on the t-contrast, which is a linear combination

of GLM parameters that specifies a single planned comparison.

I can do a t-test on that and

make a statistical inference on whether that contrast value is different from zero.

It's specified by a vector of weights which we'll call C.

So C transposed times beta hat, where

beta hat means the activation parameter estimates, gives me a scalar value.

This is signed and can have negative or positive values under the null hypothesis.

So let's apply that to our famous and non-famous face example here.

So I've got two parameter estimates that I'm interested in.

Beta one for famous, beta two for non-famous.

I can specify a difference contrast, which is 0 for the intercept, 1 for

the famous and -1 for the non-famous faces.

That gives us the famous, non-famous difference.

This contrast specifies the sum or average across the two face types.

So, this essentially gives me face versus rest.

So that's 0, 1 for famous, 1 for non-famous faces.

And we can test a single event, so 0, 1, 0 tests only the famous faces, or

beta one, against the implicit baseline.

And then ask, is there a significant positive or

negative response to famous faces?
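These three contrasts can be sketched directly in numpy. The beta values below are hypothetical, chosen only to show that each contrast vector reduces the parameter estimates to a single scalar:

```python
import numpy as np

# Hypothetical parameter estimates: [intercept, famous, non-famous]
beta_hat = np.array([100.0, 2.5, 1.0])

c_diff = np.array([0, 1, -1])   # famous minus non-famous
c_sum  = np.array([0, 1,  1])   # faces versus implicit baseline
c_fam  = np.array([0, 1,  0])   # famous faces only

for name, c in [('difference', c_diff), ('sum', c_sum), ('famous only', c_fam)]:
    print(name, c @ beta_hat)   # c' beta_hat is a single signed scalar
```

Each scalar would then be divided by its standard error to form the t statistic for that planned comparison.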

So let's generalize this now to the case where you have multiple predictors.

This will be a useful example that we'll carry forward with us in future

lectures as well.

So here, I've got a design with four conditions.

And let's just say this is a memory experiment.

So I've got four word types, A, B, C and D.

And they're grouped into two factors.

Factor 1, we'll call modality or visual versus auditory presentation.

So there are two levels of that factor.

Factor 2 is high versus low imageability.

It turns out that words that are imageable are easier to remember.

So there's two levels of imageability in our example.

And this is an example of a factorial repeated-measures ANOVA design,

to go back to the earlier lecture.

And that's because there are four repeated measures.

We have each of the four trial types sampled within person,

with multiple instances per person.

In this case, we don't have any between-subject predictors yet,

no individual differences, so

I've just got a straight up factorial repeated-measures ANOVA design.

Very typical for fMRI.

So let's look at model building and contrasts with multiple predictors.

We'll specify our indicator function, for four different types of onsets, convolve

it with the basis function, the assumed HRF, then we get the Design Matrix.

So this is exactly the case that we saw previously.

And in general if you're modeling any kind of factorial design in fMRI

you can simply create one regressor or one event type per cell.

Now let's use this to look at contrasts.

So these are my four columns in my design matrix, and

now I'm going to apply contrast weights across those four columns.

I can apply the contrast 1, 1, -1,

-1, which means I'm taking a linear combination that equals

the parameter estimate for A, plus B, minus C, minus D.

And you can see this graphically down here below.

This is a main effect of factor one or visual versus auditory presentation.

Let's look at some rules for T-contrast now and

this can help us elaborate our understanding of contrasts.

So first of all, C can be a matrix.

So it doesn't have to be one contrast value.

It can be several.

And if C is arranged in columns, so

that each column is a contrast vector, those columns are applied independently.

So they don't affect one another.

So each is really a separate test of a separate effect on the data.

So let's look at this contrast matrix.

It's got three columns.

And this contrast matrix corresponds to the main effects and

interaction or the standard ANOVA contrasts.

So let's look at those three columns a little bit more carefully.

The first column is 0.5, 0.5, -0.5, -0.5.

So this reflects the main effect of Factor 1.

So I've got positive weights on A and B, negative weights on C and D.

The second column reflects the main effect of Factor 2.

Now I've got positive weights on A and C, negative weights on B and D.

And finally, the third column reflects the interaction,

which is what I get when I multiply the contrast weights of those two columns

together to create the third column.

And now this column essentially captures the crossover interaction,

so I've got positive weights on A and D, negative weights on B and C.

And this is testing the effect that the effect of Factor 1 depends on the level of

Factor 2, or vice versa.
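The three ANOVA contrast columns can be built and checked in a few lines of numpy. The weights follow the lecture; the betas below are hypothetical, just to show the three columns being applied independently:

```python
import numpy as np

# Contrast weights over conditions [A, B, C, D] (intercept omitted for clarity)
main1 = np.array([0.5,  0.5, -0.5, -0.5])  # main effect of Factor 1
main2 = np.array([0.5, -0.5,  0.5, -0.5])  # main effect of Factor 2
inter = main1 * main2                      # elementwise product -> interaction

# Arrange the contrast vectors as columns of a matrix C
C = np.column_stack([main1, main2, inter])

# Hypothetical parameter estimates for A, B, C, D
beta_hat = np.array([3.0, 1.0, 2.0, 4.0])

# Each column of C is applied independently: C' beta_hat gives three values
print(C.T @ beta_hat)
```

Note that the interaction column comes out as [0.25, -0.25, -0.25, 0.25]: positive weights on A and D, negative on B and C, the crossover pattern described above (the overall scale of the weights is arbitrary).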

I'm not limited to ANOVA contrasts.

I can specify planned tests that make sense based on

whatever hypotheses I might have.

So in this case, the contrast one, -1, 0,

0, is testing a simple effect, or the difference between A and B.

This might be of interest.

In this case,

it's testing high versus low imageability effect from visual items only,

which is a very sensible thing to test, depending on my psychological questions.

This contrast, 2, -1, -1, 0, tests something else.

This tests the magnitude of twice A versus B and C together.

So this may or may not make sense depending on my design,

but this is a valid contrast, and in some cases it might be useful.

Another rule is about scaling.

So the scaling of the weights, the contrast weights,

affects the magnitude of the contrast values, but not the inferences I make.

So it doesn't affect the t values or the p values.

So I can use contrast weights of [1 -1] or [.5 -.5] and

get the same exact statistical result.

So let's look at this case, where I've got the contrast 2,

-1, -1, 0, and this is twice A versus B and C together.

And if I rescale the contrast weights to be 1, -0.5, -0.5,

0, then the contrast value estimates A versus the mean of B and C.

If these were four different sports teams and

I was testing memory effects of football players, hockey players, baseball players,

and basketball players, you can see why you might want to test

football players versus the average of hockey and baseball players, for example.

So depending on what my question is, this can be quite useful.
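The scaling rule can be checked numerically: halving the contrast weights halves the contrast value, but the standard error shrinks by the same factor, so the t statistic is unchanged. A minimal sketch with simulated data (the design and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GLM fit: intercept plus two regressors
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0, 0.5]) + rng.normal(0, 1, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
resid = Y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])   # residual variance estimate

def t_stat(c):
    # t = c' beta_hat / sqrt(sigma2 * c' (X'X)^-1 c)
    return (c @ beta_hat) / np.sqrt(sigma2 * (c @ XtX_inv @ c))

c1 = np.array([0.0, 1.0, -1.0])
c2 = 0.5 * c1
print(t_stat(c1), t_stat(c2))   # identical: the scaling cancels
```

The contrast values `c1 @ beta_hat` and `c2 @ beta_hat` differ by a factor of two, but the inference is exactly the same.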

And here's one tip as we move forward: contrast weights must be the same for

all participants, to keep all participants' estimates on the same scale.

And one way you can get into trouble is if you have missing sessions or runs:

if you use contrast weights of 1s and

-1s across the runs, they may not be on the same scale.

We'll hear more about that in the second course.

Another rule for T-contrasts is that the contrast weights typically sum to zero.

And this makes it so

the expected value of the contrast under the null hypothesis is zero.

And that permits us to do a t-test where zero is the null hypothesis value.

So it's very natural.

Let's consider a contrast C across 4 conditions.

Here's a valid contrast: 2, -1, -1, 0.

The contrast weights sum to 0.

This contrast is not valid: 2, -1, 0, 0.

It tests twice A minus B.

But even if the beta values are random,

I'm going to get some non-zero value for the contrast estimate, and

that means I'm not sure what the null hypothesis value should be for a T-test.
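A small numerical check of this rule, with made-up numbers: under a null where all four condition betas share a common mean, a sum-to-zero contrast evaluates to exactly zero, while a non-sum-to-zero contrast does not:

```python
import numpy as np

# Under the null that all four conditions share the same mean mu,
# the expected contrast value is mu * sum(c).
mu = 5.0
beta_null = np.full(4, mu)

c_valid   = np.array([2, -1, -1, 0])   # sums to 0
c_invalid = np.array([2, -1,  0, 0])   # sums to 1

print(c_valid @ beta_null)     # 0.0 -> t-test against zero is meaningful
print(c_invalid @ beta_null)   # 5.0 -> nonzero even with no real effect
```

This is exactly why the null hypothesis value is ambiguous for the invalid contrast: its expected value depends on the common mean, not just on differences between conditions.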

There is an exception.

So the exception is that I can test the average of one or

more conditions against the implicit baseline.

So if I test the contrast 1, 0, 0, 0, then that contrast value

is testing the significance of the beta value for condition A only.

So essentially whether the response to A is different than zero.

The contrast 1, 1, 0, 0 tests the sum, or the average, of the betas for A and B.

In our example, this would be all visually presented events.

One final note before we move forward.

We looked at model building for multiple predictors,

just like this, and let's just very quickly remind ourselves that

there are a number of assumptions that we have to make.

To build this model, I have to assume that the neural activity function is correct,

little sticks or blocks; we have to assume that the HRF is correct; and

we have to assume a linear time-invariant system.

These three assumptions together allow me to construct the design matrix.

All of these assumptions are wrong to some degree.

"All models are wrong, but

some are useful," as the statistician George Box once said.

And we'll look at how to relax some of these assumptions in certain ways in later

lectures.

That's the end of this module. Thanks!
