This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”

From the course by Johns Hopkins University

Principles of fMRI 2

From the lesson

Week 2

This week we will continue with advanced experimental design, and also discuss advanced GLM modeling.

- Martin Lindquist, PhD, MSc, Professor of Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

The next principle is the principle of randomization.

And the idea is, we'd like to randomize event-related designs.

And we'd like to randomize them individually for each participant so

we don't have the same order for everybody.

In addition, if we can't randomize the order of events, we can use catch trials.

And what that can help us do is decorrelate predictors in our

design matrix.

So as a very fundamental principle, making the causal inference that it's the experimental manipulations I've done that are causing the change in brain activity I've observed requires randomization to avoid confounds.

And even if I randomize the design but use the same design for every person, there are systematic differences, due to imperfect randomization, finite sequences of trials, and the length of the scan, that will cause confounds.

And so I don't recommend using the same order of events for everybody.

Another example is a case where we have temporal dependence.

If trial type B always follows right after trial type A, then the brain may show spurious A versus B differences for several reasons.

One is neural habituation.

If I always have A and then B, then even if they would otherwise produce the same neural activity, the response to B may simply be reduced because of neural habituation or fatigue, and it'll look like it's lower in that experiment.

Second, vascular elasticity, which is a kind of non-linearity.

So, if A produces a response, the blood vessels dilate.

When B happens, the blood vessels are already dilated so they can't go further.

And third, psychological changes caused by order are quite common.

So, three confounds.

Let's look at examples of such a design and what we might do about it.

So, this is a working memory design in which I'm asked to

remember a series of words.

So, here I'm remembering the words, and what I'm interested in is how those words are maintained during the delay period, where no stimuli are presented.

Finally, I'm given a series of probes that probe my memory.

Now this design, let's say I'm going to model the encoding period and

the delay period, in blue and red respectively.

Well, encoding always precedes delay.

It's not possible to randomize the order of those conditions.

So what do I do?

First, let's look at a bad idea.

So here's the design where there's always dependency.

And now I'm just using a standard GLM.

And as you'll notice, the two conditions are now perfectly confounded in time.

And you can look at the correlation and you say, okay, maybe it's okay.

Correlation is only 0.45.

So you can get a response.

You can estimate the response for those two regressors potentially.

But this makes strong assumptions; in particular, it relies strongly on the assumption that the HRF shape is exactly correct.

That is not likely to be true in practice most of the time.

And if it's not, I'll get a wrong answer without knowing it.

I'm going to interpret encoding-related activity as delay-related, and vice versa, simply because the hemodynamics aren't modeled correctly.

And so I'm getting the wrong partitioning of variance.
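To make this concrete, here is a minimal sketch of how one might check the correlation between canonical-HRF regressors for two conditions that always occur in the same order. All timings here are hypothetical (a 2 s encoding period, a 4 s delay starting right after it, one trial every 20 s, TR = 1 s), and the gamma-shaped HRF is a stand-in for a real package's canonical double gamma:

```python
import numpy as np

TR = 1.0                       # hypothetical repetition time (seconds)
n_scans = 400
t = np.arange(0, 30, TR)
h = (t ** 5) * np.exp(-t)      # simple gamma-shaped HRF (stand-in for the canonical HRF)
h /= h.sum()

def make_regressor(onsets, duration, n_scans, h):
    """Boxcar of `duration` seconds at each onset, convolved with the HRF."""
    u = np.zeros(n_scans)
    for on in onsets:
        u[int(on):int(on) + int(duration)] = 1.0
    return np.convolve(u, h)[:n_scans]

trial_onsets = np.arange(0, n_scans - 30, 20)           # one trial every 20 s
enc = make_regressor(trial_onsets, 2, n_scans, h)       # 2 s encoding period
dly = make_regressor(trial_onsets + 2, 4, n_scans, h)   # delay always right after encoding

r = np.corrcoef(enc, dly)[0, 1]
print(f"encoding/delay regressor correlation: r = {r:.2f}")
```

The correlation comes out moderate rather than extreme, which is exactly the trap: the numbers can look "okay" even though the two conditions are structurally inseparable in time.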

So here's a better model: I'd like to model the responses flexibly, allowing the HRF shape to vary using an FIR (finite impulse response) model.

However, using this model reveals the fundamental

problems with this kind of design.

And the problems show up as correlations in the design matrix.

So, here we have a fixed inter-trial interval of 20 seconds, so the regressors for trial type A are correlated with themselves shifted over in time.

That's collinearity. And in addition, what's estimated and linked to encoding event A is correlated with what's happening for B at earlier time points.

So we get cross-correlations across the different regressors for A and for B.

And the maximum correlation here is about 0.93, but that's just because of how we sampled the HRF; in fact, there's really perfect dependence across the columns of A and B.

And what that means is: take a time point sometime during the delay period. I can't tell whether that activity is related to the delay, or to residual effects of what happened at encoding.

Now let's look at a better design still.

So this is a catch trial design.

So what's happening here is that on half the trials I've got encoding only: I don't present the delay period, and the trial is over.

And what that means is now the B event, red,

follows the blue only on half the trials.

Now when I build that FIR design matrix, I've got two good features.

First of all, I've got jitter, variability in the onsets, and that reduces the correlations among the time-shifted regressors within trial type A.

So that takes care of that problem.

And secondly, these catch trials, or partial trials, reduce the correlations

between trial type A and B, in this case when they're dependent in time.

So now the maximum correlation in my FIR model is only 0.47.

So that's better.

That might give me a good shot at estimating the activity related to

encoding and delay.
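A small simulation along these lines shows the same qualitative pattern. The timings are made up (TR = 1 s, a 24-bin FIR window, the delay starting 4 s after encoding, ITIs jittered among 16, 20, and 24 s in the catch-trial version, with the delay presented on exactly half the trials), so the exact correlations will not match the lecture's numbers, but the ordering does:

```python
import numpy as np

rng = np.random.default_rng(0)

def fir_columns(onsets, n_scans, n_bins):
    """One indicator column per post-onset time bin: a minimal FIR basis (TR = 1 s)."""
    X = np.zeros((n_scans, n_bins))
    for on in onsets:
        for b in range(n_bins):
            if on + b < n_scans:
                X[on + b, b] = 1.0
    return X

def max_offdiag_corr(X):
    """Largest absolute correlation between any two distinct design columns."""
    C = np.corrcoef(X.T)
    np.fill_diagonal(C, 0.0)
    return np.nanmax(np.abs(C))

n_trials, n_bins = 30, 24

# Fixed design: a trial every 20 s, and the delay (B) always starts 4 s after encoding (A).
fixed_onsets = np.arange(n_trials) * 20
n_scans = fixed_onsets[-1] + 40
X_fixed = np.hstack([fir_columns(fixed_onsets, n_scans, n_bins),
                     fir_columns(fixed_onsets + 4, n_scans, n_bins)])

# Catch-trial design: jittered inter-trial intervals, delay on only half the trials.
itis = rng.choice([16, 20, 24], size=n_trials)
jit_onsets = np.cumsum(itis) - itis[0]
half_idx = np.sort(rng.permutation(n_trials)[:n_trials // 2])
delay_onsets = jit_onsets[half_idx] + 4
n_scans_j = jit_onsets[-1] + 40
X_catch = np.hstack([fir_columns(jit_onsets, n_scans_j, n_bins),
                     fir_columns(delay_onsets, n_scans_j, n_bins)])

print("fixed design max correlation:", max_offdiag_corr(X_fixed))   # essentially 1
print("catch-trial max correlation:", max_offdiag_corr(X_catch))    # substantially lower
```

In the fixed design some A and B columns are literally identical (the bin 4 s into encoding is the bin 0 s into the delay), so the maximum correlation is 1; jitter plus catch trials breaks that identity.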

So the next principle is the principle of non-linearity, or

issues with non-linearity.

The idea is we like to avoid non-linear interactions

among events by spacing them at least a few seconds, three or four seconds, apart.

And also avoid systematic differences in temporal grouping across different

event types.

You don't want a design where all the trial type As are clustered together and all the trial type Bs are very sparse, because differences in the effects of non-linearity will systematically confound comparisons across trial types.

So, historically, the idea has been that the BOLD response is roughly linear when stimuli are presented repeatedly in time.

So, if you have an event at one time, another two seconds later, and another two seconds after that, you get roughly the same response to each of those events.

However, the operative word here is roughly.

So Miezin, Buckner, and colleagues showed that if you present events of the same type five seconds apart, the response to the second event is about 10% smaller.

That's a nonlinear saturation effect that's due to a combination of neural and

vascular factors.

Either way, they're not things that we can easily account for in the GLM.

Now, I'll show you some data from a study that we did some years back.

And it was designed to analyze the linearity or non-linearity of the response.

So, what we've got is a series of events.

One flashing checkerboard; or two checkerboards, one second apart; or a series of five, six, ten, or eleven checkerboards, all one second apart.

We'll look in the visual cortex where we know we should see responses.

And on the left here, you see the actual data from the sequences of one, two, five, six, ten, and eleven flashes.

So you can see we have really nice data, nice responses.

On the right is what we predict from the linear model.

And I've normalized them so that the response to one flashing checkerboard,

one event, is the same.

But the linear model predicts that the response to

two checkerboards is about twice as high as one.

What actually happens is, it's only one and a half times as high.

So, that's a substantial nonlinear effect that we're not

going to be able to account for easily in the GLM.

This happened in part because these events were one second apart.

If you space them out three or

four seconds apart, this nonlinear effect will be much reduced.

And your linear model will fit better.

So that's the first line of defense, is just to avoid very rapid presentations.
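As a toy illustration of that saturation effect (both the gamma-shaped HRF and the tanh-style ceiling here are made up; they just mimic a response that can't keep doubling):

```python
import numpy as np

t = np.arange(0, 30, 0.5)
h = (t ** 5) * np.exp(-t)      # simple gamma-shaped HRF (illustrative)
h /= h.max()

one_event = h
# second flash 1 s (2 samples) after the first; pure linear superposition:
two_events_linear = h + np.concatenate([np.zeros(2), h])[: len(h)]

print(two_events_linear.max() / one_event.max())   # linear prediction: close to 2x

def saturate(x, ceiling=1.6):
    """Toy saturating nonlinearity (hypothetical ceiling): responses can't keep doubling."""
    return ceiling * np.tanh(x / ceiling)

print(saturate(two_events_linear).max() / saturate(one_event).max())   # closer to ~1.5x
```

A GLM built from linear superposition of HRFs will predict the first number; a brain with a ceiling produces something like the second, which is the mismatch described above.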

And having a good model of linearity can also help us to understand

efficiency in a deeper way.

So what we are seeing here is a plot of efficiency as a function of the inter-stimulus interval: how long between events, on average, for a random event-related design.

Zero would mean, or close to zero, presenting events very, very rapidly.

And with a truly linear time-invariant system, what happens is that as you present events very rapidly, the contrast efficiency goes up without bound.

So, if I could present trial type A once every millisecond, a thousand times in a second, I would get a thousand times the response compared to B.

Then you'd have a huge efficiency.

Obviously the brain doesn't really work like that.

So having a sensible model of non-linearity is critical for understanding what the optimal presentation rate is.

So now on the right here, what we're looking at is a more realistic case, with a simple model of non-linearity that won't let the response go up too high.

And now what we see is something useful.

What we see is that if we're looking at a [1 -1] contrast, that's a task A versus B comparison, then the best design has an inter-stimulus interval of about two seconds, a stimulus every two seconds or so, and no rest.

If all I care about is comparing A versus B, I should fill my task with As and Bs.

I don't need to jitter or have any rest, and that's optimal.

Once I start including rest intervals,

which creates jitter in the design, then the efficiency only goes down.
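Under a purely linear model you can see this behavior in a small simulation. Everything here is hypothetical (TR = 1 s, a fixed scan length, random A/B assignment, a gamma-shaped HRF), and efficiency is computed as 1 / (c' (X'X)^+ c) for the A-minus-B contrast:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.arange(0, 30)
h = (t ** 5) * np.exp(-t)      # simple gamma-shaped HRF (illustrative)
h /= h.sum()

def design(isi, scan_len=480):
    """Random A/B event-related design with one event every `isi` seconds (TR = 1 s)."""
    onsets = np.arange(0, scan_len - 30, isi)
    labels = rng.random(len(onsets)) < 0.5          # each event randomly assigned to A or B
    X = np.zeros((scan_len, 2))
    for on, is_a in zip(onsets, labels):
        X[on, 0 if is_a else 1] = 1.0
    for j in range(2):
        X[:, j] = np.convolve(X[:, j], h)[:scan_len]
    return np.column_stack([X, np.ones(scan_len)])  # intercept column

def efficiency(X, c):
    """Contrast efficiency: 1 / (c' (X'X)^+ c); higher means a better-estimated contrast."""
    return 1.0 / float(c @ np.linalg.pinv(X.T @ X) @ c)

c_diff = np.array([1.0, -1.0, 0.0])                 # A versus B contrast
for isi in (2, 4, 8, 16):
    print(isi, efficiency(design(isi), c_diff))
```

With a fixed scan length, shortening the ISI keeps adding events, so the A-minus-B efficiency keeps climbing; only a nonlinearity model puts a realistic ceiling on that.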

The last principle is optimization.

Optimize your design choices with specific study goals and constraints in mind.

They could be psychological, neural or statistical.

In particular, we'll look at specifying a series of things.

One is, specify a set of contrasts that you care about.

If it's an ANOVA design, a two-by-two ANOVA, you might specify that you care about the main effects and the interaction.

That's three contrasts.

You can specify the relative importance of each contrast and

think about which effects you really care about detecting.

You can also think about the desired high-pass filtering cutoff,

how much noise do you want to remove, and so

how much do you need to keep the design out of that noise range.

And also, you can think about the desired model that you would like to run or use.

So do you want to use a canonical HRF or would you like to use an FIR model and

try to estimate the shape of the response?

So it depends on your goals.

And now we'll just look at the first principle,

which is thinking about which contrast you care about.

So let's say we're doing a study about famous versus non-famous faces.

That's trial type A and B.

And what I really care about is the difference.

All I care about is the difference, A versus B.

In that case, what I told you a moment ago applies.

The best design has no rest and pretty rapid presentations of events.

A face every two seconds or so.

But now let's consider that you also want to tell whether the fusiform

face area is activated by faces on average.

That's a different contrast.

And that contrast is the average of the famous and

non-famous faces against the implicit baseline or rest.

Now, in this case, the optimal design is different.

Now the optimal design has 50% rest intervals.

The trials are still rapid, so you can present a trial every couple of seconds, but half the trials should be faces and half should be rest if you'd like to estimate the face versus rest difference as well.

So if you care about both of those contrasts, you have to optimize for some combination of those two effects.
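One way to see this trade-off is to score hypothetical designs on both contrasts at once. In this sketch (made-up parameters: TR = 1 s, a trial slot every 2 s, a gamma-shaped HRF), a design with no rest wins for the A-minus-B contrast, while a design with 50% rest wins for the faces-versus-baseline contrast:

```python
import numpy as np

rng = np.random.default_rng(7)

t = np.arange(0, 30)
h = (t ** 5) * np.exp(-t)      # simple gamma-shaped HRF (illustrative)
h /= h.sum()

def design(p_rest, scan_len=480, isi=2):
    """Trial slot every `isi` s; each slot is rest with probability p_rest, else A or B."""
    X = np.zeros((scan_len, 2))
    for on in range(0, scan_len - 30, isi):
        u = rng.random()
        if u < p_rest:
            continue                                # rest slot: no event
        X[on, 0 if u < p_rest + (1 - p_rest) / 2 else 1] = 1.0
    for j in range(2):
        X[:, j] = np.convolve(X[:, j], h)[:scan_len]
    return np.column_stack([X, np.ones(scan_len)])  # intercept column

def efficiency(X, c):
    """Contrast efficiency: 1 / (c' (X'X)^+ c)."""
    return 1.0 / float(c @ np.linalg.pinv(X.T @ X) @ c)

c_diff = np.array([1.0, -1.0, 0.0])   # famous vs non-famous faces
c_base = np.array([0.5, 0.5, 0.0])    # all faces vs implicit baseline

X_norest, X_half = design(0.0), design(0.5)
print("A vs B:            no rest =", efficiency(X_norest, c_diff),
      " 50% rest =", efficiency(X_half, c_diff))
print("faces vs baseline: no rest =", efficiency(X_norest, c_base),
      " 50% rest =", efficiency(X_half, c_base))
```

With no rest the combined faces regressor is nearly constant and so nearly collinear with the intercept, which is why the baseline contrast needs rest intervals to be estimable with any precision.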

And by the way, what exactly the peak ISI is here,

really depends on my non-linearity model.

So in practice, I might be a little bit less aggressive and

not present events quite so fast together.

I'm thinking two seconds or so is pretty good, but

I might push it out a little bit just to avoid non-linearity in case

the non-linearity is not quite what I observed in my experiment.

So now we talked about eight principles of fMRI design.

We talked about consideration of sample size and scan time.

We talked about how many conditions to include in your experiment, and

how to group the events and those conditions together, and organize them.

We talked about temporal frequencies, and the need to use designs with the right temporal frequencies, not too short and not too long; 18- to 20-second blocks are optimal.

We talked about the need for randomization in designs and we talked about what to

do in some cases where you can't actually randomize the order of events.

And we talked about the effects and implications of non-linearity and

some principles underlying design optimization.

So that's the end of this module.

In the next module,

we're going to continue our work on design optimization and talk about computer-aided

design using a genetic algorithm to put some of these pieces together.
