This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”


From the course by Johns Hopkins University

Principles of fMRI 2



From the lesson

Week 2

This week we will continue with advanced experimental design, and also discuss advanced GLM modeling.

- Martin Lindquist, PhD, MSc, Professor of Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

In this module, we're going to look at parametric modulation,

which is a simple but very effective way of enhancing an experimental design to improve the psychological specificity of our brain-behavior relationships.

So let's review: we've previously talked about design building and convolution for multiple predictors with discrete event types. So this would be four discrete event types, A, B, C, and D, convolved to form the corresponding design matrix.

We talked about that and now we'll extend this to talk about parametric modulation.

So what is a parametric modulator? It's a test for a linear relationship between brain activity and a performance, behavioral, or psychological variable of interest.

For example, reaction time or performance on a memory test. This lets us model how brain activity varies across multiple levels of a variable like performance, and that can provide stronger evidence for brain-performance relationships.

So let's look at two cases that are some of my older favorites from the literature.

And on the left, what you see is a study from Adrian Owen's group where they're looking at the Tower of London task and task complexity. And what you're seeing here is CBF in the dorsolateral prefrontal cortex increasing across five levels of difficulty in that Tower of London task, which is an executive function test.

And what's nice is that you could compare a complex task to rest or to a very simple task, but there might be lots of reasons those two conditions differ in activity levels: frustration will go up, other kinds of autonomic arousal will happen.

Many different kinds of changes, but what they're showing here is a linear

relationship across the different levels of complexity,

which gives us more confidence that this area of the brain, the DLPFC, is actually related to performance in some way.

On the right is another example, and this is now the ventromedial prefrontal cortex, which is often found to correlate with value. This is from Antonio Rangel's lab, and what they've done here is look at the relationship between how much people are willing to pay for an item, like a hat or a cup or a mug, and the brain activity in the vmPFC.

And they've broken it up into five different categories, and what they're showing here is a linear relationship across levels, from "no, I don't want to pay much money for this thing" to "I'd like to buy it for a lot of money."

And that's a nice linear relationship again,

increasing confidence that the vmPFC is really tracking value.

So let's look at an example of how the parametric modulation model works.

So here's some idealized data and there's both an average effect and

a performance related effect.

Let's say our modulator is reaction time, RT.

So we take the RT on each trial and

we're going to use that to modulate the amplitude of our stick function.

So that's what you see here.

So the longer the line, the longer the reaction time on that trial.

Now when I construct my model predictors in the parametric modulator framework,

I'm going to construct two predictors.

The first predictor captures the average response and is constant across trials and

the second predictor captures the parametric modulation effect,

which is the variation around the average response that's linearly related to

the reaction time or the amplitude of the modulator.

Those two functions combine to fit the response as well as they can.

So as we see here, the fitted response looks really nice and tracks the overall activity closely, even though neither regressor looks exactly like the fitted response itself. And this illustrates that it's okay that the parametric modulator rises above and dips below zero.

It doesn't look like the reaction time anymore, but what it's doing is,

it's capturing performance related variation around the average response.

So, it's the relative rise and fall above and below the average that it's capturing.

So it's okay if it's below zero, it's still doing what it's supposed to do.
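To make this concrete, here's a minimal sketch of how those two predictors could be built. The onsets, reaction times, and gamma-shaped HRF here are all illustrative assumptions, not values from the lecture, and the HRF is a crude approximation rather than SPM's canonical function.

```python
import numpy as np

TR = 1.0                                   # assumed repetition time (s)
n_scans = 60
onsets = np.array([5, 15, 25, 35, 45])     # assumed trial onsets (scan index)
rts = np.array([0.9, 1.4, 0.7, 1.8, 1.2])  # assumed reaction times (s)

def hrf(t):
    # Crude gamma-shaped hemodynamic response, peaking around 5 s.
    return (t ** 5) * np.exp(-t) / 120.0

h = hrf(np.arange(0, 25, TR))

# Stick functions: constant amplitude for the average response,
# mean-centered RT amplitude for the parametric modulator.
avg_sticks = np.zeros(n_scans)
mod_sticks = np.zeros(n_scans)
avg_sticks[onsets] = 1.0
mod_sticks[onsets] = rts - rts.mean()   # centering makes the modulator capture
                                        # variation around the average response

avg_reg = np.convolve(avg_sticks, h)[:n_scans]
mod_reg = np.convolve(mod_sticks, h)[:n_scans]

# Design matrix: average regressor, modulator regressor, intercept.
X = np.column_stack([avg_reg, mod_reg, np.ones(n_scans)])
```

Because the modulator amplitudes are mean-centered, they sum to zero across trials, which is exactly why the modulator column dips below zero without that being a problem.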

So here are some key properties of parametric modulator models.

One is that the standard regressor captures the average event-related

response.

And secondly, as I said,

the parametric modulator regressor captures variation around the mean.

And that means it's orthogonal to the standard regressor, so it's essentially parsing the overall fit into the average and the modulated parts, which is what we want it to do.

It's also extendable, so I can add linear modulators for time, for example,

across trials and this is really useful for assessing habituation or fatigue or

practice effects or performance related changes across time.

We can also add other basis functions to capture some non-linear effects

across time.

For example, we could add a quadratic function to the linear one, or we could add an exponential modulator as well.

And that might be particularly useful, because reaction time and other kinds of time-dependent effects can often vary exponentially with time. The power law of practice posits a very similar relationship: an exponential decrease across time with increasing practice.
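As a sketch of those time modulators, here's how linear, quadratic, and exponential modulation vectors could be constructed; the trial count and decay constant are arbitrary illustrative choices:

```python
import numpy as np

n_trials = 20
trial_idx = np.arange(n_trials, dtype=float)

# Mean-centered modulators of trial number: each captures variation
# around the average response rather than the average itself.
linear = trial_idx - trial_idx.mean()            # linear drift across trials
quad = linear ** 2 - (linear ** 2).mean()        # quadratic trend, centered
expo = np.exp(-trial_idx / 5.0)                  # assumed decay constant of 5 trials
expo -= expo.mean()

# Each column would scale the event sticks for its trial, exactly like
# the RT modulator, before convolution with the HRF.
mods = np.column_stack([linear, quad, expo])
```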

Finally, the implementation of this is all done by adding

regressors to the GLM design matrix.

So, it really fits nicely into the linear modeling framework.
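Since everything reduces to columns of the design matrix, estimation is just ordinary least squares. A toy illustration with synthetic data (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Fake design: average regressor, modulator regressor, intercept.
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])
true_beta = np.array([2.0, 0.5, 1.0])      # assumed true effects
y = X @ true_beta + 0.1 * rng.normal(size=n)

# Ordinary least-squares fit of the GLM.
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
```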

Here are some cautions.

A standard approach is to modulate only the amplitude of a brief event (stick function) or an epoch. This is okay for many purposes, but there's no guarantee that this is the most accurate or best model, and there are some alternatives.

One example: if you're studying reaction time, it's also possible to modulate the duration of an epoch, with some tricky manipulations, all within the GLM framework.

As with other regressors, if you have multiple basis functions,

including basis sets for your event or quadratic or other non-linear modulators,

it may not be straightforward to do a t-test that captures that parametric

modulation effect altogether, so we have to be careful about that.

Linear modulators are really convenient if you want to assess the effect with a t-test and do a t-test at the group level.

If you enter multiple modulators, be careful.

In some software, such as the very popular SPM, modulators entered after the first one are orthogonalized with respect to earlier ones. So they're only allowed to explain variance that's not captured by the first modulator, and this is not a standard multiple regression in which the effects compete to explain variance.

It's actually a hierarchical regression, in the sense that the modulators are entered stepwise: the first modulator is entered first and allowed to explain as much variance as it can, and then the subsequent modulators are entered.

So if you don't like this property,

you can change that by modifying the code if you desire.
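Here's a small from-scratch re-implementation of that stepwise orthogonalization idea; this is a sketch of the behavior described, not SPM's actual code:

```python
import numpy as np

def orthogonalize_serial(mods):
    """Orthogonalize each column against all earlier columns, in entry order.

    mods: (n_trials, k) array of modulator columns.
    Returns a copy where column j keeps only variance unexplained
    by columns 0..j-1 (the first column is left untouched).
    """
    out = mods.astype(float).copy()
    for j in range(1, out.shape[1]):
        prev = out[:, :j]
        # Regress column j on the earlier columns, subtract the fitted part.
        beta = np.linalg.lstsq(prev, out[:, j], rcond=None)[0]
        out[:, j] -= prev @ beta
    return out

rng = np.random.default_rng(0)
m = rng.normal(size=(30, 3))        # three correlated fake modulators
m_orth = orthogonalize_serial(m)
```

After this procedure the first modulator is unchanged while later ones are mutually orthogonal, which is why the order in which you enter modulators matters.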

And finally, let's look at the image of a design matrix with a really complicated set of regressors and modulators, which puts all the pieces together.

So now we've got two trial types, A and B.

But now we've broken those two trial types into 18 regressors and

we've also added some nuisance covariates, maybe related to head motion.

So the first nine regressors model trial type A and

the second nine regressors model trial type B.

And now if you just look at the trial type A regressors, the first three columns model the average response to trial type A, but they do it with three basis functions.

So this is the three-parameter basis that we talked about before: the canonical HRF, its temporal derivative, and the dispersion derivative.

Those together can capture the average response of trial type A with some

flexibility in the shape.

The next three capture the linear modulation by time, the effect of time or trial number on the average response, and you can see that there. Again, we have to create a regressor for each of the three basis functions. Then the final three capture the linear modulation by performance.

For example, reaction time.

Again, with one modulator per basis function.

So that's a little bit about unpacking how it might actually look in your design

matrix.
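A sketch of how those 18 task regressors plus motion nuisance columns could be assembled; the onsets, RTs, scan count, and gamma-based basis functions here are all illustrative assumptions, with finite differences standing in for SPM's analytic derivatives:

```python
import numpy as np

TR, n_scans = 1.0, 120
t = np.arange(0.0, 25.0, TR)

def hrf(t, width=1.0):
    # Crude gamma-shaped response; 'width' mimics a dispersion parameter.
    s = t / width
    return (s ** 5) * np.exp(-s) / (120.0 * width)

# Three-function basis: canonical shape, temporal derivative,
# dispersion derivative (both derivatives via finite differences).
eps = 1e-3
canonical = hrf(t)
temporal_deriv = (hrf(t + eps) - canonical) / eps
dispersion_deriv = (hrf(t, 1.0 + eps) - canonical) / eps
basis = [canonical, temporal_deriv, dispersion_deriv]

def regressors(onsets, rts):
    """Nine columns for one trial type: {average, time, RT} x 3 basis fns."""
    cols = []
    trial_no = np.arange(len(onsets), dtype=float)
    for amp in (np.ones(len(onsets)),          # average response
                trial_no - trial_no.mean(),    # linear modulation by time
                rts - rts.mean()):             # linear modulation by RT
        sticks = np.zeros(n_scans)
        sticks[onsets] = amp
        for b in basis:                        # one regressor per basis fn
            cols.append(np.convolve(sticks, b)[:n_scans])
    return np.column_stack(cols)

rng = np.random.default_rng(1)
onsets_A = np.array([4, 20, 36, 52, 68, 84])   # assumed onsets (scan index)
onsets_B = np.array([12, 28, 44, 60, 76, 92])
rts_A = rng.uniform(0.5, 2.0, 6)               # assumed reaction times (s)
rts_B = rng.uniform(0.5, 2.0, 6)

motion = rng.normal(size=(n_scans, 6))         # placeholder motion nuisance
X = np.column_stack([regressors(onsets_A, rts_A),
                     regressors(onsets_B, rts_B),
                     motion])                   # 9 + 9 + 6 = 24 columns
```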

That's the end of this section.

Thank you.
