This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”


From the course by Johns Hopkins University

Principles of fMRI 2



From the lesson

Week 3

This week we will focus on brain connectivity.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

In this module, we're going to talk about path analysis, and about mediation and moderation in particular, which are two very simple and powerful techniques for inference in the social sciences, and now in neuroscience as well.

So, there are two basic kinds of connectivity that we have been and will be talking about.

The first kind is what we'll call the data reduction approach. Data reduction approaches identify distributed patterns in the data; they represent very complicated 3D, 4D, or 5-dimensional data sets as a simple combination of, in our case, temporal components, spatial components, and then subject-level components.

And these include standard, tried-and-true techniques, like principal components analysis and independent components analysis.

Partial least squares is a variant that takes into account multiple predictors on one side and multiple outcomes on the other.

Multi way algorithms extend beyond two dimensions.

These include things like tensor ICA and PARAFAC, and factor analysis, and INDSCAL, which is individual differences multidimensional scaling.

And finally, there are other techniques from the computational literature, like self-organizing maps.

And often, graph-theoretic approaches are applied to these maps in order to characterize whole-network properties, and that's increasingly popular now.

The second family is the path modeling family, and its utility is to provide inferences about specific connections and associations in the brain, and with physiology, behavior, and experimental variables. And it includes the following things.

Path models are a series of linear models on observed variables.

And structural equation models are very similar; they work on associations, or covariances, across variables.

A psychophysiological interaction (PPI) analysis is a particular kind of path model.

There are also spectral coherence models, which work in the frequency domain instead of the time domain.

And there are also Granger causality models, which can take into account cross-lagged correlations and autocorrelation, and dynamic causal models, which are part of a family of state space models that estimate relationships based on the dynamics, or derivatives, of the responses.

These different approaches can be complementary. So, for example, you can use a data reduction strategy like ICA to come up with components, and then you can fit path models or other kinds of models on those component scores.

I think of this as a toolbox of things that you can do to help understand your data and make interesting inferences.

So, we'll talk about mediation in particular here.

And mediation is really an important concept, and

we can think of it in part as a search for mechanism.

So, let's say I've got a manipulation of expected pain relief, and I'm observing some analgesia, or pain relief.

And we can think of mediating variables as those that might intervene and

explain that effect.

So, this could be caused by less attention to the symptoms, altered valuation of pain-inducing events, or decision biases, or other kinds of processes like that.

When we apply this to brain analysis, it really has the advantage that it can connect experimental design variables, brain measures, and outcomes in a single integrated model.

Otherwise, it would be a series of separate models. So, here I've got an experimental manipulation, measures in a series of brain regions, which are potential mediators of that effect, and an outcome behavior.

So, let's now look in more detail at mediation.

So, with mediation, the idea of what it's testing is: does this variable m explain some of the relationship between x and y? So this is really all about x as the initial variable, y as the outcome, and the initial-variable-to-outcome relationship.

And this can help establish a pathway that connects

x to y through some intervening variable.

So, for example in the brain,

I might apply a stressor that's an experimental variable.

It's x.

m is activity in the anterior cingulate cortex.

And y is increases in heart rate.

We also commonly refer to these paths, the relationships by letters.

So, by convention, we'll say that the x to m relationship is path a.

The m to y relationship, controlling for x is path b.

The original relationship from x to y is path c.

And the relationship between x and y controlling for the mediator is c prime.

Moderation analysis tests something that's different.

What it's testing is: does the level of m, my moderator now, influence the relationship between x and y?

So, this can establish regulation of a pathway or conditionality of a pathway.

And one common way to diagram it is like this.

And in this case, for example, a stressor is the moderator: the relationship between anterior cingulate connectivity and heart rate is different when stress is on versus off.
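In regression terms, moderation is an interaction term: the moderator multiplies the predictor, and a significant interaction coefficient means the x-to-y slope depends on m. Here is a minimal sketch with simulated data (the variable names and effect sizes are made up for illustration, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: stress (x, on/off) moderates the slope of m on y.
n = 400
x = rng.integers(0, 2, n).astype(float)   # stressor on (1) vs. off (0)
m = rng.standard_normal(n)                # e.g., a connectivity measure
# True slope of m on y: 0.2 when stress is off, 0.2 + 0.7 when stress is on.
y = 0.5 * x + 0.2 * m + 0.7 * x * m + rng.standard_normal(n)

# Moderation model: y = b0 + b1*x + b2*m + b3*(x*m); b3 is the moderation effect.
X = np.column_stack([np.ones(n), x, m, x * m])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
b3 = coefs[3]
print(b3)  # estimate of the interaction; near the simulated value, 0.7
```

Testing whether b3 differs from zero is then the test of moderation.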

So, let's now go back and look at testing mediation again, and one way to think about this is as a model comparison.

So, a, b, and c, and c prime are all slopes in standard regression equations.

So, in the first equation, we regress m on x, and in the second equation, we regress y, the outcome, on m and x together.

So, here's our linear modeling framework, y = X*beta + error, and in that kind of notation, this is what it looks like.

I've got a vector of observations, m, the mediator, and a design matrix with an intercept column and the predictor x. Those are each multiplied by their coefficients: beta naught times the intercept, plus path a times x. So the first equation is m = beta0 + a*x + error.

The second equation looks like this. Now I've got the intercept, the x, and the m variables in my design matrix, and I'm estimating c prime and b, each controlling for the other: y = beta0 + c'*x + b*m + error.
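Those two regressions can be fit with ordinary least squares. Here is a minimal numpy sketch on simulated data (the true path values 0.6, 0.4, and 0.3 are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with known paths: a = 0.6, b = 0.4, c' = 0.3.
n = 500
x = rng.standard_normal(n)
m = 0.6 * x + rng.standard_normal(n)
y = 0.3 * x + 0.4 * m + rng.standard_normal(n)
ones = np.ones(n)

# Equation 1: m = beta0 + a*x + error  ->  regress m on [1, x]
coefs1, *_ = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)
a_hat = coefs1[1]

# Equation 2: y = beta0 + c'*x + b*m + error  ->  regress y on [1, x, m]
coefs2, *_ = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)
c_prime_hat, b_hat = coefs2[1], coefs2[2]

print(a_hat, b_hat, c_prime_hat)  # estimates near 0.6, 0.4, 0.3
```

The slopes a, b, and c prime here are exactly the path coefficients in the mediation diagram.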

So, a way to think about this model comparison is in terms of counterfactuals.

So what we're doing is comparing the situation on the right, where there's just x and y, giving us path c, with the situation on the left, where now I have some of that relationship going through m, through this mediating pathway.

So, the counterfactual is: if we could put a clamp on m and prevent it from varying, would the effect of x on y be reduced or absent?

And if so, there's mediation.

And if that's the case, then if we compare path c and

path c prime they should be different.

And so we can test the significance of c minus c prime.

And that's the test of mediation.

And this, with some algebra, actually turns out to be the same as testing the product of the path coefficients, a and b. So, we're really testing the a times b product in this case.
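For ordinary least squares estimates in linear models, this equivalence is exact: c minus c prime equals a times b algebraically, not just approximately. A quick numerical check on simulated data (all values here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)
y = 0.3 * x + 0.6 * m + rng.standard_normal(n)
ones = np.ones(n)

def slope(design, target, col):
    """OLS fit; return the coefficient in the given column."""
    coefs, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coefs[col]

a = slope(np.column_stack([ones, x]), m, 1)   # m ~ 1 + x  -> path a
c = slope(np.column_stack([ones, x]), y, 1)   # y ~ 1 + x  -> total effect c
coefs, *_ = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)
c_prime, b = coefs[1], coefs[2]

print(c - c_prime, a * b)  # identical up to floating-point error
```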

So, this is a directional model, where I have to specify which is the predictor and which is the outcome.

But, one note of caution: it's better not to make strong directional interpretations about which is causing what, unless you're experimentally manipulating variables.

And there has been a lot of debate and controversy about mediation and,

as far as I can see, a lot of this controversy centers on whether you should

make strong causal inferences when you have observed variables.

And I don't believe you can.

I don't think that means the models are not useful.

I think that it just means that we have to be careful about the inferences

that we make.

So, one thing to note here is that we can reverse the direction of the arrows and

often get similar effects.

We could reverse which is the m and which is the y, and we might get similar effects as well.

And one can test these different models.

But, it's important to know that which direction, which model, seems to give the most significant results depends not only on the underlying strength of the relationships, but also on the relative error variances.

So, one variance can be greater than another one for uninteresting reasons.

For example, if it's a brain region, it could just be a noisier estimate, and

then it's going to look less like a mediator.

And, finally, causal inferences are generally safe only if x and m are experimentally manipulated.

We can still use these as models of pathways,

but, perhaps without making strong causal claims.

So, the mediation test is a statistical test of a times b. And what that means is that we need an estimate of a times b, divided by some estimator of its standard error. This is the Sobel test, or the Aroian version; there are several versions.

And this says that a times b, divided by an estimate of the standard error, is distributed normally as a z-score, and then you can get p-values and make inferences. Well, a times b is not actually normally distributed, most often, and so this test turns out to be overly conservative.
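A sketch of the Sobel version, using the standard error formula sqrt(b^2*se_a^2 + a^2*se_b^2) (the Aroian version adds se_a^2*se_b^2 under the square root); the simulated data and effect sizes are assumptions for illustration:

```python
import math
import numpy as np

def ols_with_se(X, yvec):
    """OLS coefficients and their standard errors."""
    coefs, *_ = np.linalg.lstsq(X, yvec, rcond=None)
    resid = yvec - X @ coefs
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coefs, np.sqrt(np.diag(cov))

rng = np.random.default_rng(3)
n = 200
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)
ones = np.ones(n)

(_, a), (_, se_a) = ols_with_se(np.column_stack([ones, x]), m)
coefs, ses = ols_with_se(np.column_stack([ones, x, m]), y)
b, se_b = coefs[2], ses[2]

# Sobel standard error of the a*b product, then z and a two-sided normal p
se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
z = a * b / se_ab
p = math.erfc(abs(z) / math.sqrt(2))
print(z, p)
```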

And it's now typically replaced with the bootstrap test; many people have worked on bootstrapping and mediation now, and there are packages available for that in multiple software environments.
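A percentile bootstrap for the a times b product can be sketched as follows: resample subjects with replacement, refit both regressions each time, and check whether the confidence interval for a*b excludes zero. This is a bare-bones illustration on simulated data; real packages add refinements such as bias correction:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)

def ab_product(x, m, y):
    """Fit the two mediation regressions and return the a*b estimate."""
    ones = np.ones(len(x))
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b

# Resample rows (subjects) with replacement and refit a*b each time.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(ab_product(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(lo, hi)  # mediation is significant at p < .05 if the 95% CI excludes 0
```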

And now, at the bottom, if we think about these path a and path b effects, it's useful to think about the families. This is a Venn diagram of the families of all the possible significance tests.

So what you see here in the one circle, marked a, is the family of all the tests that have a significant path a effect. And the green circle is all the tests that have a significant path b effect.

And a certain subset of them will overlap; you'll get a and b in conjunction.

And then a subset of those

are going to be ones in which the a times b path coefficient is significant.

So, you can see from this that the mediation analysis is really a relatively conservative test, not only for the presence of a path a and a path b effect, but also for whether their covariances are strong, or whether the joint effects are strong enough to pass the mediation test.

Let's look at a graphical example of this.

So, now we're working with our case, where we have a stressor influencing anterior cingulate activity in the brain, which is predicting heart rate increases to the stressor.

And this is what it looks like graphically.

We'll first look at a case where there's mediation on the top panel.

So now, the blue dots are stress on.

The green dots are stress off.

And the path a effect is whether blue is higher than green on the x-axis, which is the anterior cingulate. So, we can see here, indeed, the blue dots are shifted to the right, so there is a path a effect.

The path b effect, is the effect of anterior cingulate activity on heart rate,

controlling for the stressor.

So, this ends up being a parallel slopes model.

And we can see that path b there is the slope of that line,

on average across the two groups.

So that's positive: the more anterior cingulate activity, the higher the heart rate, controlling for group.

Path c is just the simple effect of stress on heart rate.

So, that's whether the blue dots are higher than the green on the y axis and

indeed you see the effect here.

And path c prime is the difference after you control for the anterior cingulate, the mediator. And so that's this gap between the two parallel lines.

And as you can see here, the gap is very small.

So we have a path c effect and it's gone down to virtually zero.

And that means that c minus c prime is greater than 0.

And there's a mediation effect.

So intuitively then, when we have a mediation effect, what are we doing?

Well, what we're saying is, that the effect of the stress on the heart rate is

exactly what I'd predict, based on the anterior cingulate activity.

So, I'm just shifting up along the anterior cingulate axis, and

that gives me the significant mediation effect.

Now on the bottom panel, let's look at a case where there's significant path a and

path b effects, but there's no mediation.

And in this case, it's a very similar situation, except that the effect of stress on the heart rate is so large, and the path a and path b effects are relatively small, that they're really not sufficiently large to explain the effect.

So, what I'm left with is a big c and a pretty big c prime once I control for anterior cingulate activity.

So, that gap is still there, and

that means that the mediator can't really explain a lot of that relationship.
