This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1


From the lesson

Week 4


- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

Hi. In this module we'll continue talking about the multiple comparisons problem. In the last module we talked about familywise error rate correction; here, we'll talk about false discovery rate correction.

Methods that control the familywise error rate, such as Bonferroni correction, random field theory, and permutation tests, provide strong control over the number of false positives. While this is a really appealing property, the resulting thresholds tend to be very stringent and lead to tests that suffer from low power. Power is very important in fMRI applications, because most interesting effects lie at the edge of detection.

The false discovery rate, or FDR, is a more recent development in multiple comparisons, due to Benjamini and Hochberg in a paper from 1995. While the familywise error rate controls the probability of any false positives, the false discovery rate controls the proportion of false positives among all rejected tests. So this is a slightly different criterion that we're controlling here.

So, to get some notation down, let's suppose that we're performing tests on m different voxels. We can make the following little table, separating the voxels into those that are truly inactive and those that are truly active. Of course, we're never privy to this information, but in general there are truly active voxels and truly inactive voxels in this setting.

Now, we can also declare voxels inactive or active, and that's something we have control over. So, for example, in this table V is the number of truly inactive voxels that were declared active; those are tests we should not have rejected but did reject, that is, false positives. U, V, T, and S are all unobservable random variables, because we don't know how many false positives we're making in practice. The only thing we do know is R, the number of voxels that were declared active, because we know which voxels were declared active and which weren't. But we don't know what proportion of those were truly inactive or truly active. That's the quantity we want to be able to control, and it's the basis behind the FDR.
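As a concrete illustration of this table, here is a minimal Python sketch with made-up counts (the numbers are purely hypothetical, not from the lecture):

```python
# Hypothetical outcome table for m = 1000 voxel-wise tests:
#
#                    declared inactive   declared active
#   truly inactive        U = 880            V = 20
#   truly active          T = 30             S = 70
#
# U, V, T, and S are never observed in practice; only R is.
U, V = 880, 20   # truly inactive voxels: correctly retained / falsely declared active
T, S = 30, 70    # truly active voxels: missed / correctly declared active
m = U + V + T + S

R = V + S        # the one observable quantity: voxels declared active

# Observed false discovery proportion (defined as 0 when R = 0);
# the FDR is the expectation of this quantity over repeated experiments.
fdp = V / R if R > 0 else 0.0
print(m, R, round(fdp, 3))   # 1000 tests, 90 declared active, FDP = 0.222
```

The point of the sketch is that fdp can be computed only because we invented U, V, T, and S; in a real experiment we would see R = 90 and nothing else.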

So in this notation, the familywise error rate is simply the probability that V is greater than or equal to one, meaning that we have one or more false positives, because V is the number of false positives. The false discovery rate is the proportion V/R of false positives among the voxels declared active, and it is defined to be 0 if R = 0: if we didn't declare any voxels active, then we can't have any false positives, so that case is not a problem.
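In symbols, the two error rates just described are (using the standard notation from the multiple-testing literature):

```latex
\mathrm{FWER} = P(V \geq 1),
\qquad
\mathrm{FDR} = E\!\left[\frac{V}{R}\right],
\quad \text{where } \frac{V}{R} \equiv 0 \text{ if } R = 0.
```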

So a procedure controlling the FDR ensures that, on average, this proportion is no bigger than some pre-specified rate q, which lies between 0 and 1, let's say 0.05. However, for any given data set, the proportion of false discoveries need not be below the bound, because the FDR is the expected proportion of false positives among all rejected tests. So we just know that on average we're going to control it at a certain rate, not what's going to happen in any given situation.

So basically, an FDR-controlling technique guarantees control of the FDR in the sense that it is less than or equal to q: on average, the FDR is controlled at level q.

So how do we do this? Well, the most popular approach is the so-called Benjamini-Hochberg procedure. Here we begin by selecting a desired limit q on the FDR, let's say 0.05. Next, we rank the p-values over all the voxels from 1 to m, where m is the total number of voxels, ordering them from smallest to largest, and plot them in that order. Then we let r be the largest rank i such that p(i) is less than or equal to (i/m) x q. Now, (i/m) x q is just a straight line through the origin with slope q/m, rising from q/m at i = 1 to q at i = m, and this is the black line we see in this little cartoon. Any ranked p-value that falls below that black line is deemed active, and anything above it is not. So the active voxels are the hypotheses we reject: all hypotheses whose corresponding p-values lie between p(1) and p(r).
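The ranking-and-thresholding recipe just described can be sketched in a few lines of Python; this is a minimal NumPy implementation of the Benjamini-Hochberg step-up rule, not the lecture's own code:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of rejected (declared-active) tests.

    Rank the m p-values from smallest to largest, find the largest
    rank r with p(r) <= (r / m) * q, and reject the hypotheses
    with ranks 1..r.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                     # ranks 1..m, smallest p first
    thresholds = (np.arange(1, m + 1) / m) * q  # the line with slope q/m
    below = p[order] <= thresholds            # ranked p-values under the line
    reject = np.zeros(m, dtype=bool)
    if below.any():
        r = np.max(np.nonzero(below)[0])      # index of the largest rank below the line
        reject[order[: r + 1]] = True         # reject everything up to rank r
    return reject
```

For example, with p-values [0.001, 0.008, 0.039, 0.3] and q = 0.05, the thresholds for ranks 1 to 4 are 0.0125, 0.025, 0.0375, and 0.05, so the largest rank below the line is r = 2 and only the two smallest p-values are declared active.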

If all of the null hypotheses are true, then the FDR is equivalent to the familywise error rate, so any procedure that controls the familywise error rate will also control the FDR. A procedure that only controls the FDR can therefore only be less stringent, leading to a gain in power. And since FDR-controlling procedures operate only on the p-values, and not on the actual test statistics, they can be applied to any valid statistical test.

In recent years, FDR-controlling procedures have become increasingly popular, as they are less conservative than familywise error rate-controlling procedures. So they're getting a lot of use in the neuroimaging context, and in other big-data contexts as well.

In the next module, we'll talk a little about pitfalls with multiple comparisons. Okay, see you then. Bye.
