This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1


From the lesson

Week 1

This week we will introduce fMRI, and talk about data acquisition and reconstruction.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

So in the last module we talked about how we could take this signal that was acquired from an MR scanner and create an image. And what we talked about was that the image was not actually created in image space, but rather in something called k-space. So in this module, I want to talk a little more about k-space, to gain a little bit more understanding of that concept.

So here's the little cartoon I used last time. Data is acquired in k-space, and here I show it being acquired in a grid-like fashion. Once you've acquired that data, you apply the inverse Fourier transform, and you get this beautiful image in image space.

It's important to note that there's not a one-to-one relationship between image space and k-space. It's not the case that a single measurement in k-space gives you all the information about a single voxel of the brain. Rather, every point in k-space contains a little information about every voxel, so by removing a k-space point, we lose a little information about every voxel of the brain. Each individual point in image space depends on all the points contained in k-space.

To illustrate the meaning of each k-space point, I like to think about this in one dimension.

So let's say that you make three sinusoidal curves with different frequencies, and we take a linear combination of them. We take the top sinusoidal curve and multiply it by 0.5, we take the second one and multiply it by two, we take the third one and multiply it by one, and then we add these curves up to get the following curve. This is a linear combination of three sinusoids.

If I asked you what three sinusoidal functions went into making this curve, it might be hard to tell just by looking at it. However, by taking the Fourier transform of this time series, we get three different spikes in the frequency domain. The x-axis here is frequency, which is one over the period. So if we have a sinusoid with a long period, we're going to get a spike in the low-frequency portion. The first spike at the far left, with magnitude 0.5, represents the top curve with the long period, and the 0.5 represents its relative contribution to the signal. The second peak represents the middle curve, and its amplitude is two. The last one, at the highest frequency, corresponds to the curve that oscillates the most, and it has a peak of one. So by looking at the Fourier transform of this time series, we were able to reconstruct not only the periodicities of the three functions that went into it, but also their relative contributions to the time series.
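This 1-D example can be reproduced numerically. Here is a minimal sketch using NumPy; the specific frequencies (3, 10, and 25 Hz) and the sampling rate are illustrative choices, not values from the lecture, but the weights 0.5, 2, and 1 are the ones used above.

```python
import numpy as np

# Build the mixture: 0.5 * (slow sinusoid) + 2 * (middle) + 1 * (fast).
# The frequencies 3, 10, and 25 Hz are illustrative choices.
n = 1000
t = np.arange(n) / n                      # 1 second sampled at 1000 Hz
signal = (0.5 * np.sin(2 * np.pi * 3 * t)
          + 2.0 * np.sin(2 * np.pi * 10 * t)
          + 1.0 * np.sin(2 * np.pi * 25 * t))

# The Fourier transform turns the mixture back into three spikes whose
# heights are the relative contributions of the three sinusoids.
spectrum = np.fft.rfft(signal)
amplitudes = 2 * np.abs(spectrum) / n     # scale so spike height = weight
freqs = np.fft.rfftfreq(n, d=1 / n)       # frequency axis in Hz

for f in (3, 10, 25):
    print(f"{f} Hz: {amplitudes[freqs == f][0]:.2f}")
# → 3 Hz: 0.50
# → 10 Hz: 2.00
# → 25 Hz: 1.00
```

The spectrum is flat except at the three input frequencies, which is exactly the "three spikes" picture described in the lecture.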

In k-space, we do the same thing in two dimensions. So let's now look at this in two dimensions and ask: what is the contribution of each of these k-space points? Let's say that we have a blank k-space, zero everywhere, and we have a single measurement, sort of to the northwest of the origin. What happens when you put a value of one at this point and take the inverse Fourier transform into image space? It turns out you get a sinusoid, but in two dimensions: a wave traveling in the direction from the origin to the point. The period of this wave depends on how far away from the center of k-space you are. This point is in the low-frequency part of k-space, so the wave has a long period.

If you move in the same direction, but toward the high-frequency portions, then we would expect a shorter period, and indeed that's what we see: the wave oscillates more rapidly because we're in the higher-frequency portions, but it travels in the same direction because we're moving in the same direction from the origin.

Let's say instead we moved to the northeast of the origin of k-space instead of the northwest. In this case we're still in the same high-frequency part of k-space, so we would expect the same period, but we're moving in a different direction. So if we reconstructed this point, we would get a wave with the same period, the same frequency, but now traveling in the northeast direction instead of the northwest direction. So basically, each point in k-space gives us one of these waves, and the value of the k-space point tells us the relative contribution of that wave in reconstructing the image.
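The one-wave-per-point picture can be checked directly: put a single 1 in an otherwise blank k-space and inverse-transform it. A minimal sketch with NumPy; the 64 × 64 grid and the point location (3, 5) are illustrative choices, not values from the lecture.

```python
import numpy as np

# A blank 64x64 k-space with a single measurement of 1 placed off-center.
# (Grid size and point location are illustrative choices.)
n = 64
kspace = np.zeros((n, n), dtype=complex)
kspace[3, 5] = 1.0

# Inverse Fourier transform into image space.
image = np.fft.ifft2(kspace)

# The result is a 2-D sinusoid: a plane wave whose direction and period
# are set by where the point sits relative to the origin of k-space.
y, x = np.mgrid[0:n, 0:n]
plane_wave = np.cos(2 * np.pi * (3 * y + 5 * x) / n) / n**2
print(np.allclose(image.real, plane_wave))   # → True
```

Moving the point farther from the origin shortens the wave's period; moving it in a different direction rotates the wave, just as described above.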

Now, this is almost hard to believe, because what I'm saying is that the k-space measurements we see to the left are simply weights of these different waves, and we take a linear combination of those waves and get the image to the right. It's hard to believe that this image is just made up of different waves, but we can show that this is true if we take a single k-space point and manipulate it by doubling it. So let's say we take this k-space point, sort of to the northeast of the center, double its value, and reconstruct the image again. What you see is that the wave traveling in that direction becomes overvalued, and now we get this kind of grid-like artifact over the brain. We're just overvaluing the wave traveling in a certain direction, and that gives rise to the artifact. This illustrates that the image is a finely balanced combination of these waves, and if we overvalue one of them, it ruins the whole image. If we instead double a point in the opposite direction, we get the same type of grid-like pattern, but now traveling in the northwest direction.
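The doubling experiment is easy to reproduce. Below is a sketch with NumPy, using a synthetic disk as a stand-in for the brain image; the grid size and the location of the doubled point are illustrative choices. By linearity, the corruption is exactly one extra copy of the plane wave belonging to the doubled point.

```python
import numpy as np

# A synthetic "brain": a bright disk, standing in for the lecture's image.
n = 64
y, x = np.mgrid[0:n, 0:n]
image = (((y - n / 2) ** 2 + (x - n / 2) ** 2) < 20 ** 2).astype(float)

# Move to k-space, double one off-center coefficient, and reconstruct.
kspace = np.fft.fft2(image)
ky, kx = 2, n - 2                  # an illustrative off-center point
extra = kspace[ky, kx]             # the doubled point contributes one
kspace[ky, kx] *= 2                # extra copy of its plane wave
corrupted = np.fft.ifft2(kspace).real

# The damage is exactly one over-weighted plane wave rippling across
# the image, i.e. the grid-like artifact from the lecture.
ripple = corrupted - image
wave = (extra * np.exp(2j * np.pi * (ky * y + kx * x) / n) / n ** 2).real
print(np.allclose(ripple, wave))   # → True
```

The difference between the corrupted and original images is a single sinusoidal ripple, not noise, which is why overweighting one point produces a coherent grid pattern over the whole brain.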

So each point of k-space contains information about the entire brain. But if we're interested in the relative contributions of the high- versus low-frequency parts of k-space, we can do the following experiment. Let's split k-space up into nine equally sized boxes. We take the center box and reconstruct the image using only that data, and then we take the outer eight boxes, removing the center, and reconstruct the image using those. Because the Fourier transform is a linear operation, the sum of those two reconstructions should add up to the original image. By doing this little thought experiment, we can see the relative contribution of the center of k-space versus its outskirts.

If we reconstruct the image using the center of k-space, we get something that looks very much like the original image, but a little blurrier: the detail is not as fine as in the original, but we've retained most of the information about the brain. And that's just using one-ninth of the k-space measurements, so about 11% of the data.

If we look at what information is conveyed by the remaining 89% of the data, we can make that reconstruction too. Here you'll see that we're only getting detail: the boundaries between the ventricles and the brain, and between the skull and the brain. Basically, the high-frequency parts are the ones that oscillate very quickly, so they give us a lot of the fine detail, while the low-frequency parts are the things that change very slowly, and they give us most of the contrast.
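The nine-box experiment can be sketched as follows. The synthetic image (a smooth blob plus one sharp line) is an illustrative stand-in for the brain image; the point is the split of shifted k-space into a center box and an outer ring, and the linearity check that the two reconstructions sum back to the original.

```python
import numpy as np

# A stand-in image: a smooth blob (low frequencies) plus a sharp line
# (high frequencies). Illustrative, not the lecture's brain image.
n = 96
y, x = np.mgrid[0:n, 0:n]
image = np.exp(-((y - n / 2) ** 2 + (x - n / 2) ** 2) / (2 * 15 ** 2))
image[:, n // 2] += 1.0

# fftshift puts the low frequencies at the center of the array, so the
# center box of the 3x3 split really is the low-frequency ninth.
kspace = np.fft.fftshift(np.fft.fft2(image))
b = n // 3
center = np.zeros_like(kspace)
center[b:2 * b, b:2 * b] = kspace[b:2 * b, b:2 * b]
outer = kspace - center                 # the eight outer boxes

low = np.fft.ifft2(np.fft.ifftshift(center)).real    # blurry: contrast
high = np.fft.ifft2(np.fft.ifftshift(outer)).real    # edges: fine detail

# Linearity of the Fourier transform: the two parts sum to the original.
print(np.allclose(low + high, image))   # → True
```

Plotting `low` shows a blurred version of the image (contrast preserved, detail lost), while `high` shows mostly the sharp line, matching the center-versus-outskirts reconstructions described above.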

In general, using this as an illustration, we can see that the low spatial frequencies of k-space represent parts of the object that change in a spatially slow manner; this is contrast. By comparison, high spatial frequencies represent small structures whose size is on the order of the voxel size. These are usually tissue boundaries and things like that. So if you want fine spatial resolution, and you want to make out the difference between grey and white matter, you need a lot of these high spatial frequencies. If you're just interested in contrast, you primarily need the center of k-space. So again, the farther out we sample in k-space, the more detail we get, and this goes back to spatial resolution.

So if we want to acquire a 32 by 32 image, we need to sample 1,024 points in k-space. If we do that, we're primarily using the slowly varying waves, and we get very little detail about the brain; we're just getting a blurry version of it. If instead we sample k-space on a 64 by 64 grid, we have to make 4,096 different k-space measurements, and by doing this we're starting to incorporate some of the higher-frequency parts of k-space, which gives us more spatial detail. Now we can start making out differences in the brain. If we go even higher, to a 128 by 128 image, we need to sample roughly 16,000 points in k-space (16,384, to be exact), but by doing this we're getting a lot of the high-frequency parts of k-space, and we're able to make out a lot of detail, like the boundaries between grey and white matter, and between brain tissue and CSF. So what we have here is a very high spatial resolution image of the brain, which gives us a lot of information and a lot of detail.

Now, if we had our druthers, we would like these high-resolution images. But the point is, in order to go from the 32 by 32 image to the 128 by 128 image, we had to make 16 times as many measurements. And there's a cost in doing this: because we have to make a lot more measurements of the brain, it takes a lot longer to acquire the image.
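The measurement counts above are just the matrix dimensions multiplied out:

```python
# k-space samples needed for each matrix size mentioned in the lecture.
for n in (32, 64, 128):
    print(f"{n} x {n}: {n * n:,} samples")
# → 32 x 32: 1,024 samples
# → 64 x 64: 4,096 samples
# → 128 x 128: 16,384 samples

# Going from 32 x 32 to 128 x 128 costs 16 times as many measurements.
print((128 * 128) // (32 * 32))   # → 16
```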

So, in general, there's a trade-off between spatial and temporal resolution. We want adequate spatial resolution in order to make out what's going on in the brain, but we also want to acquire the images fairly rapidly, because we're going to need that when we do functional imaging. We'll come back to this in coming modules.

Okay, so this is the end of this module. Here we've talked about k-space and the information content of k-space, and this ends the set of three modules where we talked about image acquisition and reconstruction. Now we're going to move on and talk about some other topics. Okay, I'll see you in the next module. Bye.
