The question is: how much memory is required to store this data frame in memory?

So, I can do a simple calculation.

So, the number of elements in this data frame is going to be 1.5 million times 120, because it's a rectangular object: 1.5 million rows and 120 columns.

So that's the number of elements in the data frame, 180 million.

Now, if all the data are numeric, then each number requires eight bytes of memory to store.

That's because the numbers are stored as 64-bit values and there are eight bits per byte, so that's eight bytes of memory per numeric value.

So multiplying the number of elements by eight gives the number of bytes. Now, there are 2 to the 20 bytes per megabyte,

so I can divide the number of bytes by 2 to the 20, and that's how many megabytes I've got.

That gives me 1,373.29 megabytes.

And I can divide that again by 2 to the 10 to get the number of gigabytes,

which is roughly 1.34 gigabytes.

So the raw storage for this data frame is roughly 1.34 gigabytes.
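The arithmetic above can be sketched directly in R, using the row and column counts from this example:

```r
rows <- 1.5e6   # 1,500,000 rows
cols <- 120     # 120 columns

n_elements <- rows * cols    # 180,000,000 elements in the data frame
n_bytes    <- n_elements * 8 # 8 bytes per numeric (64-bit) value

mb <- n_bytes / 2^20  # 2^20 bytes per megabyte
gb <- mb / 2^10       # 2^10 megabytes per gigabyte

mb  # about 1373.29 megabytes
gb  # about 1.34 gigabytes
```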

Now, you're actually going to need a bit more memory than that to read the data in,

because there's some overhead required for reading the data in.

And so the rule of thumb is that you're going to need

almost twice as much memory to read this dataset into R using read.table

as the object itself requires.

So if your computer only has, say, two gigabytes of RAM and

you're trying to read in this 1.34-gigabyte table,

you might want to think twice about trying to do it,

because you're going to be pushing the boundaries of the memory

required to read this dataset in.

Of course, if your computer has four, eight, or 16 gigabytes of RAM,

then you should have no problem in terms of the memory requirements.

It will still take some time to read it in, just because it takes time to

read in all the data, but you won't be running out of memory.

So doing this kind of calculation is enormously useful when you're reading in

large datasets,

because it gives you a sense of whether you have enough memory.

And if you run into any errors,

you'll know whether the error is because you're running out of memory or not.

So I encourage you to do

this kind of calculation when you're going to be reading in large datasets

and you know in advance roughly how big the data are going to be.
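This back-of-the-envelope calculation can be wrapped in a small helper function (a sketch; the function name and the factor of two from the rule of thumb above are my own packaging):

```r
# Estimate the memory needed to read an all-numeric table into R.
# nrow, ncol: dimensions of the table; 8 bytes per 64-bit numeric value.
estimate_read_memory <- function(nrow, ncol, bytes_per_element = 8) {
  raw_gb <- nrow * ncol * bytes_per_element / 2^30  # 2^30 bytes per gigabyte
  list(
    object_gb = raw_gb,
    # rule of thumb: read.table needs roughly twice the object's size
    needed_gb = 2 * raw_gb
  )
}

estimate_read_memory(1.5e6, 120)
# object_gb is about 1.34; needed_gb is about 2.68
```

Comparing `needed_gb` against your machine's installed RAM before calling read.table tells you in advance whether the read is likely to succeed.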