So how do we represent this 2D signal?

Well, from a standard mathematical point of view, we could represent it with

a Cartesian plot, where one axis indicates the first index,

the second axis indicates the second index, and

the value of the signal is represented as a third coordinate.

So we have a 3D plot where the scalar values form a 3D surface.

Sometimes, especially in conjunction with the description of filters,

we are interested in what we call the support representation of a 2D signal.

In this representation, we take a bird's-eye view of the signal and we only

represent the nonzero values of the signal as dots in a two dimensional plane.

Since the height of a pixel,

namely its scalar value, cannot be inferred from the dot representation alone,

we often write the value of the signal at that particular location next to the dot.

So this plot, for instance, represents the two-dimensional delta signal,

which is a signal that is 0 everywhere except at the origin, where it is 1.

And so you have just one red dot here at the origin with value 1.
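As a concrete illustration, here is a minimal NumPy sketch of a 2D delta and its support representation; the grid size and the centering of the origin are my own choices for display purposes, not from the lecture:

```python
import numpy as np

# A 2D delta signal on a small 5x5 grid, with the origin placed at the
# center of the grid (an arbitrary choice for this sketch).
N = 5
delta = np.zeros((N, N))
delta[N // 2, N // 2] = 1  # 1 at the origin, 0 everywhere else

# The support representation keeps only the coordinates of nonzero samples;
# the value itself would be written next to the dot in the plot.
support = np.argwhere(delta).tolist()
print(support)  # [[2, 2]] -- a single dot at the (centered) origin
```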

Of course, the most common representation for

a 2D signal that is also an image is the image representation itself.

In this case, we exploit the dynamic range of the medium;

here the medium is a computer monitor, and

we know that each pixel can be driven to display a different shade of gray.

And since the pixel values are packed very closely together in space

(here, for instance, we have 512 by 512 pixel values),

the density will be high and the eye will create the illusion of a continuous image.

So one question that could come up naturally at this point is,

why do we go through the trouble of defining

a whole new two-dimensional signal processing paradigm?

Can we just convert images into 1D signals and

use the standard tools that we've developed so far?

And of course, sometimes that's exactly what we do: if we think of

a printer that prints one line at a time as the paper rolls out, or of a fax machine,

that's exactly what happens.

However, if we do that, we miss out on the spatial correlation between pixels, and

therefore the properties of an image will be more difficult to understand.

Let's look at an example.

Here we have a 41 by 41 pixel image and

the content of this image is simply a straight line.

Later, we will change the angle of this line.

What you have in the bottom panel is what we call a raster scan representation of

the image.

In other words, we go through the lines of the image one by one

and plot the corresponding pixels on this axis.

Now, for a horizontal line that coincides with the n1 axis, the resulting

unrolled representation is just a series of 0 pixels,

except when we scan that line, at which point we will have

41 pixels equal to 1 before going back to 0.
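As a quick sanity check, this raster scan can be sketched in NumPy; the image contents mirror the example in the text, but the particular row that holds the line is an assumption of the sketch:

```python
import numpy as np

# 41x41 image whose only content is a horizontal line on the n1 axis
# (row 0 -- which row holds the line is an assumption for this sketch).
img = np.zeros((41, 41))
img[0, :] = 1

# Raster scan: unroll the image row by row into a 1D signal.
scan = img.flatten()

ones = np.flatnonzero(scan)
print(ones.size)                    # 41 pixels equal to 1...
print(int(ones[0]), int(ones[-1]))  # ...contiguous at the start: 0 40
```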

So this is rather simple to understand, but

if we change the angle of the line, we see that the representation

in an unrolled fashion changes in ways that are not very intuitive.

When the angle is small, we have clusters of pixels interspersed with 0s.

As the angle increases, the spacing of the clusters changes,

and so does the number of pixels per cluster.

It's very hard to understand the visual characteristics of the line from

the position of the clusters and the number of pixels per cluster.

After we pass the 45-degree angle, we will have

collections of single-pixel clusters.

And the spacing of these clusters will change in even more

subtle ways according to the angle of the line.
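To see these clusters concretely, here is a rough sketch; the rounding-based line drawing is a hypothetical rasterization rule of my own, so the exact cluster sizes depend on it:

```python
import numpy as np

# 41x41 image with a line at a small angle to the n1 axis; the pixels are
# chosen by simple rounding (a hypothetical rasterization rule).
N = 41
img = np.zeros((N, N))
theta = np.deg2rad(20)
for n1 in range(N):
    n2 = int(round(n1 * np.tan(theta)))
    if n2 < N:
        img[n2, n1] = 1

scan = img.flatten()

# Cluster sizes: lengths of the runs of consecutive 1s in the unrolled scan.
padded = np.concatenate(([0], scan, [0]))
edges = np.diff(padded)
cluster_sizes = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
print(cluster_sizes)  # short clusters of 1s interspersed with runs of 0s
```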

Finally, when the line coincides with the n2 axis,

we will have single pixels that are separated by 40 zeros,

because as we scan the image, we will hit one nonzero pixel, and

then we will have to go through 40 pixels before we hit another one.
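The same sketch for the vertical case (again assuming the line sits on the n2 axis, i.e. column 0) confirms the 40-zero spacing:

```python
import numpy as np

# 41x41 image with a vertical line along the n2 axis (column 0).
img = np.zeros((41, 41))
img[:, 0] = 1

scan = img.flatten()
ones = np.flatnonzero(scan)

# Number of zeros between consecutive nonzero pixels in the raster scan.
gaps = np.diff(ones) - 1
print(sorted(set(gaps.tolist())))  # [40]
```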