Now, let's look at some of the other implications of finite aperture optics.

The main thing we focused on up to now is that when your aperture is finite,

you have a point spread function and an

MTF determined by the numerical aperture of your object or image.

But there are other things that happen, and they're important too.

The first one that's probably important is the depth of focus.

This, of course, is very obvious to you if you've ever used a camera.

I've got an example here,

that if you're taking a picture of an object,

things in the foreground and things in the background go out of focus.

And that may be either a good thing or a bad thing,

depending on your application.

So you want to know what it is,

and it turns out it depends in several different ways on the numerical aperture.

So let's deal with both of those,

and those two ways are the geometrical optics version,

something that depends on rays,

and the diffraction, something that is much more related to the shape of light.

So let's start with the geometrical first.

So here is my marginal ray.

It's getting through the exit pupil of the system.

And let's imagine that I have, say, a camera in your cellphone,

and it's got finite pixel size.

What finite pixel size means,

that I can move this object around some,

and the light will either be,

according to geometrical optics,

an infinitely small point centered on that pixel,

or, as I move the object around,

the light will go in and out of focus.

And as long as that geometrically blurred spot stays smaller than some pixel size,

I couldn't tell the difference.

In other words, my detection system has some finite resolution,

and as long as I keep

my geometrical defocus here smaller than that finite pixel size, I can't tell.

Well, that's a pretty easy bit of math,

then, to work out.

How much could I move my pixel back and forth?

You can work these in either object or image space here.

I'm working in image space.

So I'm going to move my pixel back and forth,

like it's a receiver,

and I want to know how tightly I have the tolerance,

the position of my camera, let's say.

Well, that's just the geometry of similar triangles for this ray,

and the key thing is,

again we're going to use paraxial limit here,

where sine theta and tan theta are all the same.

It's going to be like the pixel size over the angle,

and the angle is the numerical aperture over n.

And just in case you happen to be immersed in a medium here,

it's inversely proportional to NA and, therefore,

proportional to F# because it's just given by the ray angle

and how far you can go in and out of focus given your pixel size.

So that makes sense.
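That proportionality can be sketched in a couple of lines of Python; the function name and the example numbers below are just illustrative assumptions, not values from the lecture.

```python
def geometric_depth_of_focus(pixel_size, na, n=1.0):
    """Paraxial estimate: the defocus range over which the geometric
    blur stays within one pixel.  The marginal-ray angle in the medium
    is roughly NA/n, so the allowed defocus is pixel_size / (NA/n)."""
    return pixel_size * n / na

# Illustrative numbers: 1.4 um pixel, NA = 0.25, in air
dz_geo = geometric_depth_of_focus(1.4e-6, 0.25)  # ~5.6 um
```

Note the linear behavior: halving the NA doubles the tolerance, which is the inverse proportionality to NA (and proportionality to F#) mentioned above.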

But what now if my system was limited instead of by geometrical effects,

what if I'm limited by diffraction?

That is, I really have a tiny pixel here now,

and I can't think about my light coming to an infinitely small focus,

perhaps it's like a Gaussian beam or an Airy disk,

and the light comes to a small region of collimation,

a depth of focus here.

And I want to know now, as a small pixel moves back and forth in that region,

where am I going to be limited?

And you can see, in this case,

that once I get out here,

well, out of the Rayleigh range then I'm back to the geometrical case.

But inside the Rayleigh range,

the geometrical optics description fails.

Well, this one depends on the actual filling of your pupil,

whether it's a top hat, giving you an Airy disk,

or whether it's a Gaussian or something else.

It turns out you often don't actually

know that, because we're going to learn that there's also

phase in that function, often caused by aberrations in your lenses.

But let's ignore that for a minute.

And I use the Gaussian beam expression here.

And if I ask how far back and forth could I push my pixel?

Maybe twice the Rayleigh range is reasonable.

Might depend on what you call acceptable levels of loss of light,

and of blur because this is a continuous analog function here.

So, there's some ambiguity on whether this should be two or one point seven,

or some other number, but fine,

let's just take two for a minute.

If I then just plug in the Gaussian beam expression: here is the wavelength,

here's the numerical aperture. I'm interested in

a Gaussian beam expression in terms of theta, so I

have to divide the numerical aperture by n to get theta,

again just in case we're embedded in a medium here that's not vacuum.

And the important part is,

I get a constant out in front that's order unity;

exactly what that constant is depends on the previous discussion.

What do you call in focus or not?

And also the exact shape of the light: the Airy disk

would give me a slightly different expression than the Gaussian.

So then, there's an ambiguity here,

a constant of roughly one; two over pi is about one.

And if you get down to being very precise,

you need to know what that is.

However, the term here will always be the same.

It doesn't matter what the shape of the light is,

or exactly what your definitions of out of focus mean.

It's going to be the numerical aperture of the lens squared, in the denominator,

very different than the linear term over here,

and on top, the refractive index and the wavelength.
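Under those caveats, here is a hedged sketch of the diffraction-limited estimate; the order-unity constant is kept as an explicit parameter because, as just discussed, its exact value depends on the pupil filling and on your definition of in-focus.

```python
def diffraction_depth_of_focus(wavelength, na, n=1.0, k=1.0):
    """Order-of-magnitude estimate ~ k * n * wavelength / NA**2.
    The constant k is order unity; its exact value depends on the
    pupil filling (Gaussian vs. top hat) and on what blur you call
    acceptable, so k = 1.0 here is an assumption, not a derived value."""
    return k * n * wavelength / na**2

# Illustrative: green light at NA = 0.25
dz_diff = diffraction_depth_of_focus(550e-9, 0.25)  # ~8.8 um
```

The key behavior is the NA squared in the denominator: halving the NA quadruples this depth of focus, unlike the linear geometrical case.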

So, the point is, these have two very different behaviors.

Usually you're distinctly limited by one or the other,

and so you know which one to use.

In the rare cases,

where you're sort of in an intermediate zone here,

it turns out the right thing to do is add them.

And that then moves you smoothly between

the geometrical limited case and the diffraction limited case.

Let's take an example of using that geometrical case that you probably used.

Early cell phones, or maybe cheaper cell phones today

when this video was recorded,

didn't have any mechanics in their itty-bitty little lenses.

They were a fixed focal length camera,

but you didn't see this out of focus blur.

Generally, things all worked in focus.

Well, how did they do that?

We've just learned in all the previous design courses

here that your image is formed at a certain place

depending on where the object is,

so how can it not matter where the object is?

And it comes back to this use of a finite size pixel again.

So here's just a quick sketch that gets the idea.

If I have a finite sized pixel,

I can sketch a set of object points here

that all, geometrically, put light completely on that pixel.

And so, I have a range here that I call a depth of field.

Typically the language is that

if you're out in object space,

it's the depth of field; if you're back

here in image space, that might be your depth of focus.

But they're essentially the same concept, and you'll learn that language.

So, the point is I can have a fixed focal length camera with a detector

a certain distance behind my lens,

and there's a range of distances that all look exactly the same.

I will not be able to tell I've gone out of focus.

So, cameras were often set up then with what's called the hyperfocal condition.

And basically, all that is,

is taking this far point and pushing it out to infinity.

If I can do that,

then all objects between infinity and this near point,

will appear to be in focus.

So the mountains in the distance up to the friend you're taking a picture of,

if all of those are outside the near-point,

then they all appear to be in focus.

And it's some very simple math,

to ask where that is.

I just take the image of my pixel and ask how I get H to go infinitely far away.

And I've done a little bit of math here.
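That math is usually packaged as the standard hyperfocal-distance formula; here is a sketch under that assumption, with illustrative cell-phone-ish numbers that are not from the lecture.

```python
def hyperfocal_distance(focal_length, f_number, blur_limit):
    """Standard hyperfocal expression H = f**2 / (N * c) + f.
    Focusing the lens at H keeps everything from roughly H/2 out
    to infinity blurred by less than c (e.g. one pixel)."""
    return focal_length**2 / (f_number * blur_limit) + focal_length

# Illustrative: 4 mm focal length at f/2 with a 1.4 um blur limit
H = hyperfocal_distance(4e-3, 2.0, 1.4e-6)   # ~5.7 m
near_point = H / 2                           # ~2.9 m; closer objects blur
```

So a fixed-focus lens set near this distance keeps everything from a few meters out to the mountains acceptably sharp, at the cost described next.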

And this has some penalties on the overall

fineness of the images you can make,

because I've had to make the pixel

big enough to meet this condition.

And obviously when my images are in closer,

my geometrical or diffraction-limited point spread function could be a lot smaller.

But when I do that,

then I wouldn't get this hyperfocal condition.

I would have to have a focusing system.

This is just an example; it's not something you use terribly often,

but it's an example of where thinking about these geometrical depth of focus,

or depth of field considerations,

can be used to get you performance that you'd normally think would require,

in this case, a focusing system,

with an actuator in it to be able to focus the system.