
even in the 1D case. So at least with one dimension, we can use sorting, piggyback on it, to beat the naive brute-force search bound and solve the problem in n log n time. So our goal for this lecture is going to be to devise an equally good algorithm for the two-dimensional case; we want to solve closest pair of points in the plane, again, in n log n time. We will succeed in this goal. I'm going to show you an n log n time algorithm for 2D closest pair. It's going to take us a couple of steps, so let me begin with a high-level approach.

The first thing to try is just to copy what worked for us in the one-dimensional case. In the one-dimensional case, we first sorted the points by their coordinate, and that was really useful. Now, in the 2D case, points have two coordinates, an x coordinate and a y coordinate, so there are two ways to sort them. So let's just sort them both ways. That is the first step of our algorithm, which you should really think of as a preprocessing step. We're going to take the input points and invoke merge sort once to sort them according to x coordinate; that's one copy of the points.

And then we make a second copy of the points where they're sorted by y coordinate. We're going to call those copies of the points Px, an array of the points sorted by x coordinate, and Py, the points sorted by y coordinate. Now, we know merge sort takes n log n time, so this preprocessing step only takes O(n log n) time. And again, given that we're shooting for an algorithm with running time O(n log n), why not sort the points? We don't even know how we're going to use this fact right now, but it's harmless: it's not going to affect our goal of getting an O(n log n) time algorithm.

And indeed, this illustrates a broader point, which is one of the themes of this course. Recall, I hope one of the things you take away from this course is a sense for the for-free primitives: the manipulations or operations you can do on data which are basically costless. Meaning that if your data set fits in the main memory of your computer, you can basically invoke the primitive and it's just going to run blazingly fast, and you can do it even if you don't know why. Sorting is the canonical for-free primitive, although we'll see some more later in the course, and here we're using exactly that principle.
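As a concrete sketch of this preprocessing step (my own illustration in Python, not code from the lecture), the two sorted copies can be produced with two calls to a sorting routine; Python's built-in `sorted` is itself an O(n log n) merge-sort variant (Timsort), so it plays the role of merge sort here:

```python
# Preprocessing for 2D closest pair: make two sorted copies of the points.
# Points are (x, y) tuples; the names Px/Py follow the lecture's notation.

def preprocess(points):
    px = sorted(points)                      # sorted by x coordinate
    py = sorted(points, key=lambda p: p[1])  # sorted by y coordinate
    return px, py

# Example usage with a small point set:
px, py = preprocess([(3, 7), (1, 2), (5, 1), (2, 9)])
```

Each call is O(n log n), so the whole preprocessing step is O(n log n), matching the bound claimed above.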

So we don't even understand yet why we might want the points to be sorted. It just seems like it's probably going to be useful, motivated by the 1D case, so let's go ahead and make sorted copies of the points by x and y coordinate upfront. Reasoning by analogy with the 1D case suggests that sorting the points might be useful, but we can't carry this analogy too far. In particular, we're not going to be able to get away with just a simple linear scan through these arrays to identify the closest pair of points. To see that, consider the following example.

We're going to look at a point set which has six points. There are going to be two points, which I'll put in blue, which are very close in x coordinate but very far away in y coordinate. Then there's going to be another pair of points, which I'll do in green, which are very close in y coordinate but very far away in x coordinate. And then there's going to be a red pair of points, which are not too far away in either the x coordinate or the y coordinate. So in this set of six points, the closest pair is the pair of red points, and they're not even going to show up consecutively in either of the two arrays, right? In the array that's sorted by x coordinate, a blue point is going to be wedged in between the two red points, so they won't be consecutive. And similarly, in Py, which is sorted by y coordinate, a green point is going to be wedged between the two red points. So you won't even notice the red points if you just do a linear scan through Px and Py, looking only at consecutive pairs of points.

So, following our preprocessing step, where we invoke merge sort twice, we're going to do a quite nontrivial divide and conquer algorithm to compute the closest pair. Really, in this algorithm, we're applying divide and conquer twice.

First, internal to the sorting subroutine, assuming that we use the merge sort algorithm to sort: divide and conquer is being used there to get an n log n running time in this preprocessing step. And then we're going to use it again, on the sorted arrays, in a new way, and that's what I'm going to tell you about next.

So let's just briefly review the divide and conquer algorithm design paradigm before we apply it to the closest pair problem. As usual, the first step is to figure out a way to divide your problem into smaller subproblems. Sometimes this requires a reasonable amount of ingenuity, but it's not going to here. In the closest pair problem, we're going to proceed exactly as we did in the merge sort and counting inversions problems, where we took the array and broke it into its left and right halves. So here, we're going to take the input point set and, again, just recurse on the left half of the points and recurse on the right half of the points.

Here, by left and right, I mean with respect to the points' x coordinates.

There's pretty much never any ingenuity in the conquer step; that just means you take the subproblems you identified in the first step and you solve them recursively. That's what we'll do here: we'll recursively compute the closest pair in the left half of the points and the closest pair in the right half of the points. So all the creativity in divide and conquer algorithms is in the combine step. Given the solutions to your subproblems, how do you somehow recover a solution to the original problem, the one that you actually care about? For closest pair, the question is going to be: given that you've computed the closest pair on the left half of the points and the closest pair on the right half of the points, how do you then quickly recover the closest pair for the whole point set?

That's a tricky problem, and that's what we're going to spend most of our time on. So let's make this divide and conquer approach for closest pair a little bit more precise; let's now actually start spelling out our closest pair algorithm.

The input we're given follows the preprocessing step: recall that we invoke merge sort and get our two sorted copies of the point set, Px, sorted by x coordinate, and Py, sorted by y coordinate. So the first step is the division step. Given that we have a copy of the points Px sorted by x coordinate, it's easy to identify the left half of the points, those with the n/2 smallest x coordinates, and the right half, those with the n/2 largest x coordinates. We're going to call those Q and R, respectively. One thing I'm skipping over is the base case. I'm not going to bother writing that down, so base case omitted, but it's what you would think it would be: basically, once you have a small number of points, say two or three points, you can just solve the problem in constant time by brute-force search. You just look at all the pairs and return the closest pair. So think of the input as having at least four points. Now, in order to recurse, to call closest pair again on the left and right halves, we need sorted versions of Q and R, both by x coordinate and by y coordinate, so we're just going to form those by doing suitable linear scans through Px and Py.
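One way this linear-time splitting might look (my own Python sketch, not the lecture's code, under the assumption that all points are distinct so set membership identifies the left half):

```python
# Split the sorted arrays Px, Py into sorted halves Qx, Qy, Rx, Ry
# in linear time, without re-sorting. Points are (x, y) tuples.

def split_halves(px, py):
    n = len(px)
    qx, rx = px[:n // 2], px[n // 2:]      # already sorted by x
    left = set(qx)                         # which points form the left half
    qy = [p for p in py if p in left]      # left half, still sorted by y
    ry = [p for p in py if p not in left]  # right half, still sorted by y
    return qx, qy, rx, ry
```

Like the merge subroutine run in reverse, each output list comes from a single pass over an already-sorted array, so the whole step is linear time.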

One thing I encourage you to think through carefully, or maybe even code up after the video, is how you would form Qx, Qy, Rx, and Ry given that you already have Px and Py. If you think about it, because Px and Py are already sorted, producing these sorted sublists takes only linear time. It's in some sense the opposite of the merge subroutine used in merge sort; here, we're splitting rather than merging. But again, this can be done in linear time, and that's something you should think through carefully later.

So that's the division step; now we just conquer, meaning we recursively call closest pair on each of the two subproblems. When we invoke closest pair on the left half of the points, on Q, we're going to get back what is indeed the closest pair of points amongst those in Q. We're going to call those points P1 and Q1, so among all pairs of points that both lie in Q, P1 and Q1 minimize the distance between them. Similarly, we're going to call P2, Q2 the result of the second recursive call; that is, P2 and Q2 are, amongst all pairs of points that both lie in R, the pair that has the minimum Euclidean distance.

Now, conceptually, there are two cases: there's a lucky case and there's an unlucky case. In the original point set P, if we're lucky, the closest pair of points in all of P actually has both of its points in Q, or both in R. In this lucky case, we'd already be done. If the closest pair in the entire point set happens to lie entirely in Q, then the first recursive call is going to recover it, and we just have it in our hands as P1, Q1. Similarly, if the closest pair of points in all of P lies on the right side, in R, then it gets handed to us on a silver platter by the second recursive call, which operates on R. In the unlucky case, the closest pair of points in P happens to be split.

That is, one of the points lies in the left half, in Q, and the other point lies in the right half, in R. Notice, if the closest pair of points in all of P is split, half in Q and half in R, neither recursive call is going to find it. Okay?

The pair of points is not passed to either of the two recursive calls, so there's no way it's going to be returned to us. So we have not identified the closest pair after these two recursive calls if the closest pair happens to be split.
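Putting the structure so far into code (my own Python sketch, not the lecture's pseudocode; the split subroutine here is a naive quadratic placeholder, standing in for the linear-time version the lecture develops):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def brute_force(pts):
    # Base case: constant work once only two or three points remain.
    return min(((p, q) for i, p in enumerate(pts) for q in pts[i + 1:]),
               key=lambda pair: dist(*pair))

def closest_split_pair(px):
    # Placeholder: a quadratic scan over all cross pairs. The point of the
    # rest of the lecture is to replace this with a linear-time subroutine.
    n = len(px)
    return min(((p, q) for p in px[:n // 2] for q in px[n // 2:]),
               key=lambda pair: dist(*pair))

def closest_pair(px, py):
    # px: points sorted by x coordinate; py: the same points sorted by y.
    if len(px) <= 3:
        return brute_force(px)
    n = len(px)
    qx, rx = px[:n // 2], px[n // 2:]
    left = set(qx)                          # assumes distinct points
    qy = [p for p in py if p in left]
    ry = [p for p in py if p not in left]
    best_left = closest_pair(qx, qy)        # closest pair inside Q
    best_right = closest_pair(rx, ry)       # closest pair inside R
    best_split = closest_split_pair(px)     # closest pair straddling Q and R
    return min([best_left, best_right, best_split],
               key=lambda pair: dist(*pair))
```

The final `min` over the three candidates is the combine step: the overall closest pair is either entirely left, entirely right, or split, and we cover all three cases.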

This is exactly analogous to what happened when we were counting inversions. The recursive call on the left half of the array counted the left inversions, and the recursive call on the right half of the array counted the right inversions, but we still had to count the split inversions. So in this closest pair algorithm, we still need a special-purpose subroutine that computes the closest pair for the case in which it is split, in which there is one point in Q and one point in R. Just like in counting inversions, I'm going to write down that subroutine, and I'm going to leave it unimplemented for now; we'll figure out how to implement it quickly in the rest of the lecture.

Now, if we have a correct implementation of closest split pair, which takes as input the original point set sorted by x and by y coordinate and returns the closest pair that's split, with one point in Q and one point in R, then we're done. The closest pair has to either be on the left, or on the right, or it has to be split. Steps two through four compute the closest pair in each of those categories, so those are the only possible candidates for the closest pair, and we just return the best of them. So that's an argument for why, if we have a correct implementation of the closest split pair subroutine, then that implies a correct implementation of closest pair.

Now, what about the running time? The running time of the closest pair algorithm is going to be, in part, determined by the running time of closest split pair. So in the next quiz, I want you to think about what kind of running time we should be shooting for with a closest split pair subroutine. The correct response to this quiz is the second one, and the reasoning is just by analogy with our previous algorithms for merge sort and for counting inversions. So, what is all the work that we do in this algorithm? We have this preprocessing step where we call merge sort twice; we know that's n log n, so we're not going to have a running time better than n log n, because we sort at the beginning. And then, we

have a recursive algorithm with the following flavor: it makes two recursive calls, each on a problem of exactly half the size, with half the points of the original one. And outside of the recursive calls, by assumption in the problem, we do a linear amount of work in computing the closest split pair. So the exact same recursion tree which proves an n log n bound for merge sort proves an n log n bound for how much work we do after the preprocessing step, and that gives us an overall running time bound of n log n. Remember, that's what we were shooting for. We already achieved n log n for the one-dimensional version of closest pair, and the goal of these lectures is to have an n log n algorithm for the 2D version. So this would be great. In other words, the goal should be to have a correct linear-time implementation of the closest split pair subroutine. If we can do that, we're home free; we get the desired n log n algorithm. Now, I'm going to proceed in a

little bit to show you how to implement closest split pair, but before I do that, I want to point out the one subtle, key idea which is going to allow us to get this linear-time correct implementation. So let me just put that on the slide. The key idea is that we don't actually need a full-blown correct implementation of the closest split pair subroutine. I'm not actually going to show you a linear-time subroutine that always correctly computes the closest split pair of a point set. The reason is that that's actually a strictly harder problem than what we need to have a correct recursive algorithm. We do not actually need a subroutine that, for every point set, always correctly computes the closest split pair of points. Remember, there's a lucky case and there's an unlucky case. The lucky case is where the closest pair in the whole point set P happens to lie entirely in the left half of the points, Q, or in the right half of the points, R. In that lucky case, one of our recursive calls will identify this closest pair and hand it over to us on a silver platter. We could care less about the split pairs in that case; we get the right answer without even looking at the split pairs. Then there's the unlucky case, where a split pair happens to be the closest pair of points. That is when we need this linear-time subroutine, and only then, only in the unlucky case where the closest pair of points happens to be split. Now, that's in some sense a fairly trivial observation, but there's a lot of ingenuity here in figuring out how to use that observation. The fact that we

only need to solve a strictly easier problem is what will enable the linear-time implementation that I'm going to show you next.

So now, let's rewrite the high-level recursive algorithm slightly to make use of this observation that the closest split pair subroutine only has to operate correctly in the regime of the unlucky case, when, in fact, the closest split pair is closer than the result of either recursive call. I've erased the previous steps 4 and 5, but we're going to rewrite them in a second. Before we invoke closest split pair, what we're going to do is see how well our recursive calls did. That is, we're going to define a parameter, little delta, which is going to be the distance of the closest pair found by either recursive call: the minimum of the distance between P1 and Q1, the closest pair that lies entirely on the left, and the distance between P2 and Q2, the closest pair that lies entirely on the right. Now, we're going to pass this delta information as a parameter into our closest split pair subroutine. We're going to have to see why on earth that would be useful, and I still owe you that information, but for now, we're just going to pass delta as a parameter for use in closest split pair. And then, as before, we just do a comparison between the three candidate closest pairs and return the best of the trio.

And so, just so we're all clear on where things stand, what remains is to describe the implementation of closest split pair, and before I describe it, let me just be crystal clear on what it is that we're going to demand of the subroutine. What do we need to have a correct O(n log n) time closest pair algorithm? Well, as you saw on the quiz, we want the running time to be O(n) always. And for correctness, what do we need? Again, we don't need it to always compute the closest split pair, but we need it to compute the closest split pair in the event that there is a split pair of distance strictly less than delta, strictly better than the outcome of either recursive call.

So now that we're clear on what we want, let's go ahead and go through the pseudocode for this closest split pair subroutine. And I'm going to tell you upfront, it's going to be fairly straightforward to figure out that the subroutine runs in linear time, O(n) time. The correctness requirement of closest split pair will be highly non-obvious. In fact, after I show you this pseudocode, you're not going to believe me. You're going to look at the pseudocode and be like, what are you talking about? But in the second video on the closest pair lecture, we will in fact show that this is a correct subroutine.

So, how does it work? Well, let's look at a point set. The first thing we're going to do is a filtering step. We're going to prune a bunch of the points away and zoom in on a subset of the points. And the subset of the points we're going to look at is those that lie in a vertical strip, which is roughly centered in the middle of the point set. Here's what I mean by centered: we're going to look at the middle x coordinate. So let x-bar be the biggest x coordinate in the left half; that is, in the sorted version of the points by x coordinate, we look at the n/2 smallest x coordinates and take the largest of them. In this example, where we have six points, all this means is we imagine drawing a line through the third point from the left; x-bar is the x coordinate of that third point. Now, since we're passed as input a copy of the points sorted by x coordinate, we can figure out what x-bar is in constant time.

Just by accessing the relevant entry of the array Px. Now, how are we going to use this parameter delta that we're passed? Remember what delta is: before we invoke the closest split pair subroutine in the recursive algorithm, we make our two recursive calls, we find the closest pair on the left and the closest pair on the right, and delta is the smaller of those two distances. So delta is the parameter that controls whether or not we actually care about the closest split pair; we care if and only if there is a split pair at distance less than delta. So, how do we use delta? Well, delta is going to determine the width of our strip. The strip is going to have width 2 delta, and it's going to be centered around x-bar. And the first thing we're going to do is ignore, forevermore, points which do not lie in this vertical strip.
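In code, the filtering step might look like this (my own Python sketch, not the lecture's code; `px` and `py` are the full point set sorted by x and by y, as above):

```python
def filter_strip(px, py, delta):
    # x-bar: the largest x coordinate in the left half of the points,
    # read off the x-sorted array in constant time.
    x_bar = px[len(px) // 2 - 1][0]
    # Keep only points whose x coordinate lies in [x_bar - delta, x_bar + delta].
    # Scanning py preserves the sort by y coordinate and takes linear time.
    return [p for p in py if x_bar - delta <= p[0] <= x_bar + delta]
```

The returned list is the set Sy described next: the points of the strip, sorted by y coordinate.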

So the rest of the algorithm will operate only on the subset of P, the subset of the points, that lies in the strip, and we're going to keep track of those points sorted by y coordinate. The formal way to say that they lie in the strip is that they have x coordinate in the interval with lower endpoint x-bar minus delta and upper endpoint x-bar plus delta. Now, how long does it take to construct this set Sy, sorted by y coordinate? Well, fortunately, we've been passed as input a sorted version of the points, Py. So to extract Sy from Py, all we need to do is a simple linear scan through Py, checking for each point what its x coordinate is. So this can be done in linear time.

Now, I haven't yet shown you why it's useful to have this sorted set Sy, but if you take it on faith that it's useful to have the points in this vertical strip sorted by y coordinate, you now see why it was useful that we did this merge sort all the way at the beginning of the algorithm, before we even underwent any recursion. Remember, what is our running time goal for closest split pair? We want it to run in linear time, and that means we cannot sort inside the closest split pair subroutine; that would take too long. Fortunately, since we sorted once and for all at the beginning of the closest pair algorithm, extracting sorted sublists from those sorted lists of points can be done in linear time, which is within our goals here.

Now, it's the rest of the subroutine where you're never going to believe me that it does anything useful. I claim that, essentially with a linear scan through Sy, we're going to be able to identify the closest split pair of points in the interesting, unlucky case where there is such a split pair with distance less than delta. So here's what I mean by that linear scan through Sy. As we do the scan, we're going to keep track of the closest pair of points of a particular type that we've seen so far.

So, let me introduce some variables to keep track of the best candidate we've seen so far. There's going to be a variable, best, which we'll initialize to be delta. Remember, we're uninterested in split pairs unless they have distance strictly less than delta. And then we're going to keep track of the points themselves, so we'll initialize the best pair to be null. Now, here is the linear scan. We go through the points of Sy in order of y coordinate. Okay, well, not quite all the points of Sy: we stop at the eighth-to-last point, and you'll see why in a second. And then, for each position i of the array Sy, we investigate the seven subsequent points of the same array Sy. So for j going from one to seven, we look at the i-th and the (i+j)-th entries of Sy. So if Sy looks something like this array here, at any given point in this double for loop, we're generally looking at an index i, a point somewhere in the array, and then some really quite nearby point in the array, i+j, because j here is going to be at most seven. Okay? So we're constantly looking at pairs in this array, but we're not looking at all pairs at all.
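The scan just described might be sketched like this (my own Python rendering of the lecture's pseudocode, including the comparison-and-update step it goes on to describe; the `min(...)` guard on the inner loop just keeps indices in bounds near the end of the array):

```python
import math

def closest_split_pair_scan(sy, delta):
    # sy: the points of the vertical strip, sorted by y coordinate.
    # Returns the best pair at distance strictly less than delta among
    # pairs at most seven positions apart in sy, or None if there is none.
    best = delta
    best_pair = None
    for i in range(len(sy) - 1):
        for j in range(i + 1, min(i + 8, len(sy))):  # at most 7 neighbors
            p, q = sy[i], sy[j]
            d = math.hypot(p[0] - q[0], p[1] - q[1])
            if d < best:             # strictly better than best seen so far
                best, best_pair = d, (p, q)
    return best_pair
```

Returning `None` corresponds to the subroutine finding no split pair at distance below delta, the case discussed next.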

We're only looking at pairs that are very close to each other, within seven positions of each other. And what do we do for each choice of i and j? Well, we just look at those points, we compute the distance, we see if it's better than all of the pairs of points of this form that we've looked at in the past, and if it is better, then we remember it. So we just remember the best, that is, the closest, pair of points of this particular type, for choices of i and j of this form. In more detail, if the distance between the current pair of points p and q is better than the best we've seen so far, we reset the best pair of points to be p and q, and we reset the best distance, the closest distance seen so far, to be the distance between p and q. And that's it. Then, once this double for loop terminates, we just return the best pair.

So one possible execution of closest split pair is that it never finds a pair of points p and q at distance less than delta. In that case, it's going to return null, and then, in the outer call, in closest pair, you obviously interpret a null pair of points as having infinite distance. So if you call closest split pair and it doesn't return any points, then the interpretation is that there's no interesting split pair of points, and you just return the better of the results of the two recursive calls, P1, Q1 or P2, Q2. Now,

as far as the running time of the subroutine, what happens here? Well, we do constant work just initializing the variables. Then notice that the number of points in Sy, in the worst case, is all of the points of P. So it's going to be at most n points, and so we do a linear number of iterations in the outer for loop. But here is the key point: in the inner for loop, right, normally double for loops give rise to quadratic running times, but in this inner for loop we only look at a constant number of other positions. We only look at seven other positions, and for each of those seven positions, we only do a constant amount of work. Right? We just compute a distance, make a couple of other comparisons, and reset some variables. So for each of the linear number of outer iterations, we do a constant amount of work, and that gives us a running time of O(n) for this part of the algorithm. So, as I promised, analyzing the running time of this closest split pair subroutine was not challenging. We just, in a straightforward way, looked at all the operations. Again, because in the key linear scan we only do constant work per index, the overall running time is big O of n, just as we wanted. So this does mean that our overall recursive algorithm will

have running time O(n log n). What is totally not obvious, and perhaps even unbelievable, is that this subroutine satisfies the correctness requirement that we wanted. Remember what we needed: we needed that whenever we're in the unlucky case, whenever, in fact, the closest pair of points in the whole point set is split, this subroutine had better find it. But it does, and that's made precise in the following correctness claim. Let me phrase the claim in terms of an arbitrary split pair which has distance less than delta, not necessarily the closest such pair. So suppose there exists a point p on the left side and a point q on the right side, so that they form a split pair, and suppose the distance of this pair is less than delta. Now, there may or may not be such a pair of points p, q. Don't forget what this parameter delta means. What delta is, by definition, is the minimum of d(p1, q1), where p1, q1 is the closest pair of points that lie entirely in the left half of the point set, Q, and d(p2, q2), where, similarly, p2, q2 is the closest pair of points that lie entirely in the right half, R. So, if there's a split pair with distance less than delta, this is exactly the unlucky case of the algorithm. This is exactly where neither recursive call successfully identifies the closest pair of points; instead, that closest pair is a split pair.

On the other hand, if we are in the lucky case, then there will not be any split pairs with distance less than delta, because the closest pair lies either all on the left or all on the right, and it's not split. But remember, we're interested in the case where there is a split pair that has distance less than delta, where there is a split pair that is the closest pair. So the claim has two parts. The first part, part A, says the following: if there's a split pair p and q of this type, then p and q are members of Sy. And let me just redraw the cartoon. Remember what Sy is: Sy is that vertical strip. And again, the way we got that is we drew a line through a median x coordinate, fattened it by delta on either side, and focused only on the points that lie in the vertical strip. Now, notice that our closest split pair subroutine, if it ever returns a pair of points, is going to return a pair of points p, q that belong to Sy.

First, it filters down to Sy, then it does a linear search through Sy. So if we want to believe that our subroutine identifies the best split pairs of points, then, in particular, such split pairs of points had better show up in Sy; they had better survive the filtering step. That's precisely what part A of the claim says. Here's part B of the claim, and this is the more remarkable part: p and q are almost next to each other in this sorted array Sy. They're not necessarily adjacent, but they're very close; they're within seven positions of each other. This is really the remarkable part of the algorithm. This is really what's surprising and what makes the whole algorithm work.

So, just to make sure that we're all clear on everything, let's show that if we prove this claim, then we're done; then we have a correct, fast implementation of a closest pair algorithm. I certainly owe you the proof of the claim, and that's what the next video is going to be all about, but let's show that if the claim is true, then we're home free. So if this claim is true, then so is the following corollary, which I'll call corollary 1. Corollary 1 says: if we're in the unlucky case that we discussed earlier, if we're in the case where the closest pair in the whole point set P does not lie both on the left or both on the right, but rather has one point on the left and one on the right, that is, it's a split pair, then in fact the closest split pair subroutine will correctly identify the closest split pair, and therefore the closest pair overall.

Why is this true? Well, what does closest split pair do? It has this double for loop, and it thereby explicitly examines a bunch of pairs of points, and it remembers the closest of all of the pairs of points that it examines. So what are the criteria necessary for closest split pair to examine a pair of points? Well, first of all, the points p and q both have to survive the filtering step and make it into the array Sy, right? Closest split pair only searches over the array Sy. Secondly, it only searches over pairs of points that are almost adjacent in Sy, that are only seven positions apart. But amongst pairs of points that satisfy those two criteria, closest split pair will certainly compute the closest such pair, right? It just explicitly remembers the best of them. Now, what's the content of the claim?

Well, the claim is guaranteeing that every potentially interesting split pair of points, every split pair of points with distance less than delta, meets both of the criteria which are necessary to be examined by the closest split pair subroutine. First of all, and this is the content of part A, if you have an interesting split pair of points with distance less than delta, then both points survive the filtering step; they both make it into the array Sy. Part A says that. Part B says they're almost adjacent in Sy: if you have an interesting split pair of points, meaning one with distance less than delta, then the two points will, in fact, be at most seven positions apart. Therefore, closest split pair will examine all such split pairs, all split pairs with distance less than delta, and just by construction, it will compute the closest pair among all of them. So again, in the unlucky case, where the best pair of points is a split pair, this claim guarantees that closest split pair will compute the closest pair of points.
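Before seeing the proof, one way to build confidence in this claim is to check the assembled algorithm against brute force on random inputs. This is my own self-contained Python sketch of the whole algorithm as described in the lecture (names and structure are mine, not the lecturer's pseudocode), assuming distinct points:

```python
import math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def brute_force(pts):
    return min(((p, q) for i, p in enumerate(pts) for q in pts[i + 1:]),
               key=lambda pair: dist(*pair))

def closest_split_pair(px, py, delta):
    # Filter to the vertical strip of width 2*delta around x-bar,
    # then scan pairs at most seven positions apart in y order.
    x_bar = px[len(px) // 2 - 1][0]
    sy = [p for p in py if x_bar - delta <= p[0] <= x_bar + delta]
    best, best_pair = delta, None
    for i in range(len(sy) - 1):
        for j in range(i + 1, min(i + 8, len(sy))):
            d = dist(sy[i], sy[j])
            if d < best:
                best, best_pair = d, (sy[i], sy[j])
    return best_pair

def closest_pair(px, py):
    if len(px) <= 3:
        return brute_force(px)
    n = len(px)
    qx, rx = px[:n // 2], px[n // 2:]
    left = set(qx)
    qy = [p for p in py if p in left]
    ry = [p for p in py if p not in left]
    pair1 = closest_pair(qx, qy)                  # best pair inside Q
    pair2 = closest_pair(rx, ry)                  # best pair inside R
    delta = min(dist(*pair1), dist(*pair2))
    pair3 = closest_split_pair(px, py, delta)     # best split pair, if < delta
    best = pair1 if dist(*pair1) <= dist(*pair2) else pair2
    if pair3 is not None and dist(*pair3) < dist(*best):
        best = pair3
    return best

# Empirical check of the claim: agree with brute force on random point sets.
random.seed(0)
for _ in range(200):
    pts = [(random.random(), random.random()) for _ in range(30)]
    px, py = sorted(pts), sorted(pts, key=lambda p: p[1])
    assert dist(*closest_pair(px, py)) == dist(*brute_force(pts))
```

Random tests are of course no substitute for the proof in the next video, but a disagreement here would immediately falsify the seven-position claim.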

Therefore, having handled correctness, we can just combine that with our earlier observations about running time, and corollary 2 says: if we can prove the claim, then we have everything we wanted. We have a correct O(n log n) implementation for closest pair of points. So with further work and a lot more ingenuity, we've replicated the guarantee that we got just by sorting in the one-dimensional case. Now, again, these corollaries hold only if this claim is, in fact, true, and I have given you no justification for the claim. Even the statement of the claim, I think, is a little bit shocking. So if I were you, I would demand an explanation for why this claim is true, and that's what I'm going to give you in the next video.
