Before I give the high-level description, I should mention that this algorithm was

discovered by Tony Hoare in roughly 1961. This was at the very beginning of

Hoare's career. He was just 26 or 27 years old. He went on to make many other

contributions, and eventually wound up winning the highest honor in computer

science, the ACM Turing Award, in 1980. And when you see this code, I'll bet you

feel like you wish you had come up with this yourself. It's hard not to be envious

of the inventor of this very elegant QuickSort algorithm. So, just like in Merge Sort,

this is gonna be a divide-and-conquer algorithm. So it takes an array of some

length N, and if the array has length 0 or 1, it's already sorted; that's the

base case and we can return. Otherwise we're gonna have two recursive calls. The

big difference from Merge Sort is that, whereas in Merge Sort, we first split the

array into pieces, recurse, and then combine the results, here the recursive

calls come last. So, the first thing we're going to do is choose a pivot element,

then partition the array around that pivot element, and then do two recursive calls.
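In code, those three steps might look like the following Python sketch. This is my own illustration, not the course's pseudocode: the naive first-element pivot choice is just a placeholder for ChoosePivot, and the Partition helper here is a simple linear scan, not necessarily the implementation the course presents later.

```python
def quicksort(a, left, right):
    # Base case: a subarray of length 0 or 1 is already sorted.
    if right - left < 1:
        return
    # Step 1: choose a pivot (naively, the first element, as a placeholder).
    pivot_index = left
    # Step 2: partition a[left..right] around the pivot; afterwards the
    # pivot sits at position p, with smaller elements to its left and
    # larger elements to its right.
    p = partition(a, left, right, pivot_index)
    # Step 3: two recursive calls, one per side; no combine step follows.
    quicksort(a, left, p - 1)
    quicksort(a, p + 1, right)

def partition(a, left, right, pivot_index):
    # One simple linear-time, in-place partition (a Lomuto-style scan),
    # shown here only to make the sketch runnable.
    a[left], a[pivot_index] = a[pivot_index], a[left]
    pivot = a[left]
    i = left + 1  # invariant: a[left+1 .. i-1] are all less than the pivot
    for j in range(left + 1, right + 1):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[left], a[i - 1] = a[i - 1], a[left]  # move pivot to its rightful spot
    return i - 1
```

For example, calling `quicksort(xs, 0, len(xs) - 1)` on `xs = [3, 8, 2, 5, 1, 4, 7, 6]` sorts the list in place.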

And then, we'll be done. There will be no combine step, no merge step. So in the

general case, the first thing you do is choose a pivot element. For the moment I'm

going to be vague and leave the ChoosePivot subroutine unimplemented. There's going to

be an interesting discussion about exactly how you should do this. For now, you just

do it in some way, so that somehow you come up with one pivot element. For

example, a naive way would be to just choose the first element. Then you invoke

the Partition subroutine that we'll discuss in the last couple slides. So

recall that Partition results in a version of the array in which the pivot element p is

in its rightful position, everything to the left of p is less than p, everything

to the right of the pivot is bigger than the pivot, and then all you have to do to

finish up is recurse on both sides. So let's call the elements less than p the

first part of the partitioned array, and the elements greater than p the second

part of the partitioned array. And now we just call QuickSort again to

recursively sort the first part, and then recursively sort the second part. And

that is it. That is the entire QuickSort algorithm at the high-level. This is one

of the relatively rare recursive, divide-and-conquer algorithms that you're going

to see, where you literally do no work after solving the sub-problems. There is

no combine step, no merge step. Once you've partitioned, you just sort the two

sides and you're done. So that's the high-level description of the QuickSort

algorithm. Let me give you a quick tour of what the rest of the video's going to be

about. So first of all I owe you details on this Partition subroutine. I promise

you it can be implemented in linear time with no additional memory. So I'll show

you an implementation of that in the next video. We'll have a short video that

formally proves correctness of the QuickSort algorithm. I think most of you

will kinda see intuitively why it's correct. So, that's a video you can skip

if you want. But if you do want to see what a formal proof of correctness for a

divide-and-conquer algorithm looks like, you might want to check out that video.

Then, we'll be discussing exactly how the pivot is chosen. It turns out the running

time of QuickSort depends on what pivot you choose. So, we're gonna have to think

carefully about that. Then, we'll introduce randomized QuickSort, which is

where you choose a pivot element uniformly at random from the given array, hoping

that a random pivot is going to be pretty good, sufficiently often. And then we'll

give the mathematical analysis in three parts. We'll prove that the QuickSort algorithm runs in

O(N log N) time, with small constants, on average, for a randomly chosen pivot. In

the first analysis video, I'll introduce a general decomposition principle of how you

take a complicated random variable, break it into indicator random variables, and

use linearity of expectation to get a relatively simple analysis. That's

something we'll use a couple more times in the course. For example, when we study

hashing. Then, we'll discuss sort of the key insight behind the QuickSort analysis,

which is about understanding the probability that a given pair of elements

gets compared at some point in the algorithm. That'll be the second part. And

then there's going to be some mathematical computations just to sort of tie

everything together, and that will give us the bound on the QuickSort running time.
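As a concrete preview of this decomposition principle, here is a small sanity check of my own, not from the lecture. A standard result of the QuickSort analysis is that the i-th and j-th smallest elements get compared with probability 2/(j - i + 1); by linearity of expectation, summing this over all pairs gives the expected number of comparisons. For arrays of length 6 we can verify that exactly, by averaging the comparison count of a first-element-pivot QuickSort over all 720 input orderings (a uniformly random input ordering plays the role of a random pivot here).

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def quicksort_comparisons(a):
    """Sort a in place with first-element pivots; return the comparison count."""
    def sort(left, right):
        if right <= left:
            return 0
        comps = right - left  # the pivot is compared once to each other element
        pivot = a[left]
        i = left + 1  # boundary of the "less than pivot" region
        for j in range(left + 1, right + 1):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[left], a[i - 1] = a[i - 1], a[left]  # pivot into its rightful position
        return comps + sort(left, i - 2) + sort(i, right)

    return sort(0, len(a) - 1)

n = 6
total = sum(quicksort_comparisons(list(p)) for p in permutations(range(n)))
average = Fraction(total, factorial(n))

# Decomposition principle: E[#comparisons] = sum over pairs i < j of
# Pr[i-th and j-th smallest elements ever compared] = 2 / (j - i + 1).
expected = sum(Fraction(2, j - i + 1) for i in range(n) for j in range(i + 1, n))
assert average == expected
```

Exact rational arithmetic via `Fraction` lets the empirical average match the predicted expectation exactly rather than approximately.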

Another video that's available is a review of some basic probability concepts for

those of you who are rusty; we'll be using them in the analysis of QuickSort. Okay?

So that's it for the overview, let's move on to the details.