| Case | Time |
| --- | --- |
| Worst case | O(n²) |
| Best case | O(n lg n) |
| Average case | O(n lg n) |
- Fast. On average, quicksort runs in O(n lg n) time, which scales well as n grows.
- Parallelizable. Quicksort divides the input into two sections, each of which can be sorted at the same time in parallel.
- Slow Worst-Case. In the worst case, quicksort can take O(n²) time. While this isn't common, it makes quicksort undesirable in cases where any slow performance is unacceptable.
One such case is the Linux kernel, which uses heapsort instead of quicksort so viruses and hackers can't exploit the slow worst-case time of sorting to tie up a machine's processor (and because heapsort has a lower space cost).
The High-Level Idea
Quicksort works by dividing the input into two smaller lists: one with small items and the other with large items. Then it recursively sorts both of those smaller lists.
Partitioning Around a Pivot
First, we grab the last item in the list. We'll call this item the pivot.
There are other ways to choose the pivot, but this is one of the simplest.
Next, we'll scoot things around until everything less than the pivot is to the left of the pivot, and everything greater than the pivot is to the right.
As an example, take this input list: [0, 8, 1, 2, 7, 9, 3, 4]
The 4 at the end will be our pivot.
So we want to get everything smaller than 4 to the left of 4 and everything larger to the right.
We call this "partitioning," since we're dividing the input list into two parts (a smaller-than-the-pivot part and a larger-than-the-pivot part).
Here's how we do it:
The idea is to walk through the left-hand side of the list looking for items that don't belong there (because they're bigger than the pivot), and to do the same on the right-hand side, looking for items smaller than the pivot.
For our sample array:
We'll move two pointers—one on either end of the list—towards the middle. We'll skip over the pivot for now and add it in last.
We'll move the "left" pointer to the right until we get to an item that's larger than the pivot (which means it should go on the right-hand side of the list). 0 is less than 4, so we advance to the next item, 8. That's larger than the pivot and needs to be moved to the right-hand side of the list.
We've found something on the left that belongs on the right. Now we just need to find something on the right that belongs on the left, so we can swap the two of them.
So we'll move the "right" pointer to the left until we get to an item that's smaller than the pivot. 3 is smaller than 4, so we don't need to move the pointer at all.
Time to swap.
And repeat. We start moving the left pointer to the right again, until we find another item that's larger than the pivot. We'll repeat this process until the "left" and "right" pointers cross each other.
The "left" and "right" pointers will never stop on the same item. Do you see why?
The "left" pointer keeps moving until it hits something larger than the pivot. The "right" pointer keeps moving until it hits something less than or equal to the pivot.
If they stopped on the same item, then that item would have to simultaneously be larger than the pivot and smaller than the pivot. That's impossible!
So, we move the left pointer past 1 and 2, finally stopping at 7.
And we move the right pointer past 8, 9, and 7, stopping at 2.
Since "left" and "right" have crossed, we know that all the items smaller than the pivot are before "left" and all the items larger than the pivot are after "right".
Now we just have to put the pivot in the right spot. We can do that by swapping the pivot and the "left" item.
The pivot is now where it needs to be in the final sorted list.
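Here's a sketch of the partition step in Python (a sketch based on the walkthrough above; the function name and details are our own, not necessarily the article's exact code):

```python
def partition(the_list, start, end):
    # Take the last item in the range as the pivot.
    pivot = the_list[end]

    # Two pointers, one on either end, skipping the pivot for now.
    left, right = start, end - 1

    while left <= right:
        # Move "left" rightwards past items that belong on the left
        # (items less than or equal to the pivot).
        while left <= right and the_list[left] <= pivot:
            left += 1
        # Move "right" leftwards past items that belong on the right
        # (items larger than the pivot).
        while left <= right and the_list[right] > pivot:
            right -= 1
        # Found an out-of-place pair? Swap them.
        if left < right:
            the_list[left], the_list[right] = the_list[right], the_list[left]

    # The pointers have crossed. Drop the pivot into its final spot.
    the_list[left], the_list[end] = the_list[end], the_list[left]
    return left  # the pivot's final index
```

On the sample list [0, 8, 1, 2, 7, 9, 3, 4], this swaps 8 with 3, walks the pointers until they cross, and finishes by swapping the pivot into the middle, yielding [0, 3, 1, 2, 4, 9, 8, 7].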
Recursively Sorting the Halves
Now that we've partitioned the input list into two smaller pieces, we can recursively sort each piece.
Just like merge sort, our base case will be a list with one element (or none), since that's already sorted.
Continuing the example above, we've already partitioned the input once, leaving us with two smaller lists: [0, 3, 1, 2] and [9, 8, 7].
Partitioning the sublist on the left:
And, for the other list, we've got:
We'll continue recursing on the smaller remnants ([0, 1], [3], and [8, 9]), ultimately ending up with the fully sorted list: [0, 1, 2, 3, 4, 7, 8, 9].
And, we're done!
Here's how we'd code it up:
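One possible Python implementation, using the last-item pivot and two-pointer partition described above (a sketch, not necessarily the original code):

```python
def quicksort(the_list, start=0, end=None):
    """Sort the_list in place."""
    if end is None:
        end = len(the_list) - 1

    # Base case: zero or one items is already sorted.
    if start >= end:
        return

    # Partition around the last item; pivot_index is its final position.
    pivot_index = partition(the_list, start, end)

    # Recursively sort the items on each side of the pivot.
    quicksort(the_list, start, pivot_index - 1)
    quicksort(the_list, pivot_index + 1, end)


def partition(the_list, start, end):
    pivot = the_list[end]
    left, right = start, end - 1  # skip over the pivot for now

    while left <= right:
        # Walk "left" rightwards past items that belong on the left.
        while left <= right and the_list[left] <= pivot:
            left += 1
        # Walk "right" leftwards past items that belong on the right.
        while left <= right and the_list[right] > pivot:
            right -= 1
        if left < right:
            the_list[left], the_list[right] = the_list[right], the_list[left]

    # Drop the pivot into its final spot.
    the_list[left], the_list[end] = the_list[end], the_list[left]
    return left
```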
Similar to merge sort, we can visualize quicksort's execution as recursively breaking up the input into two smaller pieces until we hit a base case.
Each level in the diagram represents O(n) work, since we're doing a constant amount of work on each element in the list.
How many levels will there be? Assuming we pick a pivot that's close-ish to the median, we'll be dividing the lists roughly in half each time. So, on average, we'll have O(lg n) levels, for a total average-case time complexity of O(n lg n).
However, if we get very unlucky with our pivot, in the worst case it's possible for the complexity to get as high as O(n²).
To see this, suppose the input is already sorted and we always pick our pivot from the end of the list.
Because the pivot's final position is all the way to the right, we end up with an empty right sublist and the whole rest of the list in the left sublist.
This means we end up with O(n) levels of O(n) work each, which costs O(n²) time overall.
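To see this concretely, here's a small instrumented sketch (our own illustration, using the last-item-pivot partition described above) that measures how deep the recursion goes:

```python
def quicksort_max_depth(the_list):
    # Sorts the_list in place and returns the deepest recursion
    # level reached.
    max_depth = 0

    def sort(start, end, depth):
        nonlocal max_depth
        max_depth = max(max_depth, depth)
        if start >= end:
            return

        # Last-item-pivot partition, as in the walkthrough.
        pivot = the_list[end]
        left, right = start, end - 1
        while left <= right:
            while left <= right and the_list[left] <= pivot:
                left += 1
            while left <= right and the_list[right] > pivot:
                right -= 1
            if left < right:
                the_list[left], the_list[right] = the_list[right], the_list[left]
        the_list[left], the_list[end] = the_list[end], the_list[left]

        sort(start, left - 1, depth + 1)
        sort(left + 1, end, depth + 1)

    sort(0, len(the_list) - 1, 1)
    return max_depth
```

On an already-sorted list of 16 items this reaches 16 levels, one per element, since each partition only peels off the pivot. A well-shuffled input stays close to lg n levels instead.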
To avoid this poor worst-case behavior, some implementations shuffle the input before sorting. Others are more strategic about choosing their pivot, performing some heuristics to ensure the pivot is close-ish to the median.
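Both strategies can be sketched in a few lines of Python (the function names are our own, illustrative rather than any particular library's API):

```python
import random


def shuffle_first(the_list):
    # Strategy 1: shuffle the input up front, so no particular
    # input (like an already-sorted list) is reliably slow.
    random.shuffle(the_list)


def choose_pivot_index(the_list, start, end):
    # Strategy 2: median-of-three heuristic. Look at the first,
    # middle, and last items, and pick the index of whichever holds
    # the median value. A sorted input then gets a near-middle pivot
    # instead of the worst possible one.
    mid = (start + end) // 2
    candidates = sorted([(the_list[start], start),
                         (the_list[mid], mid),
                         (the_list[end], end)])
    return candidates[1][1]  # index of the median value
```

With the median-of-three approach, we'd swap the chosen item into the last position before partitioning, then proceed exactly as before.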
Quicksort operates on the input in place, without any extra copies or data structures. However, in the worst case there are O(n) recursive calls, so the call stack alone results in a space complexity of O(n).
That said, with clever optimizations, like recursing on the smaller half first and replacing the tail call on the larger half with a loop, it's possible to keep the call stack shallow, bringing the space complexity down to just O(lg n) even in the worst case.
So, officially, quicksort requires O(lg n) space. In practice, most non-optimized implementations (including ours above) take more than that.