It depends. Each algorithm comes with its own set of pros and cons.
Quicksort is a good default choice. It tends
to be fast in practice, and with some small tweaks its
dreaded O(n²) worst-case time complexity
becomes very unlikely. A tried and true favorite.
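One of those small tweaks is picking the pivot at random, which makes the O(n²) worst case vanishingly unlikely on any input. A minimal sketch (not in-place, for clarity):

```python
import random

def quicksort(items):
    if len(items) <= 1:
        return items
    # A random pivot defends against adversarial (e.g. already
    # sorted) inputs that would otherwise trigger O(n^2) behavior.
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

Production quicksorts partition in place instead of building new lists, but the pivot-selection idea is the same.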
Heapsort is a good choice if you can't tolerate
a worst-case time complexity of O(n²) or
need low space costs. The Linux kernel
uses heapsort instead of quicksort
for both of those reasons.
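To make those space savings concrete, here's a sketch of heapsort done entirely in place: it builds a max-heap inside the input list, then repeatedly swaps the max to the end. O(n lg n) worst case, O(1) extra space.

```python
def heapsort(items):
    n = len(items)

    def sift_down(root, end):
        # Restore the max-heap property from `root` down to `end`.
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and items[child] < items[child + 1]:
                child += 1  # pick the larger child
            if items[root] < items[child]:
                items[root], items[child] = items[child], items[root]
                root = child
            else:
                return

    # Build the heap, then pull the max off one item at a time.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        items[0], items[end] = items[end], items[0]
        sift_down(0, end - 1)
```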
Merge sort is a good choice if you want
a stable sorting algorithm. Also, merge sort can
easily be extended to handle data sets that can't fit in RAM,
where the bottleneck cost is reading and writing the input on
disk, not comparing and swapping individual items.
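Stability falls out of one detail in the merge step: when two items compare equal, take the one from the left half first, preserving the original order. A minimal sketch:

```python
def merge_sort(items, key=lambda x: x):
    if len(items) <= 1:
        return items[:]
    mid = len(items) // 2
    left = merge_sort(items[:mid], key)
    right = merge_sort(items[mid:], key)

    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        # "<=" (not "<") keeps equal items in their original
        # order -- that's what makes this sort stable.
        if key(left[i]) <= key(right[j]):
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The external (on-disk) variant merges sorted runs from files instead of lists, but the merge logic is identical.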
Radix sort looks fast, with its O(n)
worst-case time complexity. But,
if you're using it to sort binary numbers, then there's a
hidden constant factor that's usually 32 or 64 (depending on
how many bits your numbers are). That's often way
bigger than lg(n), meaning radix
sort tends to be slow in practice.
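You can see that constant factor directly in a least-significant-digit radix sort: sorting 32-bit numbers one byte at a time means 32 / 8 = 4 full passes over the input, no matter how small n is. A sketch for non-negative integers:

```python
def radix_sort(nums, bits=32, radix_bits=8):
    # LSD radix sort. The outer loop runs bits / radix_bits times
    # (4 passes for 32-bit numbers, 8 for 64-bit) -- that's the
    # "hidden constant" in the O(n) bound.
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for num in nums:
            buckets[(num >> shift) & mask].append(num)
        # Appending bucket by bucket preserves order within each
        # bucket, keeping the sort stable pass over pass.
        nums = [num for bucket in buckets for num in bucket]
    return nums
```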
Counting sort is a good choice in scenarios
where there's a small number of distinct values to be sorted.
This is pretty rare in practice, and counting sort doesn't get much use.
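When it does fit, counting sort skips comparisons entirely: tally how many times each value appears, then write the values back out in order. A minimal sketch for small non-negative integers:

```python
def counting_sort(items, max_value):
    # O(n + k) time and O(k) extra space, where k = max_value.
    # Only practical when k is small relative to n.
    counts = [0] * (max_value + 1)
    for item in items:
        counts[item] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result
```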
Each sorting algorithm has tradeoffs. You can't have it all.
So you have to know what's important in the problem
you're working on. How large is your input? How many distinct
values are in your input? How much space overhead is acceptable?
Can you afford an occasional O(n²) worst case?
Once you know what's important, you can pick the sorting algorithm
that does it best. Being able to compare different algorithms and
weigh their pros and cons is the mark of a strong computer
programmer and a definite plus when interviewing.