Big O Notation

Using not-boring math to measure code's efficiency


The idea behind big O notation

Big O notation is the language we use for talking about how long an algorithm takes to run. It's how we compare the efficiency of different approaches to a problem.

It's like math except it's an awesome, not-boring kind of math where you get to wave your hands through the details and just focus on what's basically happening.

With big O notation we express the runtime in terms of—brace yourself—how quickly it grows relative to the input, as the input gets arbitrarily large.

Let's break that down:

  1. how quickly the runtime grows—It's hard to pin down the exact runtime of an algorithm. It depends on the speed of the processor, what else the computer is running, etc. So instead of talking about the runtime directly, we use big O notation to talk about how quickly the runtime grows.
  2. relative to the input—If we were measuring our runtime directly, we could express our speed in seconds. Since we're measuring how quickly our runtime grows, we need to express our speed in terms of...something else. With big O notation, we use the size of the input, which we call "n." So we can say things like the runtime grows "on the order of the size of the input" (O(n)) or "on the order of the square of the size of the input" (O(n^2)).
  3. as the input gets arbitrarily large—Our algorithm may have steps that seem expensive when n is small but are eclipsed eventually by other steps as n gets huge. For big O analysis, we care most about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as n gets very large. (If you know what an asymptote is, you might see why "big O analysis" is sometimes called "asymptotic analysis.")

If this seems abstract so far, that's because it is. Let's look at some examples.

Some examples

public static void printFirstItem(int[] items) {
    System.out.println(items[0]);
}

This method runs in O(1) time (or "constant time") relative to its input. The input array could be 1 item or 1,000 items, but this method would still just require one "step."

public static void printAllItems(int[] items) {
    for (int item : items) {
        System.out.println(item);
    }
}

This method runs in O(n) time (or "linear time"), where n is the number of items in the array. If the array has 10 items, we have to print 10 times. If it has 1,000 items, we have to print 1,000 times.

public static void printAllPossibleOrderedPairs(int[] items) {
    for (int firstItem : items) {
        for (int secondItem : items) {
            System.out.println(firstItem + ", " + secondItem);
        }
    }
}

Here we're nesting two loops. If our array has n items, our outer loop runs n times and our inner loop runs n times for each iteration of the outer loop, giving us n^2 total prints. Thus this method runs in O(n^2) time (or "quadratic time"). If the array has 10 items, we have to print 100 times. If it has 1,000 items, we have to print 1,000,000 times.

N could be the actual input, or the size of the input

Both of these methods have O(n) runtime, even though one takes an integer as its input and the other takes an array:

public static void sayHiNTimes(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println("hi");
    }
}

public static void printAllItems(int[] items) {
    for (int item : items) {
        System.out.println(item);
    }
}

So sometimes n is an actual number that's an input to our method, and other times n is the number of items in an input array (or an input map, or an input object, etc.).

Drop the constants

This is why big O notation rules. When you're calculating the big O complexity of something, you just throw out the constants. So like:

public static void printAllItemsTwice(int[] items) {
    for (int item : items) {
        System.out.println(item);
    }

    // once more, with feeling
    for (int item : items) {
        System.out.println(item);
    }
}

This is O(2n), which we just call O(n).

public static void printFirstItemThenFirstHalfThenSayHi100Times(int[] items) {
    System.out.println(items[0]);

    int middleIndex = items.length / 2;
    int index = 0;
    while (index < middleIndex) {
        System.out.println(items[index]);
        index++;
    }

    for (int i = 0; i < 100; i++) {
        System.out.println("hi");
    }
}

This is O(1 + n/2 + 100), which we just call O(n).

Why can we get away with this? Remember, for big O notation we're looking at what happens as n gets arbitrarily large. As n gets really big, adding 100 or dividing by 2 has a decreasingly significant effect.
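
To make that concrete, here's a minimal sketch (my own illustration, not one of the article's examples) that prints n alongside 2n and n/2 + 100 for a few values of n. The constant factors never change the basic shape of the growth:

public class GrowthComparison {
    public static void main(String[] args) {
        // As n grows, 2n and n/2 + 100 stay within a simple constant factor
        // of n itself -- the shape of the growth is the same.
        int[] sizes = {10, 1_000, 1_000_000};
        for (int n : sizes) {
            System.out.println("n = " + n
                + "  |  2n = " + (2L * n)
                + "  |  n/2 + 100 = " + (n / 2 + 100));
        }
    }
}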


Drop the less significant terms

For example:

public static void printAllNumbersThenAllPairSums(int[] numbers) {
    System.out.println("these are the numbers:");
    for (int number : numbers) {
        System.out.println(number);
    }

    System.out.println("and these are their sums:");
    for (int firstNumber : numbers) {
        for (int secondNumber : numbers) {
            System.out.println(firstNumber + secondNumber);
        }
    }
}

Here our runtime is O(n + n^2), which we just call O(n^2). Even if it was O(n^2/2 + 100n), it would still be O(n^2).

Similarly:

  • O(n^3 + 50n^2 + 10000) is O(n^3)
  • O((n + 30) * (n + 5)) is O(n^2)

Again, we can get away with this because the less significant terms quickly become, well, less significant as n gets big.
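
To see just how quickly, here's a small sketch (my own numbers, not from the article) comparing an n^2 term against a 100n term as n grows; the lower-order term soon becomes a rounding error:

public class DropLesserTerms {
    public static void main(String[] args) {
        // At small n, 100n can dwarf n^2; as n grows, n^2 takes over completely.
        long[] sizes = {10, 1_000, 100_000};
        for (long n : sizes) {
            long quadratic = n * n;   // the n^2 term
            long linear = 100 * n;    // the 100n term
            System.out.printf("n = %,d: n^2 = %,d, 100n = %,d%n", n, quadratic, linear);
        }
    }
}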

We're usually talking about the "worst case"

Often this "worst case" stipulation is implied. But sometimes you can impress your interviewer by saying it explicitly.

Sometimes the worst case runtime is significantly worse than the best case runtime:

public static boolean contains(int[] haystack, int needle) {
    // does the haystack contain the needle?
    for (int n : haystack) {
        if (n == needle) {
            return true;
        }
    }
    return false;
}

Here we might have 100 items in our haystack, but the first item might be the needle, in which case we would return in just 1 iteration of our loop.

In general we'd say this is O(n) runtime and the "worst case" part would be implied. But to be more specific we could say this is worst case O(n) and best case O(1) runtime. For some algorithms we can also make rigorous statements about the "average case" runtime.
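
As a quick illustration (a usage sketch I've added; the specific haystack values are made up), the same contains method hits its best or worst case depending on where, or whether, the needle shows up:

public class ContainsDemo {
    public static void main(String[] args) {
        int[] haystack = {7, 3, 9, 2};

        // Best case: the needle is the very first item, so we return after 1 iteration.
        System.out.println(contains(haystack, 7));   // true

        // Worst case: the needle isn't there at all, so we scan all n items.
        System.out.println(contains(haystack, 42));  // false
    }

    // Same contains method as above, repeated so this sketch is self-contained.
    public static boolean contains(int[] haystack, int needle) {
        for (int n : haystack) {
            if (n == needle) {
                return true;
            }
        }
        return false;
    }
}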

Space complexity: the final frontier

Sometimes we want to optimize for using less memory instead of (or in addition to) using less time. Talking about memory cost (or "space complexity") is very similar to talking about time cost. We simply look at the total size (relative to the size of the input) of any new variables we're allocating.

This method takes O(1) space (we use a fixed number of variables):

public static void sayHiNTimes(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println("hi");
    }
}

This method takes O(n) space (the size of hiArray scales with the size of the input):

public static String[] arrayOfHiNTimes(int n) {
    String[] hiArray = new String[n];
    for (int i = 0; i < n; i++) {
        hiArray[i] = "hi";
    }
    return hiArray;
}

Usually when we talk about space complexity, we're talking about additional space, so we don't include space taken up by the inputs. For example, this method takes constant space even though the input has n items:

public static int getLargestItem(int[] items) {
    int largest = Integer.MIN_VALUE;
    for (int item : items) {
        if (item > largest) {
            largest = item;
        }
    }
    return largest;
}

Sometimes there's a tradeoff between saving time and saving space, so you have to decide which one you're optimizing for.
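
One classic example of that tradeoff (my own sketch, not from the article): checking an array for duplicates. Nested loops use O(n^2) time but only O(1) extra space; tracking seen items in a HashSet drops the time to O(n) but costs O(n) extra space.

import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {

    // O(n^2) time, O(1) extra space: compare every pair of items.
    public static boolean hasDuplicateSlow(int[] items) {
        for (int i = 0; i < items.length; i++) {
            for (int j = i + 1; j < items.length; j++) {
                if (items[i] == items[j]) {
                    return true;
                }
            }
        }
        return false;
    }

    // O(n) time, O(n) extra space: remember every item we've seen so far.
    public static boolean hasDuplicateFast(int[] items) {
        Set<Integer> seen = new HashSet<>();
        for (int item : items) {
            if (!seen.add(item)) {  // add() returns false if the item was already present
                return true;
            }
        }
        return false;
    }
}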

Big O analysis is awesome except when it's not

You should make a habit of thinking about the time and space complexity of algorithms as you design them. Before long this'll become second nature, allowing you to see optimizations and potential performance issues right away.

Asymptotic analysis is a powerful tool, but wield it wisely.

Big O ignores constants, but sometimes the constants matter. If we have a script that takes 5 hours to run, an optimization that divides the runtime by 5 might not affect big O, but it still saves you 4 hours of waiting.

Beware of premature optimization. Sometimes optimizing time or space negatively impacts readability or coding time. For a young startup it might be more important to write code that's easy to ship quickly or easy to understand later, even if this means it's less time and space efficient than it could be.

But that doesn't mean startups don't care about big O analysis. A great engineer (startup or otherwise) knows how to strike the right balance between runtime, space, implementation time, maintainability, and readability.

You should develop the skill to see time and space optimizations, as well as the wisdom to judge if those optimizations are worthwhile.

Don't miss part 2: logarithms, amortized analysis, and more.
