Write a function fib that takes an integer n and returns the nth Fibonacci number.
Let's say our Fibonacci series is 0-indexed and starts with 0. So:

fib(0)  # => 0
fib(1)  # => 1
fib(2)  # => 1
fib(3)  # => 2
fib(4)  # => 3
Our solution runs in O(n) time.
There's a clever, more mathy solution that runs in O(lg n) time, but we'll leave that one as a bonus.
If you wrote a recursive function, think carefully about what it does. It might do repeated work, like computing fib(2) multiple times!
We can do this in O(1) space. If you wrote a recursive function, there might be a hidden space cost in the call stack!
The nth Fibonacci number is defined in terms of the two previous Fibonacci numbers, so this seems to lend itself to recursion.
Can you write up a recursive solution?
As with any recursive function, we just need a base case and a recursive case:
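A minimal sketch of that recursive solution in Python (the guard against negative n is an assumption, not part of the prompt):

```python
def fib(n):
    if n < 0:
        raise ValueError('Fibonacci index cannot be negative: %d' % n)
    elif n in [0, 1]:
        # Base cases: fib(0) = 0 and fib(1) = 1
        return n
    # Recursive case: the sum of the two previous Fibonacci numbers
    return fib(n - 1) + fib(n - 2)
```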
Okay, this'll work! What's our time complexity?
It's not super obvious. We might guess O(n), but that's not quite right. Can you see why?
Each call to fib makes two more calls. Let's look at a specific example. Let's say n=5. If we call fib(5), how many calls do we make in total?
Try drawing it out as a tree where each call has two child calls, unless it's a base case.
Here's what the tree looks like:

fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1)
│   │   │   └── fib(0)
│   │   └── fib(1)
│   └── fib(2)
│       ├── fib(1)
│       └── fib(0)
└── fib(3)
    ├── fib(2)
    │   ├── fib(1)
    │   └── fib(0)
    └── fib(1)
We can notice this is a binary tree whose height is n, which means the total number of nodes is O(2^n).
So our total runtime is O(2^n). That's an "exponential time cost," since the n is in an exponent. Exponential costs are terrible. This is way worse than O(n^2) or even O(n^100).
Our recurrence tree above essentially gets twice as big each time we add 1 to n. So as n gets really big, our runtime quickly spirals out of control.
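To see that blow-up concretely, here's a quick sketch that counts the calls (the counter helper is illustrative, not part of the solution):

```python
def count_fib_calls(n):
    """Count the total number of calls the naive recursive fib makes for input n."""
    calls = [0]

    def fib(n):
        calls[0] += 1  # record this call
        if n in [0, 1]:
            return n
        return fib(n - 1) + fib(n - 2)

    fib(n)
    return calls[0]

# The call count grows by a nearly-constant factor each time we add 1 to n:
for n in range(3, 9):
    print(n, count_fib_calls(n))
```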
The craziness of our time cost comes from the fact that we're doing so much repeat work. How can we avoid doing this repeat work?
We can memoize!
Let's wrap fib in a class with an instance variable where we store the answer for any n that we compute:
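One way to sketch that in Python (the class name Fibber and the dictionary-based memo are illustrative choices):

```python
class Fibber(object):
    def __init__(self):
        self.memo = {}

    def fib(self, n):
        if n < 0:
            raise ValueError('Fibonacci index cannot be negative: %d' % n)
        elif n in [0, 1]:
            # Base cases
            return n
        # If we've already computed this answer, grab it from the memo
        if n in self.memo:
            return self.memo[n]
        result = self.fib(n - 1) + self.fib(n - 2)
        # Cache the answer before returning it
        self.memo[n] = result
        return result
```

Now each subproblem is computed only once, so even something like Fibber().fib(50) returns quickly.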
What's our time cost now?
Our recurrence tree will look like this, with the right child of each node answered straight from the memo:

fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1)
│   │   │   └── fib(0)
│   │   └── fib(1)
│   └── fib(2)   (memoized)
└── fib(3)       (memoized)
The computer will build up a call stack with fib(5), fib(4), fib(3), fib(2), fib(1). Then we'll start returning, and on the way back up our tree we'll be able to compute each node's second call to fib in constant time by just looking in the memo. O(n) time in total.
What about space? memo takes up O(n) space. Plus we're still building up a call stack that'll occupy O(n) space. Can we avoid one or both of these space expenses?
Look again at that tree. Notice that to calculate fib(5) we worked "down" to fib(4), fib(3), fib(2), etc.
What if instead we started with fib(0) and fib(1) and worked "up" to n?
We use a bottom-up approach, starting with the 0th Fibonacci number and iteratively computing subsequent numbers until we get to n.
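A bottom-up sketch (the variable names prev_prev and prev are illustrative):

```python
def fib(n):
    if n < 0:
        raise ValueError('Fibonacci index cannot be negative: %d' % n)
    elif n in [0, 1]:
        return n
    # Keep only the last two Fibonacci numbers as we walk up to n
    prev_prev, prev = 0, 1  # fib(0) and fib(1)
    for _ in range(n - 1):
        prev_prev, prev = prev, prev_prev + prev
    return prev
```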
O(n) time and O(1) space.
This one's a good illustration of the tradeoff we sometimes have between code cleanliness and efficiency.
We could use a cute, recursive function to solve the problem. But that would cost O(2^n) time as opposed to O(n) time in our final bottom-up solution. Massive difference!
In general, whenever you have a recursive solution to a problem, think about what's actually happening on the call stack. An iterative solution might be more efficient.