I figured out how to get rich: online poker.
I suspect the online poker game I'm playing shuffles cards by doing a single "riffle."
To prove this, let's write a function to tell us if a full deck of cards shuffledDeck is a single riffle of two other halves half1 and half2.
We'll represent a stack of cards as an array of integers in the range 1..52 (since there are 52 distinct cards in a deck).
Why do I care? A single riffle is not a completely random shuffle. If I'm right, I can make more informed bets and get rich and finally prove to my ex that I am not a "loser with an unhealthy cake obsession" (even though it's too late now because she let me go and she's never getting me back).
Watch out for index out of bounds errors! Will your function ever try to grab the 0th item from an empty array, or the nth item from an array with n elements (where the last index would be n-1)?
We can do this in O(n) time and O(1) additional space.
Did you come up with a recursive solution? Keep in mind that you may be incurring a hidden space cost (probably O(n)) in the call stack! You can avoid this using an iterative approach.
How can we re-phrase this problem in terms of smaller subproblems?
Breaking the problem into smaller subproblems will clearly involve reducing the size of at least one of our stacks of cards. So to start, let's try taking the first card out of shuffledDeck.
What should be true of this card if shuffledDeck is a riffle of half1 and half2?
If shuffledDeck is a riffle of half1 and half2, then the first card from shuffledDeck should be either the same as the first card from half1 or the same as the first card from half2.
If we're looking at our decks face-up, from above (so we only see the top card), this could be a single riffle: for example, shuffledDeck showing 7 on top while half1 shows 7 and half2 shows 24.
While this could not: for example, shuffledDeck showing 7 on top while half1 shows 24 and half2 shows 51.
Now that we know the first card checks out, how do we get to our subproblem?
Let's "throw out" the top card from shuffledDeck as well as the card it matched with from the top of half1 or half2. Those cards are now "accounted for."
Now we're left with a smaller version of the original problem, which we can solve using the same approach! So we keep doing this over and over until we exhaust shuffledDeck. If we get to the end and each card "checks out," we return true.
How do we implement this in code?
Now that we have a problem that's the same as the original problem except smaller, our first thought might be to use recursion. All we need is a base case. What's our base case?
We stop when we run out of cards in our shuffledDeck. So that's our base case: when we've checked all cards in shuffledDeck, we return true because we know all of the cards have been "accounted for."
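Putting that together, a recursive version might look something like this (a sketch in TypeScript, assuming a function named isSingleRiffle; the exact signature is an assumption):

```typescript
function isSingleRiffle(half1: number[], half2: number[], shuffledDeck: number[]): boolean {
    // Base case: we've accounted for every card in shuffledDeck.
    if (shuffledDeck.length === 0) {
        return true;
    }

    // If the top of shuffledDeck matches the top of half1,
    // "throw out" both cards and recurse on what's left.
    if (half1.length > 0 && half1[0] === shuffledDeck[0]) {
        return isSingleRiffle(half1.slice(1), half2, shuffledDeck.slice(1));
    }

    // Same idea, but matching against the top of half2.
    if (half2.length > 0 && half2[0] === shuffledDeck[0]) {
        return isSingleRiffle(half1, half2.slice(1), shuffledDeck.slice(1));
    }

    // The top card of shuffledDeck matched neither half,
    // so this can't be a single riffle.
    return false;
}
```

For example, `isSingleRiffle([1, 5], [2, 3, 6], [1, 2, 3, 5, 6])` is a valid single riffle, while `isSingleRiffle([1, 5], [2, 3, 6], [1, 2, 6, 3, 5])` is not (the 6 comes before the 3 from half2 has been used).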
This solution will work. But we can do better.
Before we talk about optimization, note that our inputs are of small and constant size. This function will take hardly any time or space, even if it could be more efficient. In industry, especially at small startups that want to move quickly, optimizing this might be considered a "premature optimization." But if we're going to do something inefficient, we should at least know about it. Great engineers have both the skill to see how to optimize their code and the wisdom to know when those optimizations aren't worth it. At this point in the interview I recommend saying "I think we can optimize this a bit further, although given the constraints on the input this probably won't be very resource-intensive anyway...should we talk about optimizations?"
Okay, back to our show. This function will take O(n²) time and O(n²) additional space.
Whaaaaat? Yeah. The cost hides in our recursive calls—specifically, in expressions like shuffledDeck.slice(1).
That's a slice, and it costs O(m) time and space, where m is the size of the resulting array. That's going to determine our overall time and space cost here—the rest of the work we're doing is constant space and time.
As we recurse, we'll build up n frames on the call stack. Each of those frames will hold a different slice of our original shuffledDeck (and half1 and half2, though we only slice one of them in each recursive call).
So, what's the total time and space cost of all our slices?
If shuffledDeck has n items to start, taking our first slice takes n-1 time and space (plus half that, since we're also slicing one of our halves—but that's just a constant multiplier so we can ignore it). In our second recursive call, slicing takes n-2 time and space. Etcetera.
So our total time and space cost for slicing comes to:
(n - 1) + (n - 2) + ... + 2 + 1

This is a common series that sums to n * (n - 1) / 2, which is O(n²).
We can do better than this O(n²) time and space cost. One way we can do that is to avoid slicing and instead keep track of indices in the array:
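A sketch of that index-tracking approach (in TypeScript; the extra index parameters with default values are one way to do it, not the only way):

```typescript
function isSingleRiffle(
    half1: number[],
    half2: number[],
    shuffledDeck: number[],
    shuffledDeckIndex: number = 0,
    half1Index: number = 0,
    half2Index: number = 0,
): boolean {
    // Base case: we've hit the end of shuffledDeck.
    if (shuffledDeckIndex === shuffledDeck.length) {
        return true;
    }

    // If we still have cards in half1 and the "top" card in half1
    // matches the top card in shuffledDeck, advance past both.
    if (half1Index < half1.length &&
            half1[half1Index] === shuffledDeck[shuffledDeckIndex]) {
        return isSingleRiffle(half1, half2, shuffledDeck,
            shuffledDeckIndex + 1, half1Index + 1, half2Index);
    }

    // Same as above, but for half2.
    if (half2Index < half2.length &&
            half2[half2Index] === shuffledDeck[shuffledDeckIndex]) {
        return isSingleRiffle(half1, half2, shuffledDeck,
            shuffledDeckIndex + 1, half1Index, half2Index + 1);
    }

    // The top card of shuffledDeck matched neither half.
    return false;
}
```

No slices means no copies: each recursive call just moves indices forward over the same three arrays.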
So now we're down to O(n) time, but we're still taking O(n) space in the call stack because of our recursion. We can rewrite this as an iterative function to get that memory cost down to O(1).
What's happening in each iteration of our recursive function? Sometimes we're taking a card off of half1 and sometimes we're taking a card off of half2, but we're always taking a card off of shuffledDeck.
So what if instead of taking cards off of shuffledDeck 1-by-1, we iterated over them?
We walk through shuffledDeck, seeing if each card so far could have come from a riffle of the other two halves. To check this, we keep an index into half1 and an index into half2, and for each card in shuffledDeck:

1. If the card matches the next unused card in half1, advance half1's index.
2. Otherwise, if the card matches the next unused card in half2, advance half2's index.
3. Otherwise, the card matched neither half, so we return false.

If every card in shuffledDeck checks out this way, we return true.
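Those steps can be sketched iteratively like so (in TypeScript, again assuming a function named isSingleRiffle):

```typescript
function isSingleRiffle(half1: number[], half2: number[], shuffledDeck: number[]): boolean {
    let half1Index = 0;
    let half2Index = 0;

    for (const card of shuffledDeck) {
        // If we still have cards in half1 and the next one matches,
        // this card came from half1.
        if (half1Index < half1.length && card === half1[half1Index]) {
            half1Index++;

        // Same check for half2.
        } else if (half2Index < half2.length && card === half2[half2Index]) {
            half2Index++;

        // The card matched neither half: not a single riffle.
        } else {
            return false;
        }
    }

    // Every card in shuffledDeck checked out.
    return true;
}
```

No recursion, so no call-stack buildup—just two indices, regardless of deck size.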
This comes out to O(n) time and O(1) additional space.
Becky if you're reading this I didn't really mean what I said in the problem statement. It's just that things have been hard lately and anyway if you'll just give me another chance I promise it won't be like last time. I'm a wreck without you. Like a collapsed soufflé. Please Becky.
If you read the whole breakdown section, you might have noticed that we started with a recursive function that looked simple but turned out to cost O(n²) time and space because each recursive step created its own new "slice" of the input array. If you missed that part, go back and take a look.
Be careful of the hidden time and space costs of array slicing! Consider tracking array indices "by hand" instead (as we do in our final solution).