
I'm making a search engine called MillionGazillion™.

I wrote a crawler that visits web pages, stores a few keywords in a database, and follows links to other web pages. I noticed that my crawler was wasting a lot of time visiting the same pages over and over, so I made an array, $visited, where I'm storing URLs I've already visited. Now the crawler only visits a URL if it hasn't already been visited. Thing is, the crawler is running on my old desktop computer in my parents' basement (where I totally don't live anymore), and it keeps running out of memory because $visited is getting so huge.
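To make the setup concrete, here's a minimal sketch of that bookkeeping in Python. The names `crawl` and `fetch_links` are illustrative, not from the original; the point is that a plain set (or array) gives fast membership checks but stores every URL in full, which is exactly the memory problem described above.

```python
# Minimal sketch of the crawler's bookkeeping (illustrative names,
# not the author's actual code). A set gives fast membership checks,
# but still stores every URL in full -- hence the memory blowup.
visited = set()

def crawl(url, fetch_links):
    """Visit url at most once; fetch_links(url) returns its outgoing links."""
    if url in visited:
        return
    visited.add(url)
    for link in fetch_links(url):
        crawl(link, fetch_links)
```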

How can I trim down the amount of space taken up by $visited? Your strategy shouldn't take a hit on runtime. Replacing common substrings like ".com" and "www" with characters that aren't allowed in URLs definitely wins us something, but we can do even better. How can we even further exploit overlaps or shared prefixes between URLs?

The trick: store the URLs in a trie, so that URLs sharing a prefix share nodes instead of repeating the same characters.

How much space does this save? This is about to get MATHEMATICAL.

How many characters were we storing in our flat array approach? Suppose $visited includes all possible URLs of length 5 or fewer characters. Let's ignore non-alphabetical characters to simplify, sticking to the standard 26 English letters in lowercase. There are 26^5 different possible 5-character URLs (26 options for the first character, times 26 options for the second character, and so on), and of course 26^4 different possible 4-character URLs, etc.
If we store each 5-character URL as a normal string in memory, we are storing 5 characters per string, for a total of 5 * 26^5 characters for all possible 5-character strings (and 4 * 26^4 total characters for all 4-character strings, etc). So for all 1, 2, 3, 4, or 5 character URLs, our total number of characters stored is:

5 * 26^5 + 4 * 26^4 + 3 * 26^3 + 2 * 26^2 + 1 * 26^1
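As a sanity check, we can sum those terms directly (a quick arithmetic check, not part of the original solution):

```python
# Total characters stored by the flat-array approach for all
# lowercase-alphabetical URLs of length 1 through 5.
flat_total = sum(k * 26**k for k in range(1, 6))
print(flat_total)  # 61288890 -- about 61 million characters
```

Nearly all of that (5 * 26^5 = 59,406,880 characters) comes from the 5-character URLs alone, which is why the longest length dominates.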

So for all possible URLs of length n or fewer, our total storage space is:

n * 26^n + (n-1) * 26^(n-1) + ... + 1 * 26^1

This is O(n * 26^n).

How many characters are stored in our trie? The first layer has 26 nodes (and thus 26 characters), one for each possible starting character. On the second layer, each of those 26 nodes has 26 children, for a total of 26^2 nodes. Following the pattern, the fifth layer has 26^5 nodes. To store all 1, 2, 3, 4, or 5 character URLs, our trie will have 5 layers. So the total number of nodes is:

26^5 + 26^4 + 26^3 + 26^2 + 26^1
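The same kind of quick check works for the trie's node count (again, just arithmetic, not from the original):

```python
# Total nodes (one character each) in a trie holding all
# lowercase-alphabetical URLs of length 1 through 5.
trie_total = sum(26**k for k in range(1, 6))
print(trie_total)  # 12356630 -- roughly a fifth of the flat array's 61288890
```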

So for all URLs of length n or fewer, we have:

26^n + 26^(n-1) + ... + 26^1

This is O(26^n). We've shaved off a factor of n.
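For concreteness, here's one minimal way a trie-based visited set might look in Python. This is a sketch under the problem's simplifying assumptions, not the author's implementation; the method names `add_url` and `has_url` are made up for illustration.

```python
class Trie:
    """Stores strings as paths through nested dicts, so strings that
    share a prefix share nodes instead of repeating characters."""

    def __init__(self):
        self.root = {}

    def add_url(self, url):
        node = self.root
        for char in url:
            node = node.setdefault(char, {})
        node["*"] = True  # marks the end of a complete URL

    def has_url(self, url):
        node = self.root
        for char in url:
            if char not in node:
                return False
            node = node[char]
        return "*" in node
```

URLs like "donut.net" and "dogood.org" would share the nodes for their common prefix "do", which is where the factor-of-n savings in the analysis above comes from.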

Bonus trivia: although the HTTP spec allows for unlimited URL length, in practice many web browsers won't support URLs over 2,000 characters.