
You have a linked list and want to find the kth to last node.

Write a method kth_to_last_node that takes an integer k and the head_node of a singly-linked list, and returns the kth to last node in the list.

For example:

```ruby
class LinkedListNode
  attr_accessor :value, :next

  def initialize(value)
    @value = value
    @next  = nil
  end
end

a = LinkedListNode.new("Angel Food")
b = LinkedListNode.new("Bundt")
c = LinkedListNode.new("Cheese")
d = LinkedListNode.new("Devil's Food")
e = LinkedListNode.new("Eccles")

a.next = b
b.next = c
c.next = d
d.next = e

kth_to_last_node(2, a)
# returns the node with value "Devil's Food" (the 2nd to last node)
```

We can do this in O(n) time.

We can do this in O(1) space. If you're recursing, you're probably taking O(n) space on the call stack!


Both approaches use O(n) time and O(1) space.

But the second approach takes fewer steps, since it gets the answer "in one pass," right? Wrong.

In the first approach, we walk one pointer from head to tail (to get the list's length), then walk another pointer from the head node to the target node (the kth to last node).
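The first approach might be sketched like this (an assumed implementation for illustration; the exact official solution isn't shown here):

```ruby
# Minimal node class, matching the one defined in the example above.
class LinkedListNode
  attr_accessor :value, :next

  def initialize(value)
    @value = value
    @next  = nil
  end
end

def kth_to_last_node(k, head_node)
  raise ArgumentError, 'k must be at least 1' if k < 1

  # First pass: walk from head to tail to count the nodes.
  length = 0
  current_node = head_node
  while current_node
    length += 1
    current_node = current_node.next
  end

  raise ArgumentError, 'k is larger than the list length' if k > length

  # Second pass: the kth to last node is (length - k) steps from the head.
  current_node = head_node
  (length - k).times { current_node = current_node.next }
  current_node
end
```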

In the second approach, right_node also walks all the way from head to tail, and left_node also walks from the head to the target node.
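The second approach might look like this (again, an assumed implementation): right_node starts k − 1 nodes ahead of left_node, so when right_node reaches the tail, left_node is sitting on the kth to last node.

```ruby
# Minimal node class, matching the one defined in the example above.
class LinkedListNode
  attr_accessor :value, :next

  def initialize(value)
    @value = value
    @next  = nil
  end
end

def kth_to_last_node(k, head_node)
  raise ArgumentError, 'k must be at least 1' if k < 1

  left_node  = head_node
  right_node = head_node

  # Move right_node to the kth node from the head.
  (k - 1).times do
    raise ArgumentError, 'k is larger than the list length' unless right_node.next
    right_node = right_node.next
  end

  # Advance both pointers until right_node reaches the last node;
  # left_node is then exactly k nodes from the end.
  while right_node.next
    left_node  = left_node.next
    right_node = right_node.next
  end
  left_node
end
```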

So in both cases, we have two pointers taking the same steps through our list. The only difference is the order in which the steps are taken. The number of steps is the same either way.

However, the second approach might still be slightly faster, due to some caching and other optimizations that modern processors and memory have.

Let's focus on caching. Usually when we grab some data from memory (for example, info about a linked list node), we also store that data in a small cache right on the processor. If we need to use that same data again soon after, we can quickly grab it from the cache. But if we don't use that data for a while, we're likely to replace it with other stuff we've used more recently (this is called a "least recently used" replacement policy).
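To make the "least recently used" policy concrete, here's a toy software illustration (not how a real CPU cache is built): a small fixed-capacity cache that evicts whichever entry was touched longest ago, using the fact that Ruby's Hash remembers insertion order.

```ruby
# Toy LRU cache: holds at most `capacity` entries, evicting the
# least recently used one when full.
class LruCache
  def initialize(capacity)
    @capacity = capacity
    @store = {}  # Ruby hashes preserve insertion order
  end

  def get(key)
    return nil unless @store.key?(key)
    # Delete and re-insert to mark this entry as most recently used.
    value = @store.delete(key)
    @store[key] = value
  end

  def put(key, value)
    @store.delete(key)
    @store[key] = value
    # Evict the least recently used entry (the oldest key) when over capacity.
    @store.delete(@store.keys.first) if @store.size > @capacity
  end

  def keys
    @store.keys
  end
end
```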

Both of our algorithms access a lot of nodes in our list twice, so they could exploit this caching. But notice that in our second algorithm there's a much shorter time between the first and second times that we access a given node (this is sometimes called "temporal locality of reference"). Thus it seems more likely that our second algorithm will save time by using the processor's cache! But this assumes our processor's cache uses something like a "least recently used" replacement policy—it might use something else. Ultimately the best way to really know which algorithm is faster is to implement both and time them on a few different inputs!

Can we do better? What if we expect n to be huge and k to be pretty small? In this case, our target node will be close to the end of the list; it seems a waste that we have to walk all the way from the beginning twice.

Can we trim down the number of steps in the "second trip"? One pointer will certainly have to travel all the way from head to tail in the list to get the total length...but can we store some "checkpoints" as we go so that the second pointer doesn't have to start all the way at the beginning? Can we store these "checkpoints" in constant space? Note: this approach only saves time if we know that our target node is towards the end of the list (in other words, n is much larger than k).
