Leetcode | Word Ladder II
In preparation for full-time interviews in the coming months I’ve been doing a lot of leetcode questions. I have many feelings on whether or not this is the best way to determine someone’s coding abilities, but that’s for a different post. I wanted to share my struggles with one particular question: Word Ladder II. I’m not going to paste the prompt here, so if you’re reading this, maybe read the prompt first.
Why did I find this one particularly challenging? I’m not sure, exactly. I know all of the strategies involved, I’ve done similar questions before, I know all of the data structures… I think the sheer size of the problem just got to me. Anyway, I’m going to go over two failed attempts of mine (both technically worked, they just timed out on some test cases) followed by a working solution. I’ll conclude with some final thoughts.
Solution 1
My first mistake was that I just started coding. This usually ends poorly for me with these types of questions. I tried to come up with a strategy, but since I was timing myself, I figured the solution would unfold as I went. This is what my first code looked like:
```python
class Solution(object):
    def __init__(self):
        self.min_so_far = float('inf')

    def findLadders(self, beginWord, endWord, wordList, trace=[], top=True, depth=0):
        out = []
        possibles = []
        trace = trace + [beginWord]
        if depth > self.min_so_far:
            return out
        for w in wordList:
            if self.hamming(beginWord, w) == 1:
                possibles.append(w)
        for p in possibles:
            if endWord in possibles:
                this_trace = trace + [endWord]
                if len(this_trace) < self.min_so_far:
                    self.min_so_far = len(this_trace)
                return [this_trace]
            else:
                for t in self.findLadders(p, endWord,
                                          [w for w in wordList if w != p],
                                          trace, False, depth + 1):
                    out.append(t)
        if top:
            if len(out) != 0:
                final = self.get_min_lists(out)
                print(final)
                return final
            else:
                return out
        else:
            return out

    def hamming(self, w1, w2):
        # Count positions where two equal-length words differ.
        assert len(w1) == len(w2)
        return sum([w1[i] != w2[i] for i in range(len(w1))])

    def get_min_lists(self, lsts):
        # Keep only the shortest ladders found.
        min_ = min(list(map(lambda l: len(l), lsts)))
        outs = []
        for lst in lsts:
            if len(lst) == min_:
                outs.append(lst)
        return outs
```
The basic strategy here was to do a DFS which generated possibilities on the fly (potential next words) and stopped once we found the `endWord`. Once I did, I saved `min_so_far` to record how deep in the graph we were. This way, we never go deeper than that and can cut off searches early. Unfortunately, this had many flaws. Some include:
- DFS instead of BFS. DFS means that we could end up searching deeper than we need to. More on this later.
- Not using good data structures. Everything is saved as an array here. For searching repeatedly or checking for existence, dictionaries and sets would work much better.
- Overly recursive. Sure, the recursive solution may seem sexy, but it’s rarely the fastest, and in this case it was the final nail in the coffin of this solution.
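To make the data-structure flaw concrete, here is a quick sketch (my own illustration, not part of any of the solutions) of how membership checks differ between a list and a set, using a hypothetical dictionary of 10,000 fake words:

```python
import timeit

# Hypothetical word list: 10,000 fake 5-character "words".
words_list = ['w%04d' % i for i in range(10000)]
words_set = set(words_list)

# Both containers answer membership queries identically...
assert 'w9999' in words_list
assert 'w9999' in words_set

# ...but a list scans every element (O(n)) while a set hashes (O(1) average).
list_time = timeit.timeit(lambda: 'w9999' in words_list, number=1000)
set_time = timeit.timeit(lambda: 'w9999' in words_set, number=1000)
print('list: %.6fs, set: %.6fs' % (list_time, set_time))
```

On a worst-case lookup like this one, the set wins by orders of magnitude, which compounds badly when the lookup sits inside a recursive search.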
To be clear, this solution worked; it was just very slow.
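As a sanity check on the core primitive, the `hamming` helper from the code above can be exercised on its own — words one letter apart are exactly the valid ladder steps:

```python
def hamming(w1, w2):
    # Number of positions at which two equal-length words differ.
    assert len(w1) == len(w2)
    return sum(w1[i] != w2[i] for i in range(len(w1)))

assert hamming('hit', 'hot') == 1  # valid ladder step
assert hamming('hot', 'dog') == 2  # not a valid step
assert hamming('cog', 'cog') == 0
```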
Solution 2
This solution DID perform much better, but unfortunately it was just layering enhancements onto my already poor approach. Here is what it looked like:
```python
class Node(object):
    def __init__(self, val, path):
        self.val = val
        self.children = []
        self.depth = None
        self.path = path
        self.target = False


class Solution(object):
    def __init__(self):
        self.min_so_far = float('inf')
        self.ret = []

    def build_graph(self, begin_word, end_word, word_list):
        root = Node(begin_word, [])
        root.children = self.set_children(root, word_list)
        root.depth = 0
        queue = [root.children]
        self.graph_helper(root, end_word, word_list, queue, 0)
        return root

    def graph_helper(self, node, end_word, word_list, queue, depth):
        if depth <= self.min_so_far:
            new_word_list = word_list
            for child in node.children:
                if child.val == end_word:
                    node.children = []
                    node.target = True
                    self.ret.append((node.path + [node.val, child.val]))
                    if depth < self.min_so_far:
                        self.min_so_far = depth
                    break
                else:
                    new_word_list = [w for w in word_list if w != child.val]
                    child.children = self.set_children(child, new_word_list)
            if not node.target:
                for child in node.children:
                    self.graph_helper(child, end_word, new_word_list,
                                      queue, depth + 1)

    def set_children(self, node, word_list):
        ret = []
        for w in word_list:
            if self.hamming(node.val, w) == 1:
                ret.append(Node(w, (node.path + [node.val])))
        return ret

    def hamming(self, w1, w2):
        # Same helper as in Solution 1.
        assert len(w1) == len(w2)
        return sum([w1[i] != w2[i] for i in range(len(w1))])

    def findLadders(self, beginWord, endWord, wordList):
        self.build_graph(beginWord, endWord, wordList)
        return self.get_min_lists(self.ret)

    def get_min_lists(self, lsts):
        min_ = min(list(map(lambda l: len(l), lsts)))
        outs = []
        for lst in lsts:
            if len(lst) == min_:
                outs.append(lst)
        return outs
```
OhhHHhh! Look how shiny and modular everything is! He even added a `Node` class. Unfortunately, at this point I didn’t realize that the problem was DFS, and so even though this performed way better, it still wasn’t good enough to pass the tests on leetcode. TBH, in an interview this would probably fly because the code is sound. But, again, missing the benefits of BFS here is a silly mistake that wouldn’t have been good in an interview.
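Since the eventual fix was switching to BFS, here is a minimal generic sketch (my own, not the leetcode solution) of the property I was missing: BFS visits nodes strictly in order of distance from the start, so the first time it reaches the target, its recorded depth is guaranteed to be the shortest-path length:

```python
from collections import deque

def bfs_distance(graph, start, target):
    # Visit nodes level by level; the first time we pop the target,
    # its recorded depth is the shortest-path length.
    q = deque([(start, 0)])
    seen = {start}
    while q:
        node, depth = q.popleft()
        if node == target:
            return depth
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, depth + 1))
    return None

# A toy graph with both a short and a long route from 'a' to 'd'.
toy = {'a': ['b', 'x'], 'b': ['d'], 'x': ['y'], 'y': ['d']}
assert bfs_distance(toy, 'a', 'd') == 2  # found via 'b' before the longer route finishes
```

A DFS on the same graph could wander down the `x → y → d` branch first and only later discover the shorter route, which is exactly the wasted work my first two solutions were doing.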
Solution 3
At this point I was depressed, sad, and stuck. So, I looked at the discussion section and found a great python solution that I adapted to get things working finally. Here is the code:
```python
class Solution(object):
    def __init__(self):
        self.min_so_far = float('inf')
        self.ret = []

    def findLadders(self, beginWord, endWord, wordList):
        w_set = set(wordList)
        if endWord not in w_set:
            return []
        lts = "abcdefghijklmnopqrstuvwxyz"
        dist = float("inf")
        q = [beginWord]
        seen = {beginWord: 0}
        graph = {beginWord: set()}
        # BFS: build a graph containing only shortest-path edges.
        while q:
            cur = q.pop(0)
            d = seen[cur]
            if d > dist:
                break  # past the layer where endWord was found
            for i in range(len(cur)):
                for lt in lts:
                    if lt != cur[i]:
                        new = cur[:i] + lt + cur[i + 1:]
                        if new in w_set and (new not in seen or d + 1 == seen[new]):
                            if cur in graph:
                                graph[cur].add(new)
                            else:
                                graph[cur] = {new}
                            if new == endWord:
                                dist = d + 1
                            if new not in seen:
                                seen[new] = d + 1
                                q.append(new)
        # DFS over the pruned graph to enumerate the actual ladders.
        res = []

        def dfs(path, cur):
            if cur == endWord:
                res.append(path)
            else:
                if cur in graph:
                    for new in graph[cur]:
                        dfs(path + [new], new)

        dfs([beginWord], beginWord)
        return res
```
The author had a couple of good strategies which I will consider for future problems. Namely:
- Instead of calculating hamming distance against every word in the `wordList`, just iterate through every letter of the current word. This caps the number of combinations you try at each level and works well when your words are small and your wordLists are huge.
- BFS first to generate the graph, and only then DFS to search the graph. BFS for graph generation means that we will ALWAYS find the shortest distance and will never search deeper than that. There might be other solutions ending at that layer, but we will never search below it.
- `set()`’s and `{}`’s save the day here. Lookup and existence checks in these data structures are much faster than in arrays.
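The first strategy is easy to demonstrate in isolation. The sketch below (my own, using the example words from the prompt) computes the same neighbor set both ways; the letter-swap version does at most 26·L candidate builds per word instead of one comparison per dictionary entry, a big win when the dictionary dwarfs 26·L:

```python
import string

def neighbors_by_hamming(word, word_list):
    # O(len(word_list)) comparisons: scan the whole dictionary.
    return {w for w in word_list
            if len(w) == len(word)
            and sum(a != b for a, b in zip(word, w)) == 1}

def neighbors_by_letter_swap(word, word_set):
    # O(26 * len(word)) candidate builds, each an O(1) set lookup.
    out = set()
    for i in range(len(word)):
        for lt in string.ascii_lowercase:
            if lt != word[i]:
                cand = word[:i] + lt + word[i + 1:]
                if cand in word_set:
                    out.add(cand)
    return out

words = ["hot", "dot", "dog", "lot", "log", "cog"]
assert neighbors_by_hamming("hot", words) == neighbors_by_letter_swap("hot", set(words))
# both yield {'dot', 'lot'}
```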
Now running this code on even the largest dataset in the test scenarios yields:
```python
import time

if __name__ == '__main__':
    cases = [
        ["hit", "cog",
         ["hot", "dot", "dog", "lot", "log", "cog"],
         [["hit", "hot", "dot", "dog", "cog"],
          ["hit", "hot", "lot", "log", "cog"]]],
        ["qa", "sq",
         ["si", "go", "se", "cm", "so", "ph", "mt", "db", "mb", "sb", "kr", "ln",
          "tm", "le", "av", "sm", "ar", "ci", "ca", "br", "ti", "ba", "to", "ra",
          "fa", "yo", "ow", "sn", "ya", "cr", "po", "fe", "ho", "ma", "re", "or",
          "rn", "au", "ur", "rh", "sr", "tc", "lt", "lo", "as", "fr", "nb", "yb",
          "if", "pb", "ge", "th", "pm", "rb", "sh", "co", "ga", "li", "ha", "hz",
          "no", "bi", "di", "hi", "qa", "pi", "os", "uh", "wm", "an", "me", "mo",
          "na", "la", "st", "er", "sc", "ne", "mn", "mi", "am", "ex", "pt", "io",
          "be", "fm", "ta", "tb", "ni", "mr", "pa", "he", "lr", "sq", "ye"],
         [['qa', 'ca', 'cm', 'sm', 'sq'], ['qa', 'ca', 'ci', 'si', 'sq'],
          ['qa', 'ca', 'cr', 'sr', 'sq'], ['qa', 'ca', 'co', 'so', 'sq'],
          ['qa', 'ba', 'br', 'sr', 'sq'], ['qa', 'ba', 'bi', 'si', 'sq'],
          ['qa', 'ba', 'be', 'se', 'sq'], ['qa', 'ra', 're', 'se', 'sq'],
          ['qa', 'ra', 'rn', 'sn', 'sq'], ['qa', 'ra', 'rh', 'sh', 'sq'],
          ['qa', 'ra', 'rb', 'sb', 'sq'], ['qa', 'fa', 'fe', 'se', 'sq'],
          ['qa', 'fa', 'fr', 'sr', 'sq'], ['qa', 'fa', 'fm', 'sm', 'sq'],
          ['qa', 'ya', 'yo', 'so', 'sq'], ['qa', 'ya', 'yb', 'sb', 'sq'],
          ['qa', 'ya', 'ye', 'se', 'sq'], ['qa', 'ma', 'mt', 'st', 'sq'],
          ['qa', 'ma', 'mb', 'sb', 'sq'], ['qa', 'ma', 'me', 'se', 'sq'],
          ['qa', 'ma', 'mo', 'so', 'sq'], ['qa', 'ma', 'mn', 'sn', 'sq'],
          ['qa', 'ma', 'mi', 'si', 'sq'], ['qa', 'ma', 'mr', 'sr', 'sq'],
          ['qa', 'ga', 'go', 'so', 'sq'], ['qa', 'ga', 'ge', 'se', 'sq'],
          ['qa', 'ha', 'ho', 'so', 'sq'], ['qa', 'ha', 'hi', 'si', 'sq'],
          ['qa', 'ha', 'he', 'se', 'sq'], ['qa', 'na', 'nb', 'sb', 'sq'],
          ['qa', 'na', 'no', 'so', 'sq'], ['qa', 'na', 'ne', 'se', 'sq'],
          ['qa', 'na', 'ni', 'si', 'sq'], ['qa', 'la', 'ln', 'sn', 'sq'],
          ['qa', 'la', 'le', 'se', 'sq'], ['qa', 'la', 'lt', 'st', 'sq'],
          ['qa', 'la', 'lo', 'so', 'sq'], ['qa', 'la', 'li', 'si', 'sq'],
          ['qa', 'la', 'lr', 'sr', 'sq'], ['qa', 'ta', 'tm', 'sm', 'sq'],
          ['qa', 'ta', 'ti', 'si', 'sq'], ['qa', 'ta', 'to', 'so', 'sq'],
          ['qa', 'ta', 'tc', 'sc', 'sq'], ['qa', 'ta', 'th', 'sh', 'sq'],
          ['qa', 'ta', 'tb', 'sb', 'sq'], ['qa', 'pa', 'ph', 'sh', 'sq'],
          ['qa', 'pa', 'po', 'so', 'sq'], ['qa', 'pa', 'pb', 'sb', 'sq'],
          ['qa', 'pa', 'pm', 'sm', 'sq'], ['qa', 'pa', 'pi', 'si', 'sq'],
          ['qa', 'pa', 'pt', 'st', 'sq']]]]
    for case in cases:
        s = Solution()
        start = time.time()
        assert sorted(s.findLadders(case[0], case[1], case[2])) == sorted(
            case[3]), 'Case Failed...'
        end = time.time()
        print('This case took %s seconds to process.' % (end - start))
    print('PASSED!')
```

```
| => python word_ladder_ii.py
This case took 0.00033211708068847656 seconds to process.
This case took 0.002701997756958008 seconds to process.
PASSED!
```
I’d call that a victory!
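One small nitpick on the accepted solution (my own observation, not from the discussion post): `q.pop(0)` shifts the entire list on every dequeue, which is O(n). `collections.deque` gives the same FIFO behavior with O(1) pops at both ends:

```python
from collections import deque

q = deque(['hit'])
q.append('hot')
q.append('dot')

# popleft() is the O(1) equivalent of list.pop(0); FIFO order is preserved.
assert q.popleft() == 'hit'
assert q.popleft() == 'hot'
assert list(q) == ['dot']
```

For the small leetcode inputs this doesn't matter, but it's a free win on larger queues.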
Complexity Analysis
#TODO