Thursday, October 6, 2011

A Wormhole Through Sudoku Space

I did what I suggested in my last post, and finally read about Peter Norvig's constraint propagation method for solving Sudoku. On one hand it's quite humbling to discover a thinking process so much more elaborate than what I could achieve, but on the other, I'm glad that I didn't read it first, because I wouldn't have learned as much from it.

It turns out that my insights about search were not so far off the mark... but the elimination procedure is the real deal (in some cases, it can entirely solve an easy problem on its own). In fact the real power is unleashed when the two are combined. The way I understand it, elimination is like a mini-search, where the consequences of a move are carried through to their logical conclusion, revealing, many steps ahead, whether it's a good one or not. It is more than a heuristic; it is a solution space simplifier, and a very efficient one at that.
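
To make that cascading idea concrete, here is a minimal standalone sketch of it (my own toy illustration, not Peter Norvig's code and not the class I patch below), on a single 4-cell unit: assigning a value removes it from the peers' candidate sets, and any peer left with a single candidate is assigned in turn, so one move is followed to its logical conclusion.

    # A toy, self-contained illustration of the cascading idea on a single
    # 4-cell unit (not Peter Norvig's code, and not the Sudoku class below):
    # cands maps each cell to the set of values it could still take.
    def assign(cands, cell, value):
        cands[cell] = {value}
        for peer in cands:
            if peer != cell and value in cands[peer]:
                cands[peer].discard(value)
                if not cands[peer]:
                    return False              # contradiction: a peer has no value left
                if len(cands[peer]) == 1:     # forced move: follow it right away
                    if not assign(cands, peer, next(iter(cands[peer]))):
                        return False
        return True

    cands = {c: {1, 2, 3, 4} for c in "ABCD"}
    assign(cands, "A", 1)
    assign(cands, "B", 2)
    assign(cands, "C", 3)   # D is now forced to 4 without being set explicitly
    print(cands)            # {'A': {1}, 'B': {2}, 'C': {3}, 'D': {4}}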

My reaction when I understood how it worked was to ask myself if there's a way I could adapt it to my current Python implementation, without modifying it too much. It is not exactly trivial, because the two problem representations are quite different: Peter Norvig's explicitly models the choices for a given square, while mine only does so implicitly. This meant that I couldn't merely translate the elimination algorithm into the terms of my implementation: I'd have to find some correspondence, a way to express one in terms of the other. After some tinkering, what I got is a drop-in replacement for my Sudoku.set method:

    def set(self, i, j, v, propagate_constraints):
        self.grid[i][j] = v
        self.rows[i].add(v)
        self.cols[j].add(v)
        self.boxes[self.box(i, j)].add(v)
        if propagate_constraints:
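            # Scan every row, column and box (all indexed by a) for forced moves
            # and contradictions: the *_places maps (collections.defaultdict) record,
            # for each value, the spots where it can still go, while the *_available
            # sets track the values not yet placed in that unit.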
            for a in range(self.size):
                row_places = defaultdict(list) 
                row_available = set(self.values)
                col_places = defaultdict(list) 
                col_available = set(self.values)
                box_places = defaultdict(list) 
                box_available = set(self.values)
                for b in range(self.size):
                    options = []
                    row_available.discard(self.grid[a][b])
                    col_available.discard(self.grid[b][a])
                    bi, bj = self.box_coords[a][b]
                    box_available.discard(self.grid[bi][bj])
                    for vv in self.values:
                        if not self.grid[a][b] and self.isValid(a, b, vv):
                            options.append(vv)
                            row_places[vv].append(b)
                        if not self.grid[b][a] and self.isValid(b, a, vv):
                            col_places[vv].append(b)
                        if not self.grid[bi][bj] and self.isValid(bi, bj, vv):
                            box_places[vv].append((bi,bj))
                    if not self.grid[a][b]:
                        if len(options) == 0: 
                            return False
                        elif len(options) == 1:
                            # square with single choice found
                            return self.set(a, b, options[0], propagate_constraints)
                if row_available != set(row_places.keys()): return False
                if col_available != set(col_places.keys()): return False
                if box_available != set(box_places.keys()): return False
                for vv, cols in row_places.items():
                    if len(cols) == 1:                       
                        # row where this value has a single place to go
                        return self.set(a, cols[0], vv, propagate_constraints)
                for vv, rows in col_places.items():
                    if len(rows) == 1:                        
                        # column where this value has a single place to go
                        return self.set(rows[0], a, vv, propagate_constraints)
                for vv, boxes in box_places.items():
                    if len(boxes) == 1:
                        # box where this value has a single place to go
                        return self.set(boxes[0][0], boxes[0][1], vv, propagate_constraints)
            return True

Ok.. admittedly, it is very far from being as elegant as any of Peter Norvig's code.. it is possibly even a bit scary.. but that was the requirement: to patch my existing method (i.e. to implement elimination without changing the basic data structures). Basically, it complements the set method, making it seek two types of things:
  • a square with a single possible value
  • a row/column/box with a value that has only one place to go
Whenever it finds one of these, it recursively calls itself, to set it right away. While doing that, it checks for certain conditions that would make this whole chain of moves (triggered by the first call to set) invalid:
  • a square with no possible value
  • a row/column/box with a set of unused values that is not equal to the set of values having a place to go (this one was a bit tricky!)
So you'll notice that this is not elimination per se, but rather.. something else. There is really nothing to eliminate: this is what happens to the elimination rules when they are forced through a data structure that wasn't designed for them. Peter Norvig's implementation is much more elegant than this, of course, and an obvious disclaimer is that mine is also nowhere near as efficient, for many reasons. I wasn't interested in efficiency this time, but rather in finding a correspondence between the two methods.
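
The second check, the tricky one, is what the three set comparisons in the code above implement (row_available vs. row_places and so on). Here is a small standalone illustration of the idea, with made-up numbers and independent of the Sudoku class: for one row, compare the set of values not yet placed in it with the set of values that still have at least one legal square; if a missing value has nowhere left to go, the whole chain of moves is a dead end.

    # Standalone illustration of the "unused values vs. placeable values" check,
    # on one row of a 4x4 grid (the numbers are made up for the example):
    # row holds the placed values (0 = empty), and candidates[j] is the set of
    # values that the other units still allow in empty column j of that row.
    row        = [1, 0, 0, 4]
    candidates = {1: {2}, 2: {2}}   # suppose 3 is forbidden in both empty squares

    unused    = {1, 2, 3, 4} - set(v for v in row if v)   # {2, 3}
    placeable = set().union(*candidates.values())         # {2}

    if unused != placeable:
        print("dead end:", unused - placeable, "has no place left in this row")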

Finally, we need to adapt the solver (or search method). The major difference from the previous greedy solver (the non-eliminative one) is that a move is no longer a single change to the grid, one that can easily be undone when we backtrack. This time, an elimination call can change many squares, which is a problem for this method: we cannot do all the work with the same Sudoku instance (for backtracking purposes), and such an instance is not as efficiently copied as a dict of strings. There are probably many other ways, but to keep the program object-oriented, here is what I found:

    def solveGreedilyWithConstraintPropagation(self):
        nopts = {} # n options -> (opts, (i,j))
        for i in range(self.size):
            for j in range(self.size):
                if self.grid[i][j]: continue
                opts_ij = []
                for v in self.values:
                    if self.isValid(i, j, v):
                        opts_ij.append(v)
                n = len(opts_ij) 
                if n == 0: return None
                nopts[n] = (opts_ij, (i,j))
        if nopts:
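            # pick the empty square with the fewest options (minimum remaining values)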
            opts_ij, (i,j) = min(nopts.items())[1]
            for v in opts_ij:
                S = deepcopy(self)
                if S.set(i, j, v, propagate_constraints=True):
                    T = S.solveGreedilyWithConstraintPropagation()
                    if T: return T
            return None
        return self

Again it's not terribly elegant (nor as efficient) but it works, in the sense that it yields the same search tree as Peter Norvig's implementation. Just before doing an elimination (triggered by a call to set), we deepcopy the current Sudoku instance (self) with copy.deepcopy and perform the elimination on the copy instead. If it succeeds, we carry the recursion on with the copy. When a solution is found, the instance is returned, which is why this method has to be called like this:

    S = S.solveGreedilyWithConstraintPropagation()
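
A quick note on the deepcopy cost mentioned earlier: copying an instance that holds a nested grid plus sets of used values really is heavier than copying a flat dict of candidate strings like Peter Norvig's. Here is a rough standalone sketch with representative, made-up structures (not the actual classes):

    # Rough standalone sketch with representative, made-up structures (not the
    # actual classes): a flat dict of strings only needs a shallow copy, while
    # an instance holding a nested grid plus rows/cols/boxes sets needs a full
    # deepcopy for every guess.
    from copy import deepcopy
    import timeit

    flat   = {(i, j): "123456789" for i in range(9) for j in range(9)}
    nested = {"grid":  [[0] * 9 for _ in range(9)],
              "rows":  [set() for _ in range(9)],
              "cols":  [set() for _ in range(9)],
              "boxes": [set() for _ in range(9)]}

    print(timeit.timeit(lambda: dict(flat), number=10000))        # shallow copy
    print(timeit.timeit(lambda: deepcopy(nested), number=10000))  # deep copy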

To illustrate what's been gained with this updated solver, here are its first 6 recursion layers, when run against the "hard" problem of my previous post:


Again, the code for this exercise is available on my GitHub.
