# Sudoku Solver

The finished solver is live here.

## Background

For a while around 2020–2021, my Grandpa was pretty into solving Sudoku puzzles, and he would often bring them to my house, where my sister and I would sometimes help him with them or solve the same puzzles ourselves (to be able to help him, or just for fun, or to show off how much faster we could solve them, ¯\_(ツ)_/¯ ).

Anyway, all this thinking about ways of figuring out which numbers went where got me interested in algorithmic solving, and I decided to try my hand at building my own solver, without reference to anyone else’s solver, just figuring it out for myself.

I started by making some primitives for managing the board.

```rust
/// Coordinates on the Sudoku board.
#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
pub struct Coord {
    /// Row (y).
    pub row: usize,
    /// Column (x).
    pub col: usize,
}

/// Set of available numbers.
///
/// This is a bitmask in order to minimize the space needed to store the set.
/// It has operations for checking which numbers it contains and adding or
/// removing them.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
struct AvailSet(u16);

/// Sudoku Board.
///
/// This is a 2D board, it's just been flattened into a single vec.
///
/// This type provides methods for iterating over rows, columns, and sectors.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct Board(Vec<AvailSet>);
```
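The `AvailSet` methods aren't shown above, so here's a rough sketch of what the bitmask operations look like. The method names here are illustrative, simplified from the real implementation:

```rust
/// Simplified stand-in for AvailSet: bit n-1 set means the number n
/// (1..=9) is still possible in the cell.
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
struct AvailSet(u16);

impl AvailSet {
    /// Set containing all of 1..=9.
    fn all() -> Self {
        AvailSet(0b1_1111_1111)
    }

    /// Set containing only `n`.
    fn only(n: u16) -> Self {
        AvailSet(1 << (n - 1))
    }

    /// Is `n` still possible?
    fn contains(self, n: u16) -> bool {
        self.0 & (1 << (n - 1)) != 0
    }

    /// Remove `n` from the set, returning true if it was present.
    fn remove(&mut self, n: u16) -> bool {
        let had = self.contains(n);
        self.0 &= !(1 << (n - 1));
        had
    }

    /// How many numbers are still possible?
    fn len(self) -> u32 {
        self.0.count_ones()
    }

    /// If exactly one number remains, return it.
    fn get_single(self) -> Option<u16> {
        if self.len() == 1 {
            Some(self.0.trailing_zeros() as u16 + 1)
        } else {
            None
        }
    }
}

fn main() {
    let mut cell = AvailSet::all();
    for n in 1..=8 {
        cell.remove(n);
    }
    assert_eq!(cell.get_single(), Some(9));
}
```

Packing the whole set into a `u16` keeps each cell at two bytes, which matters later when we start copying boards around.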

Then I started working on solving it. I focused on this as just a library first; an
interface to actually *use* the solver could come later.

## Deduction vs Induction

I decided to split the solver into two steps, which I refer to as deductive solving and inductive solving. Deductive solving is when you’re positively proving which numbers go where, or at least definitively eliminating some numbers. Inductive solving is when you can no longer definitively eliminate or prove anything and have to use a search algorithm such as a breadth-first or depth-first search to find the solution.

Note that the inductive algorithms can in theory be used to solve the whole puzzle;
however, to improve efficiency, you can prune a *lot* of potential solutions from the
search space with some solid deductions. Surprisingly, published Sudoku puzzles, such as
those you find in books or newspapers, can actually be solved completely deductively,
without any DFS or BFS at all.

### Deduction

To facilitate deduction, my Sudoku board consists of a grid of sets, where each cell contains the set of numbers which the algorithm still thinks that cell could possibly take. A cell's value is known when only one number is left in its set. To set up an initial board, all the known cells are set to contain only the known value, and all other cells contain every possible value, like this:

The solver then checks what deductions are possible and uses those to eliminate whatever numbers it can. Initially, I did this in the simplest way possible, by just looking for cells with a single number in them and eliminating that number from all of the cell's neighbors (I define a cell's neighbors as every cell in the same row, column, or sector). Eliminating a number from all neighbors may be the most obvious thing to do, but it isn't very thorough. There are vanishingly few boards you can solve through simple elimination like this. However, since we have the induction step, it's OK if our deduction step is somewhat primitive. The solver will still work, though we might not prune everything we could, and we'll spend time searching boards that we could have proven wrong if we were smarter.
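To make "neighbors" concrete, here's an illustrative sketch of enumerating a cell's neighbors on the 9x9 grid (the real `Board` iterates rows, columns, and sectors directly, so the function name here is made up):

```rust
// Every cell in the same row, column, or sector as (row, col), excluding
// the cell itself. Sectors are the nine 3x3 boxes.
fn neighbors(row: usize, col: usize) -> Vec<(usize, usize)> {
    let (sr, sc) = (row / 3 * 3, col / 3 * 3); // top-left of the sector
    let mut out = Vec::new();
    for i in 0..9 {
        if i != col {
            out.push((row, i)); // rest of the row
        }
        if i != row {
            out.push((i, col)); // rest of the column
        }
    }
    for r in sr..sr + 3 {
        for c in sc..sc + 3 {
            // sector cells not already covered by the row/column passes
            if r != row && c != col {
                out.push((r, c));
            }
        }
    }
    out
}

fn main() {
    // 8 row + 8 column + 4 remaining sector cells = 20 neighbors per cell.
    assert_eq!(neighbors(0, 0).len(), 20);
    assert_eq!(neighbors(4, 7).len(), 20);
}
```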

So then the question becomes, ok, what else can we prove?

I ended up coming up with four groups of rules for reducing the set of possibilities, often based on techniques I would use when solving a board manually. They are:

- Single value left in cell.
- Last place in row/column/sector for value.
- Sector in a row/column reduced to 3 values.
- Only possible sector in a row/column or only possible row/column in a sector.

Those are some very long and possibly confusing names. What do they even mean?

**Single value left in cell** is pretty straightforward. This is the one we started with.
If a cell only has one value left in it, you can eliminate that value from the rest of its
row, column, and sector.

**Last place in row/column/sector for value** is also fairly straightforward. If there’s
only one place left in a row, column, or sector that could contain a given value, then the
cell must have that value, and any other possibilities for that cell can be eliminated.
Here’s an example:

Here, the cell highlighted in green must be a 2 because 2 has been eliminated from all other cells in that sector and column (but not row), and it can be proved based on either the sector or column in this case.
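Sketched on a single row of candidate bitmasks (the same idea applies to columns and sectors), this rule looks something like the following; the function name is invented for illustration:

```rust
// "Last place for a value" on one row of nine candidate bitmasks
// (bit n-1 set = number n still possible in that cell).
fn fix_last_places(row: &mut [u16; 9]) {
    for n in 1u16..=9 {
        let bit = 1 << (n - 1);
        let spots: Vec<usize> = (0..9).filter(|&i| row[i] & bit != 0).collect();
        if spots.len() == 1 {
            // Only one cell can still hold n, so it must: drop the cell's
            // other candidates.
            row[spots[0]] = bit;
        }
    }
}

fn main() {
    let all = 0b1_1111_1111u16;
    let mut row = [all; 9];
    // Suppose 2 has been eliminated everywhere in the row except cell 4.
    for i in 0..9 {
        if i != 4 {
            row[i] &= !(1 << 1);
        }
    }
    fix_last_places(&mut row);
    assert_eq!(row[4], 1 << 1); // cell 4 is now pinned to 2
}
```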

**Sector in a row/column reduced to 3 values** is actually quite similar to the case where
a single value is left in a cell. In this case, instead of only having one value left in a
single cell, we look at the intersection between either a row and a sector or a column and
a sector. Because row/sector and column/sector overlaps have 3 cells each, if the set of
numbers remaining in such an intersection has exactly 3 values, then you can eliminate
those numbers from the rest of the row or column, even if you don't know exactly where
each one goes yet. Here's an example:

Because the 4 and 9 in the green cells, along with the 2 in that sector, are exactly the 3 numbers left for that row/sector intersection, it is possible to eliminate 4 and 9 from the rest of the row, even though we don't yet know which cell should be 4 and which should be 9.
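On a single row (with cells 0..3, 3..6, and 6..9 lying in different sectors), this rule can be sketched like so; again, the names are illustrative rather than the real API:

```rust
// If the union of candidates across a 3-cell row/sector intersection has
// exactly three numbers, those numbers cannot appear elsewhere in the row.
fn reduce_triples(row: &mut [u16; 9]) {
    for s in 0..3 {
        let lo = s * 3;
        let union = row[lo] | row[lo + 1] | row[lo + 2];
        if union.count_ones() == 3 {
            for i in 0..9 {
                if i / 3 != s {
                    row[i] &= !union; // strip those numbers from the rest
                }
            }
        }
    }
}

fn main() {
    let all = 0b1_1111_1111u16;
    let mut row = [all; 9];
    // The first intersection is down to {2, 4, 9} spread over its cells.
    row[0] = (1 << 1) | (1 << 3); // {2, 4}
    row[1] = 1 << 1; // {2}
    row[2] = (1 << 3) | (1 << 8); // {4, 9}
    reduce_triples(&mut row);
    // 2, 4, and 9 are now gone from the rest of the row.
    assert_eq!(row[3] & ((1 << 1) | (1 << 3) | (1 << 8)), 0);
}
```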

**Only possible sector in a row/column or only possible row/column in a sector** is
probably the most complicated to explain. Like sector in a row/column reduced to 3 values,
this deals with intersections between sectors and rows/columns. If a number appears in
only one sector within a row or column, then you know the number has to occupy that row
within the sector and can eliminate it from the rest of the sector. Likewise, if a number
appears in only one row or column within a sector, then you know the number has to occupy
that sector within the row and can eliminate it from the rest of the row or column. Here’s
an example:

Here the first row on the board can only contain the number 2 in the two cells highlighted in green. Because both of those possible cells are in the same sector, the 2 for that row must be in that sector, and therefore that sector cannot contain a 2 on any other row.
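Here's an illustrative sketch of the row-to-sector direction of this rule, for a single number on a full grid of bitmasks (the function name is invented; the real solver consults its counters instead of rescanning the row):

```rust
// If every remaining spot for n in `row` falls in one sector, eliminate n
// from that sector's other rows. Bit n-1 set = number n still possible.
fn pin_number_to_sector(board: &mut [[u16; 9]; 9], row: usize, n: u16) {
    let bit = 1 << (n - 1);
    let sectors: Vec<usize> = (0..9)
        .filter(|&c| board[row][c] & bit != 0)
        .map(|c| c / 3)
        .collect();
    if let Some(&s) = sectors.first() {
        if sectors.iter().all(|&x| x == s) {
            // n can only land in sector s of this row, so no other row of
            // that sector may contain n.
            let (sr, sc) = (row / 3 * 3, s * 3);
            for r in sr..sr + 3 {
                for c in sc..sc + 3 {
                    if r != row {
                        board[r][c] &= !bit;
                    }
                }
            }
        }
    }
}

fn main() {
    let all = 0b1_1111_1111u16;
    let mut board = [[all; 9]; 9];
    // In row 0, the 2 can only go in columns 0 and 2 (the first sector).
    for c in 3..9 {
        board[0][c] &= !(1 << 1);
    }
    board[0][1] &= !(1 << 1);
    pin_number_to_sector(&mut board, 0, 2);
    // Rows 1 and 2 of that sector can no longer contain a 2.
    assert_eq!(board[1][0] & (1 << 1), 0);
    assert_eq!(board[2][2] & (1 << 1), 0);
}
```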

Note that some of these deductions require looking at values across rows, columns, and sectors. In order to avoid searching the whole board constantly, in addition to keeping track of which numbers are available in each cell individually, I also keep track of how many of each number is available in every row, column, sector, row-sector intersection, and column-sector intersection. This allows a lot of queries regarding the number of items left in particular combinations to be trivial lookups rather than a search.
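As an illustration of the idea, only showing row counters and with names invented for this sketch (the real solver keeps column, sector, and intersection counters too, and updates them incrementally as numbers are eliminated rather than rebuilding them):

```rust
// Per-row counters: how many cells in each row can still take each number.
struct RowCounts([[u8; 9]; 9]); // counts[row][n - 1]

impl RowCounts {
    fn from_masks(board: &[[u16; 9]; 9]) -> Self {
        let mut counts = [[0u8; 9]; 9];
        for r in 0..9 {
            for c in 0..9 {
                for n in 0..9 {
                    if board[r][c] & (1 << n) != 0 {
                        counts[r][n] += 1;
                    }
                }
            }
        }
        RowCounts(counts)
    }

    /// O(1) query: how many cells in `row` can still hold `n`?
    fn avail(&self, row: usize, n: u16) -> u8 {
        self.0[row][(n - 1) as usize]
    }
}

fn main() {
    let all = 0b1_1111_1111u16;
    let mut board = [[all; 9]; 9];
    // Pin (0, 4) to 2 and eliminate 2 from the rest of row 0.
    board[0][4] = 1 << 1;
    for c in 0..9 {
        if c != 4 {
            board[0][c] &= !(1 << 1);
        }
    }
    let counts = RowCounts::from_masks(&board);
    assert_eq!(counts.avail(0, 2), 1); // "last place" checks become lookups
    assert_eq!(counts.avail(1, 2), 9);
}
```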

Because a Sudoku board is only 9 by 9, all these combinations don't actually take up that much memory: 81 individual cells + 9 rows + 9 columns + 9 sectors + 81 row-sectors + 81 column-sectors = 270 items. Individual cells are 2 bytes each, and all of the other counter entries are 9 bytes each, for a total of 1863 bytes, not accounting for padding. This means that we can log progress for every step along the way to a solution without actually using all that much memory. I'm not sure exactly what a typical solution length is, but the example problem we've been using has 135 steps, so roughly 250 KiB of memory to record every step along the way.
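That arithmetic is easy to sanity-check:

```rust
// Verifying the size estimate: 81 cells at 2 bytes each, plus one 9-count
// entry (1 byte per number) for each row, column, sector, and intersection.
fn main() {
    let cell_bytes = 81 * 2;
    let counter_entries = 9 + 9 + 9 + 81 + 81; // rows, cols, sectors, row-/col-sectors
    let counter_bytes = counter_entries * 9;
    assert_eq!(cell_bytes + counter_bytes, 1863);
}
```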

One other thing the deductive solver effectively does is check if the board is impossible to solve. As it eliminates possible solutions, if at any time it runs out of possibilities for any cell, then the board must not be solvable. If that happens from the initial board, then the puzzle is unsolvable, but if that happens from an inductive solver guess, then it means we need to try a different guess.

### Induction

What happens when we've eliminated everything we can and still don't have a solution? We need to guess a number and then go back to trying to eliminate things. That's the job of the inductive solver. It's implemented as a simple depth-first search: select the first available cell with multiple candidates, guess the first one, and go back to deductive solving until either the guess is proved wrong or another guess is needed.
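Here's a condensed, self-contained sketch of that loop, using only the simple neighbor elimination as its deductive step (the real solver interleaves the full rule set, and its types differ):

```rust
// Bitmask board: bit n-1 set = number n (1..=9) still possible.
type Board = [[u16; 9]; 9];
const ALL: u16 = 0b1_1111_1111;

fn solved(b: &Board) -> bool {
    b.iter().flatten().all(|m| m.count_ones() == 1)
}

/// Eliminate solved cells' values from their neighbors until nothing
/// changes. Returns false if some cell runs out of candidates.
fn deduce(b: &mut Board) -> bool {
    loop {
        let mut changed = false;
        for r in 0..9 {
            for c in 0..9 {
                if b[r][c] == 0 {
                    return false; // contradiction: a guess was wrong
                }
                if b[r][c].count_ones() != 1 {
                    continue;
                }
                let bit = b[r][c];
                for i in 0..9 {
                    if i != c && b[r][i] & bit != 0 {
                        b[r][i] &= !bit;
                        changed = true;
                    }
                    if i != r && b[i][c] & bit != 0 {
                        b[i][c] &= !bit;
                        changed = true;
                    }
                }
                let (sr, sc) = (r / 3 * 3, c / 3 * 3);
                for rr in sr..sr + 3 {
                    for cc in sc..sc + 3 {
                        if (rr, cc) != (r, c) && b[rr][cc] & bit != 0 {
                            b[rr][cc] &= !bit;
                            changed = true;
                        }
                    }
                }
            }
        }
        if !changed {
            return true;
        }
    }
}

fn solve(mut b: Board) -> Option<Board> {
    if !deduce(&mut b) {
        return None;
    }
    if solved(&b) {
        return Some(b);
    }
    // Guess at the first cell that still has multiple candidates.
    let (r, c) = (0..81)
        .map(|i| (i / 9, i % 9))
        .find(|&(r, c)| b[r][c].count_ones() > 1)?;
    for n in 0..9 {
        if b[r][c] & (1 << n) != 0 {
            let mut guess = b;
            guess[r][c] = 1 << n;
            if let Some(done) = solve(guess) {
                return Some(done);
            }
        }
    }
    None // every guess failed; the caller must try something else
}

fn main() {
    // Even a blank board is solvable this way.
    let sol = solve([[ALL; 9]; 9]).expect("blank board has solutions");
    for r in 0..9 {
        assert_eq!(sol[r].iter().fold(0, |acc, m| acc | m), ALL);
    }
}
```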

There are some other things that could be added here, like heuristically selecting cells with fewer available guesses rather than just guessing from the first unspecified cell, but honestly, the deductive part is more interesting, and it’s powerful enough that for fully-specified sudoku boards (ones with only one solution), DFS is often not needed at all, or only needs to go to depth 1 (as is the case in the example puzzle I’ve been using).

## UI

Once I had my solver, I needed a way to actually use it. What I ended up doing is binding the library on my web server (which happens to use Rust as well, so that was easy), and creating a REST API for it. I then put together a small React app to allow users to enter a puzzle and solve it, with an optional toggle to have the solver report all of the steps to reach the solution.

One issue that came up here was that the deductive solver applied whichever deduction was next in its queue, based purely on the order steps were enqueued. That works fine for a machine, but it's less good when showing steps to a human. I decided to define a sort order for deduction step types and put them in a priority queue, so deductions that would be more obvious to a human come first. Essentially, they're chosen in the order I listed them above: if there's a lone number in a cell, that gets visited first, before any tricky deductions about intersections of rows/columns and sectors.
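One way to get that ordering almost for free in Rust is to derive `Ord` on a step-kind enum, since derived ordering follows variant declaration order. The variant names below are illustrative, not the real ones:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Derived Ord follows declaration order, so the "most obvious" deduction
// kinds sort first.
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum DeductionKind {
    SingleValueInCell,
    LastPlaceForValue,
    IntersectionTriple,
    OnlyPossibleIntersection,
}

fn main() {
    // BinaryHeap is a max-heap, so Reverse makes it pop smallest-first.
    let mut queue = BinaryHeap::new();
    queue.push(Reverse(DeductionKind::OnlyPossibleIntersection));
    queue.push(Reverse(DeductionKind::SingleValueInCell));
    queue.push(Reverse(DeductionKind::IntersectionTriple));
    assert_eq!(queue.pop(), Some(Reverse(DeductionKind::SingleValueInCell)));
}
```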

You can play with the complete solver here.