# What can be the efficient approach to solve the 8 puzzle problem?

Score: 15

I will just attempt to rewrite the previous answer with more details on why it is optimal.

The A* algorithm, taken directly from Wikipedia, is:

```
function A*(start, goal)
    closedset := the empty set                 // The set of nodes already evaluated.
    openset := set containing the initial node // The set of tentative nodes to be evaluated.
    came_from := the empty map                 // The map of navigated nodes.
    g_score[start] := 0                        // Distance from start along optimal path.
    h_score[start] := heuristic_estimate_of_distance(start, goal)
    f_score[start] := h_score[start]           // Estimated total distance from start to goal.
    while openset is not empty
        x := the node in openset having the lowest f_score[] value
        if x = goal
            return reconstruct_path(came_from, came_from[goal])
        remove x from openset
        add x to closedset                     // mark x as evaluated
        foreach y in neighbor_nodes(x)
            if y in closedset
                continue
            tentative_g_score := g_score[x] + dist_between(x, y)

            if y not in openset
                add y to openset               // y is a newly discovered node
                tentative_is_better := true
            elseif tentative_g_score < g_score[y]
                tentative_is_better := true
            else
                tentative_is_better := false
            if tentative_is_better = true
                came_from[y] := x
                g_score[y] := tentative_g_score
                h_score[y] := heuristic_estimate_of_distance(y, goal)
                f_score[y] := g_score[y] + h_score[y]
    return failure

function reconstruct_path(came_from, current_node)
    if came_from[current_node] is set
        p := reconstruct_path(came_from, came_from[current_node])
        return (p + current_node)
    else
        return current_node
```

So let me fill in all the details here.

`heuristic_estimate_of_distance` is the function &Sigma; d(xi), where d(.) is the Manhattan distance of each square xi from its goal state.

So the setup

```
1 2 3
4 7 6
8 5
```

would have a `heuristic_estimate_of_distance` of 1+2+1=4, since 8 and 5 are each one away from their goal position with d(.)=1, and 7 is two away from its goal state with d(7)=2.
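As a minimal sketch of that heuristic (assuming states are represented as 3x3 tuples of tuples with 0 for the empty square; this representation and the function name are my own, not from the answer):

```python
# Manhattan-distance heuristic: for every tile (the blank excluded),
# add |row - goal_row| + |col - goal_col|.
GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))  # 0 marks the empty square

def manhattan(state, goal=GOAL):
    # Where each tile belongs in the goal configuration.
    goal_pos = {tile: (r, c)
                for r, row in enumerate(goal)
                for c, tile in enumerate(row)}
    total = 0
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile != 0:  # the empty square contributes nothing
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

start = ((1, 2, 3),
         (4, 7, 6),
         (8, 5, 0))
print(manhattan(start))  # d(8) + d(7) + d(5) = 1 + 2 + 1 = 4
```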

The set of nodes that A* searches over is defined to be the starting position followed by all possible legal positions. That is, let's say the starting position `x` is as above:

```
x =
1 2 3
4 7 6
8 5
```

then the function `neighbor_nodes(x)` produces the 2 possible legal moves:

```
1 2 3
4 7
8 5 6

or

1 2 3
4 7 6
8   5
```
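For illustration, here is one way `neighbor_nodes` could be sketched, assuming states are 3x3 tuples of tuples with 0 for the empty square (my own representation, not part of the answer):

```python
def neighbor_nodes(state):
    # Locate the empty square, then swap it with each orthogonal neighbour.
    (r, c) = next((i, j) for i, row in enumerate(state)
                  for j, t in enumerate(row) if t == 0)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            result.append(tuple(tuple(row) for row in grid))
    return result

x = ((1, 2, 3),
     (4, 7, 6),
     (8, 5, 0))
print(len(neighbor_nodes(x)))  # 2, matching the two positions above
```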

The function `dist_between(x,y)` is defined as the number of square moves that took place to transition from state `x` to `y`. Since each transition between neighbouring states moves exactly one square, this is always going to be equal to 1 for the purposes of your algorithm.

`closedset` and `openset` are both specific to the A* algorithm and can be implemented using standard data structures (a priority queue for the open set, I believe.) `came_from` is a data structure used to reconstruct the solution found using the function `reconstruct_path`, whose details can be found on Wikipedia. If you do not wish to remember the solution, you do not need to implement this.
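The pseudocode above can be sketched concretely in Python, using `heapq` as the priority queue for the open set and a plain `set` for the closed set. This is a sketch under my own assumptions (3x3 tuples of tuples, 0 for the blank; all helper names are mine), not a definitive implementation:

```python
import heapq

GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))  # 0 is the empty square

def manhattan(state):
    goal_pos = {t: (r, c) for r, row in enumerate(GOAL)
                for c, t in enumerate(row)}
    return sum(abs(r - goal_pos[t][0]) + abs(c - goal_pos[t][1])
               for r, row in enumerate(state)
               for c, t in enumerate(row) if t != 0)

def neighbor_nodes(state):
    (r, c) = next((i, j) for i, row in enumerate(state)
                  for j, t in enumerate(row) if t == 0)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            yield tuple(tuple(row) for row in grid)

def a_star(start):
    # Open set as a heap of (f, g, state); closed set as a plain set.
    open_heap = [(manhattan(start), 0, start)]
    g_score = {start: 0}
    came_from = {}
    closed = set()
    while open_heap:
        f, g, x = heapq.heappop(open_heap)
        if x == GOAL:
            path = [x]                 # reconstruct_path by walking came_from
            while x in came_from:
                x = came_from[x]
                path.append(x)
            return path[::-1]
        if x in closed:
            continue                   # stale heap entry, already evaluated
        closed.add(x)
        for y in neighbor_nodes(x):
            tentative = g + 1          # dist_between(x, y) is always 1
            if tentative < g_score.get(y, float("inf")):
                g_score[y] = tentative
                came_from[y] = x
                heapq.heappush(open_heap,
                               (tentative + manhattan(y), tentative, y))
    return None                        # unsolvable configuration

path = a_star(((1, 2, 3), (4, 7, 6), (8, 5, 0)))
print(len(path) - 1)  # number of moves in an optimal solution
```

Rather than updating priorities in place, this pushes duplicate heap entries and skips stale ones on pop, which is the usual way to use `heapq` for A*.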

Last, I will address the issue of optimality. Consider the excerpt from the A* Wikipedia article:

"If the heuristic function h is admissible, meaning that it never overestimates the actual minimal cost of reaching the goal, then A* is itself admissible (or optimal) if we do not use a closed set. If a closed set is used, then h must also be monotonic (or consistent) for A* to be optimal. This means that for any pair of adjacent nodes x and y, where d(x,y) denotes the length of the edge between them, we must have: h(x) <= d(x,y) + h(y)"

So it suffices to show that our heuristic is admissible and monotonic. For the former (admissibility), note that given any configuration, our heuristic (sum of all distances) pretends that each square is not constrained by legal moves and can move freely towards its goal position. This is clearly an optimistic estimate, hence our heuristic is admissible: it never over-estimates, since reaching the goal position will always take at least as many moves as the heuristic estimates.

The monotonicity requirement stated in words is: "The heuristic cost (estimated distance to goal state) of any node must be less than or equal to the cost of transitioning to any adjacent node plus the heuristic cost of that node."

It is mainly there to prevent the possibility of negative cycles, where transitioning to an unrelated node may decrease the distance to the goal node by more than the cost of actually making the transition, suggesting a poor heuristic.

Showing monotonicity is pretty simple in our case. Any adjacent nodes x, y have d(x,y) = 1 by our definition of d. Thus we need to show

h(x) <= h(y) + 1

which is equivalent to

h(x) - h(y) <= 1

which is equivalent to

&Sigma; d(xi) - &Sigma; d(yi) <= 1

which is equivalent to

&Sigma; (d(xi) - d(yi)) <= 1

We know by our definition of `neighbor_nodes(x)` that two neighbour nodes x, y can differ in the position of at most one square, meaning that in our sums the term

d(xi) - d(yi) = 0

for all but one value of i. Let's say without loss of generality this is true of i = k. Furthermore, we know that for i = k, the square has moved at most one place, so its distance to the goal position can be at most one more than in the previous state, thus:

&Sigma; (d(xi) - d(yi)) = d(xk) - d(yk) <= 1

showing monotonicity. This shows what needed to be shown, thus proving this algorithm will be optimal (in a big-O notation or asymptotic kind of way.)
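The inequality is also easy to spot-check empirically. The sketch below defines its own small helpers (3x3 tuples of tuples, 0 for the blank; names are mine), walks randomly from the goal, and verifies |h(x) - h(y)| <= 1 for every neighbour pair it encounters:

```python
import random

GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def manhattan(state):
    goal_pos = {t: (r, c) for r, row in enumerate(GOAL)
                for c, t in enumerate(row)}
    return sum(abs(r - goal_pos[t][0]) + abs(c - goal_pos[t][1])
               for r, row in enumerate(state)
               for c, t in enumerate(row) if t != 0)

def neighbor_nodes(state):
    (r, c) = next((i, j) for i, row in enumerate(state)
                  for j, t in enumerate(row) if t == 0)
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            out.append(tuple(tuple(row) for row in grid))
    return out

rng = random.Random(0)
x = GOAL
for _ in range(500):
    for y in neighbor_nodes(x):
        # h(x) <= d(x,y) + h(y) with d(x,y) = 1, in both directions
        assert abs(manhattan(x) - manhattan(y)) <= 1
    x = rng.choice(neighbor_nodes(x))
print("monotonicity held along the whole walk")
```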

Note that I have shown optimality in terms of big-O notation, but there is still lots of room to play in terms of tweaking the heuristic. You can add additional twists so that it is a closer estimate of the actual distance to the goal state; however, you have to make sure that the heuristic is always an underestimate, otherwise you lose optimality!

## EDIT MANY MOONS LATER

Reading this over again (much) later, I realized the way I wrote it sort of confounds the meaning of optimality of this algorithm.

There are two distinct meanings of optimality I was trying to get at here:

1) The algorithm produces an optimal solution, that is, the best possible solution given the objective criteria.

2) The algorithm expands the least number of state nodes of all possible algorithms using the same heuristic.

The simplest way to understand why you need admissibility and monotonicity of the heuristic to obtain 1) is to view A* as an application of Dijkstra's shortest path algorithm on a graph where the edge weights are given by the node distance traveled thus far plus the heuristic distance. Without these two properties, we would have negative edges in the graph, thereby negative cycles would be possible, and Dijkstra's shortest path algorithm would no longer return the correct answer! (Construct a simple example of this to convince yourself.)

2) is actually quite confusing to understand. To fully grasp the meaning of this statement, note that it carries a lot of quantifiers: when talking about other algorithms, one refers to `similar` algorithms to A* that expand nodes and search without a priori information (other than the heuristic.) Obviously, one can construct a trivial counter-example otherwise, such as an oracle or genie that tells you the answer at every step of the way. To understand this statement in depth, I highly suggest reading the last paragraph in the History section on Wikipedia, as well as looking into all the citations and footnotes in that carefully stated sentence.

I hope this clears up any remaining confusion among would-be readers.

Score: 11

You can use a heuristic that is based on the positions of the numbers: the higher the overall sum of the distances of each tile from its goal state, the higher the heuristic value. Then you can implement A* search, which can be proved to be the optimal search in terms of time and space complexity (provided the heuristic is monotonic and admissible.) http://en.wikipedia.org/wiki/A*_search_algorithm

Score: 6

Since the OP cannot post a picture, this is what he's talking about. As far as solving this puzzle goes, take a look at the iterative deepening depth-first search algorithm, as made relevant to the 8-puzzle problem by this page.

Score: 4

Donut's got it! IDDFS will do the trick, considering the relatively limited search space of this puzzle. It would be efficient, hence responding to the OP's question. It would find the optimal solution, but not necessarily in optimal complexity.

Implementing IDDFS would be the more complicated part of this problem; I just want to suggest a simple approach to managing the board, the game's rules, etc. This in particular addresses a way to obtain initial states for the puzzle which are solvable. As hinted in the notes of the question, not all random assignments of 9 tiles (considering the empty slot a special tile) will yield a solvable puzzle. It is a matter of mathematical parity... So, here's a suggestion to model the game:

Make the list of all 3x3 permutation matrices which represent valid "moves" of the game. Such a list is a subset of the 3x3 matrices with all zeros and two ones. Each matrix gets an ID, which will be quite convenient to keep track of the moves in the IDDFS search tree. An alternative to matrices is to have two-tuples of the tile position numbers to swap; this may lead to a faster implementation.

Such matrices can be used to create the initial puzzle state, starting with the "win" state and running an arbitrary number of permutations selected at random. In addition to ensuring that the initial state is solvable, this approach also provides an indicative number of moves with which a given puzzle can be solved.
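A sketch of that idea, using two-tuples of positions to swap (the alternative mentioned above) rather than permutation matrices; the helper names and the inversion-parity check are my additions, not part of the answer:

```python
import random

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # flat 3x3 board, 0 = empty slot

# Valid "moves" as two-tuples of board positions to swap: the empty
# slot paired with an orthogonally adjacent slot on the 3x3 grid.
MOVES = [(i, j) for i in range(9) for j in range(9)
         if abs(i - j) == 3 or (abs(i - j) == 1 and i // 3 == j // 3)]

def scramble(n_moves, seed=None):
    # Start from the "win" state and apply random legal moves, so the
    # result is solvable by construction (and solvable in <= n_moves).
    rng = random.Random(seed)
    board = list(GOAL)
    for _ in range(n_moves):
        blank = board.index(0)
        i, j = rng.choice([m for m in MOVES if blank in m])
        board[i], board[j] = board[j], board[i]
    return tuple(board)

def inversions(board):
    # Parity check: a 3x3 sliding puzzle is solvable iff the number of
    # inversions among the tiles (ignoring the blank) is even.
    tiles = [t for t in board if t != 0]
    return sum(tiles[a] > tiles[b]
               for a in range(len(tiles)) for b in range(a + 1, len(tiles)))

board = scramble(30, seed=42)
print(board, "even parity:", inversions(board) % 2 == 0)
```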

Now let's just implement the IDDFS algo and [joke]return the assignment for an A+[/joke]...

Score: 4

This is an example of the classical shortest path algorithm. You can read more about shortest path here and here.

In short, think of all possible states of the puzzle as vertices in some graph. With each move you change states, so each valid move represents an edge of the graph. Since moves don't have any cost, you may think of the cost of each move as being 1. The following C++-like pseudo-code will work for this problem:

```
int solve() {
    int[][] field = new int[3][3];
    // fill the input here
    map<string, int> path;
    queue<string> q;
    put(encode(field), 0); // we can get to the starting position in 0 turns
    while (!q.empty()) {
        string v = q.poll();
        int[][] take = decode(v);
        int time = path.get(v);
        if (isFinalPosition(take)) {
            return time;
        }
        for each valid move from take to int[][] newPosition {
            put(encode(newPosition), time + 1);
        }
    }
    // no path
    return -1;
}

bool isFinalPosition(int[][] q) {
    return encode(q) == "123456780"; // 0 represents empty space
}

void put(string s, int time) {
    if (!path.contains(s)) { // visit each state at most once
        path.put(s, time);
        q.add(s);            // enqueue the newly discovered state
    }
}

string encode(int[][] field) {
    string s = "";
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            s += field[i][j];
    return s;
}

int[][] decode(string s) {
    int[][] ans = new int[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            ans[i][j] = s[i * 3 + j];
    return ans;
}
```
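For reference, a runnable Python sketch of the same breadth-first search, keeping the string encoding with 0 for the empty space (the function name is mine):

```python
from collections import deque

def solve(start):
    # start is a 9-character string read row by row, e.g. "123476850";
    # returns the minimum number of moves, or -1 if the goal is unreachable.
    goal = "123456780"
    dist = {start: 0}  # plays the role of `path` in the pseudo-code above
    q = deque([start])
    while q:
        v = q.popleft()
        if v == goal:
            return dist[v]
        blank = v.index("0")
        r, c = divmod(blank, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                j = nr * 3 + nc
                s = list(v)
                s[blank], s[j] = s[j], s[blank]  # slide a tile into the blank
                w = "".join(s)
                if w not in dist:  # each state enqueued at most once
                    dist[w] = dist[v] + 1
                    q.append(w)
    return -1  # start was an unsolvable configuration

print(solve("123476850"))  # the example position from the accepted answer
```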
Score: 2

See this link for my parallel iterative deepening search for a solution to the 15-puzzle, which is the 4x4 big brother of the 8-puzzle.
