Let us use an example to illustrate the relation between search and knowledge representation. Consider the simple number puzzle some of us may know from our childhood days. The numbers from one to eight are arranged in a 3x3 field. Additionally, there is an empty spot you can move adjacent numbers to in order to change the state, or configuration, of the puzzle. Starting from a random configuration your goal is to move numbers around until you reach the solution configuration of the puzzle. Little surprise, the solution to this puzzle looks like this (_ marks the empty spot):
1 2 3
4 5 6
7 8 _
First we will need a representation for this problem that makes it accessible to search, i.e. some sort of graph representation. After that we will have to think about how to efficiently search this graph.
The representation of a configuration of the puzzle is straightforward. A two-dimensional array, as available in C, C++ and Java, or a nested list as in LISP, is a suitable data structure. Every number in the puzzle can be stored as the respective integer; the empty spot might be stored as -1 or any other integer not used for the puzzle tiles. Each configuration will be a node in our domain space to be searched. What remains is the problem of the edges between the nodes, i.e. the transitions between configurations.
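As a minimal sketch - not JSL code, all names hypothetical - such a configuration could be stored like this in Java:

```java
// Hypothetical sketch of a puzzle configuration as a 3x3 int array.
// Tiles hold 1..8; the empty spot is stored as -1, as suggested above.
class PuzzleState {
    static final int EMPTY = -1;
    final int[][] tiles;

    PuzzleState(int[][] tiles) {
        this.tiles = tiles;
    }

    // The goal configuration: 1 2 3 / 4 5 6 / 7 8 _
    static PuzzleState goal() {
        return new PuzzleState(new int[][] {
            {1, 2, 3},
            {4, 5, 6},
            {7, 8, EMPTY}
        });
    }

    boolean isGoal() {
        return java.util.Arrays.deepEquals(tiles, goal().tiles);
    }
}
```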
Let's assume this is our initial configuration of the puzzle which is represented as the root node of our search graph:
1 3 5
7 4 2
8 _ 6

This configuration must be brought into the goal configuration, the solution shown above. As we can see, three moves are possible: we could move 4 down, move 8 right, or move 6 left. This means this configuration has three successor configurations, represented by nodes and connected by edges in our graph. To make life a bit easier for us we use a tiny trick and assume we are not moving number tiles, but the empty spot. This way there is only a single tile to move in at most four different directions, and we can encode each move simply as "left", "right", "up" or "down", as it is now clear which tile moves.
For illustration, here is a fragment of our domain graph, i.e. the graph reachable from our initial configuration (the labels name the direction the empty spot is moved in):

                  1 3 5
                  7 4 2
                  8 _ 6
         /left     |right     \up
      1 3 5       1 3 5       1 3 5
      7 4 2       7 4 2       7 _ 2
      _ 8 6       8 6 _       8 4 6
     /up \right  /up \left   /up |down |left \right
    ...   ...   ...   ...   ...   ...   ...   ...
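The successor generation via the empty spot can be sketched as follows (a hypothetical helper, not JSL's API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: generate successor configurations by moving the empty spot
// (stored as -1) in one of at most four directions.
class Successors {
    static final int EMPTY = -1;
    // Row/column offsets for "up", "down", "left", "right".
    static final int[][] MOVES = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};

    static List<int[][]> expand(int[][] board) {
        int er = 0, ec = 0;
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                if (board[r][c] == EMPTY) { er = r; ec = c; }

        List<int[][]> result = new ArrayList<>();
        for (int[] m : MOVES) {
            int nr = er + m[0], nc = ec + m[1];
            if (nr < 0 || nr > 2 || nc < 0 || nc > 2) continue; // off the board
            int[][] copy = new int[3][];
            for (int r = 0; r < 3; r++) copy[r] = board[r].clone();
            copy[er][ec] = copy[nr][nc]; // tile slides into the empty spot
            copy[nr][nc] = EMPTY;        // empty spot takes the tile's place
            result.add(copy);
        }
        return result;
    }
}
```

The initial configuration above yields exactly the three successors shown in the diagram.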
Let's turn to how to search a graph constructed in the way described above. For simplicity we consider it a tree describing the domain to search in. A tree is just a directed acyclic graph with a single root in which every node except the root has exactly one parent:
            *
          / | \
         x  x  $
        /|\ /|\ /|\
       x x x x x x x x x

where * marks the root node, x an ordinary node and $ the goal node we are searching for.
While the goal of the search is clear, there are different strategies for finding it. Even though there are many others, JSL supports the basic search strategies depth-first, breadth-first and A*. Note, however, that any other search strategy can easily be plugged into JSL.
A depth-first search is the most intuitive and naive strategy. It could well be implemented without a library like this one, simply as a recursive descending function. The drawback of this strategy is that it does not guarantee to find the "best" solution, which normally refers to the shallowest goal in the tree/graph. Even worse, if the tree/graph to be searched has infinite depth, it is very likely that depth-first does not find the solution at all. Worst-case exponential time complexity is a property it shares with many search strategies, including breadth-first and A*.
This diagram shows the sequence of traversed nodes until the goal is found using depth-first search. The search starts with the root node (1) and descends as soon and as far as possible towards the fringe of the tree. You can see that 10 nodes have to be examined until the goal is found:
            1
          / | \
         2  6  10
        /|\ /|\ /|\
       3 4 5 7 8 9 x x x

This is not that bad, as the best goal actually is found. But consider this tree with two solutions $1 and $2, $2 being the better one as it lies more shallow in the tree. Without a retry, depth-first will find the worse solution $1, not $2:
            *
          / | \
         x  x  $2
        /|\ /|\ /|\
      $1 x x x x x x x x

Still worse - as already mentioned - when the tree has infinite depth, the goal might not be found at all:
            1
          / | \
         2  x  $
        /|\ /|\ /|\
       3 x x x x x x x x
       |
       4
       |
      ...

Depth-first will descend all the way down, crossing 1, 2, 3, 4, ... and will never find $.
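Such a recursive descending function might look like this generic sketch (hypothetical names; a depth limit is added to guard against the infinite-depth problem just shown):

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of a plain recursive depth-first search over a tree, generic in
// the node type. The depth limit cuts off infinite branches.
class DepthFirst {
    static <N> N search(N node, Predicate<N> isGoal,
                        Function<N, List<N>> successors, int depthLimit) {
        if (isGoal.test(node)) return node;
        if (depthLimit == 0) return null; // cut off the descent here
        for (N child : successors.apply(node)) {
            N found = search(child, isGoal, successors, depthLimit - 1);
            if (found != null) return found;
        }
        return null; // no goal in this subtree
    }
}
```

Without the limit, the call stack itself plays the role of the path from the root to the current node.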
Breadth-first has the advantage over depth-first search that it is guaranteed to find the best solution on the first try. It has no problems with multiple goals or infinite depth of the search space. The drawback is that - in addition to exponential time complexity - it has exponential space complexity in the worst case. A breadth-first strategy checks all nodes of one depth before descending to the next:
            1
          / | \
         2  3  4
        /|\ /|\ /|\
       x x x x x x x x x

Obviously, this guarantees that the shallowest solution is found. Here far fewer nodes need to be examined before finding the goal: only 4 are traversed, in contrast to the 10 needed by depth-first.
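A breadth-first search can be sketched with a FIFO queue (again hypothetical code, not the JSL API):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of breadth-first search: a FIFO queue guarantees that all nodes
// of one depth are examined before any node of the next depth.
class BreadthFirst {
    static <N> N search(N root, Predicate<N> isGoal,
                        Function<N, List<N>> successors) {
        Queue<N> fringe = new ArrayDeque<>();
        fringe.add(root);
        while (!fringe.isEmpty()) {
            N node = fringe.poll();       // shallowest unexamined node
            if (isGoal.test(node)) return node;
            fringe.addAll(successors.apply(node));
        }
        return null; // finite tree without a goal
    }
}
```

The exponential space complexity shows up directly in the queue: it holds a whole level of the tree at once.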
While breadth-first is the best "brute force" strategy you can get, there may be better ones when you have a bit more information about your problem. In particular, a good estimate of how far a node might be away from the actual goal turns out to be very helpful. If we had such a measure, why not simply walk from the root node straight to the goal, like this:
            1
          / | \
         x  x  2
        /|\ /|\ /|\
       x x x x x x x x x

In an A* search, the next node to be traversed is determined by the cost of reaching that node plus the estimated remaining cost of reaching the goal from that node. The most natural measure of cost is the depth of the path from the root node.
You can see that the heuristic estimate of the remaining cost is central to an A* search. If there were two solutions to our search, as already assumed in the depth-first example above,
            *
          / | \
         x  x  $2
        /|\ /|\ /|\
      $1 x x x x x x x x

and our heuristic had - wrongly - claimed the left node looked better than the right one, it could well have found the non-optimal solution $1 like this:
            1
          / | \
         2  x  $2
        /|\ /|\ /|\
       3 x x x x x x x x

While you will have to choose your heuristic carefully to get the best results, it can be shown that if your heuristic is admissible, i.e. its estimate of the remaining cost is never higher than the actual remaining cost, you will always find the optimal solution. When your heuristic is always 0 - which of course is admissible - and the measure of cost is the depth of the path from the root node, this strategy is equal to breadth-first.
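A generic A* search can be sketched like this, using the depth as path cost as described above (hypothetical names, not JSL's actual interface):

```java
import java.util.List;
import java.util.PriorityQueue;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.ToIntFunction;

// Sketch of A* over a tree: the next node examined is the one minimizing
// f = depth + heuristic, kept ordered in a priority queue.
class AStar {
    private static final class Entry<N> {
        final N node; final int depth; final int f;
        Entry(N node, int depth, int f) {
            this.node = node; this.depth = depth; this.f = f;
        }
    }

    static <N> N search(N root, Predicate<N> isGoal,
                        Function<N, List<N>> successors,
                        ToIntFunction<N> heuristic) {
        PriorityQueue<Entry<N>> fringe =
            new PriorityQueue<>((a, b) -> Integer.compare(a.f, b.f));
        fringe.add(new Entry<>(root, 0, heuristic.applyAsInt(root)));
        while (!fringe.isEmpty()) {
            Entry<N> e = fringe.poll(); // cheapest f = depth + heuristic first
            if (isGoal.test(e.node)) return e.node;
            for (N child : successors.apply(e.node))
                fringe.add(new Entry<>(child, e.depth + 1,
                        e.depth + 1 + heuristic.applyAsInt(child)));
        }
        return null;
    }
}
```

With the heuristic fixed to 0, f degenerates to the depth and the priority queue behaves like the FIFO queue of breadth-first.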
A* is the strategy used in the puzzle example; you can find it in the JSL release. The - admissible - heuristic estimate of the distance to the goal is calculated as the number of misplaced tiles of the puzzle. This will lead you to an optimal solution. An even better heuristic might include the distance of each misplaced tile to its "correct" place in the calculation. Note that this would in no case find a better solution, but might further decrease the number of configurations to traverse.
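The two heuristics mentioned could be sketched like this (assuming the -1 encoding of the empty spot from above; hypothetical code, not JSL's implementation). Both are admissible: every misplaced tile needs at least one move, and a tile can never get to its place in fewer moves than its horizontal plus vertical distance:

```java
// Two admissible heuristics for the 3x3 puzzle with goal 1 2 3 / 4 5 6 / 7 8 _.
class Heuristics {
    static final int EMPTY = -1;

    // Number of tiles not on their goal position (the empty spot not counted).
    static int misplacedTiles(int[][] board) {
        int count = 0;
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++) {
                int tile = board[r][c];
                if (tile != EMPTY && tile != r * 3 + c + 1) count++;
            }
        return count;
    }

    // Sum over all tiles of the horizontal plus vertical distance
    // to the tile's goal position (Manhattan distance).
    static int manhattanDistance(int[][] board) {
        int sum = 0;
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++) {
                int tile = board[r][c];
                if (tile == EMPTY) continue;
                int goalR = (tile - 1) / 3, goalC = (tile - 1) % 3;
                sum += Math.abs(r - goalR) + Math.abs(c - goalC);
            }
        return sum;
    }
}
```

For the initial configuration of the example, seven of the eight tiles are misplaced, so the first heuristic estimates 7 while the actual solution is longer; the Manhattan distance gives a tighter, still admissible estimate.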