1. Let x be the start node. The list OPEN contains the frontier of the search tree; at each step the "best" node in the frontier is selected and the tree is expanded at that point.
2. If x is the goal, return with success; otherwise expand it and put its successors on the list OPEN.
3. If OPEN is empty, then stop with failure; otherwise, let x be the "best" member of OPEN. Pop x. Go to 2.
It turns out that the cost function f(n) of a node n should estimate the cost of the cheapest path from the start to the goal that passes through n:
f(n) = g(n) + h(n)
where g(n) is the cost of the path from the start to n and h(n) is the estimated cost from n to the goal. If h(n) overestimates the remaining cost, then when the goal node is reached there may still be a better path left undiscovered. With underestimation (h(n) never exceeding the true cost), this difficulty does not arise: the first path found to the goal is the cheapest one.
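The three steps above, with f(n) = g(n) + h(n) as the measure of "best", amount to the A* algorithm. A minimal sketch in Python; the graph and the heuristic table here are invented purely for illustration:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Best-first search with f(n) = g(n) + h(n).

    neighbors(n) yields (successor, edge_cost) pairs;
    h(n) is the estimated cost from n to the goal.
    """
    open_list = [(h(start), 0, start, [start])]     # OPEN holds (f, g, node, path)
    best_g = {start: 0}
    while open_list:                                # step 3: OPEN empty -> failure
        f, g, x, path = heapq.heappop(open_list)    # pop the "best" member of OPEN
        if x == goal:                               # step 2: goal test
            return path, g
        for succ, cost in neighbors(x):             # expand x, put successors on OPEN
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Hypothetical weighted graph and an underestimating heuristic for it.
graph = {"S": [("A", 1), ("B", 4)],
         "A": [("B", 2), ("G", 6)],
         "B": [("G", 2)],
         "G": []}
estimates = {"S": 4, "A": 4, "B": 2, "G": 0}
path, cost = a_star("S", "G", lambda n: graph[n], lambda n: estimates[n])
# path is ["S", "A", "B", "G"] with cost 5
```

Note that a cheaper route to B (through A) is discovered after B is first placed on OPEN; the `best_g` check lets the better copy supersede the stale one.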
The best one can do is make a local decision based on limited look-ahead. The look-ahead strategy expands the tree a fixed number of plies (a ply is one layer of the tree, corresponding to a move by one of the two players) and then scores the board positions at the leaf nodes. To do so, an evaluation function must be chosen that scores different features of the board. The scores reflect "your" viewpoint of each leaf node. The minimax procedure propagates the evaluations at the tree extremities backward to choose your move.
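The backward propagation described above can be sketched as a short recursion. This is a generic skeleton, not a particular game: `children` and `evaluate` stand in for the move generator and the board-scoring function, and the tiny tree below is made up for illustration.

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Depth-limited minimax: expand `depth` levels, score the
    leaf positions with `evaluate`, and back the values up."""
    kids = children(node)
    if depth == 0 or not kids:          # frontier reached: apply the evaluation function
        return evaluate(node)
    values = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(values) if maximizing else min(values)

# Hypothetical two-level game tree; leaves carry static scores.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
scores = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
best = minimax("root", 2, True,
               lambda n: tree.get(n, []),
               lambda n: scores[n])
# MIN backs up min(3,5)=3 and min(2,9)=2; MAX picks max(3,2)=3
```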
As it turns out, minimax can be improved by using alpha and beta cutoffs. The idea is that since the value of a maximizing node can only go up and that of a minimizing node can only go down, some search subtrees can be ignored entirely.
Example:
                      o                      MAX
       _______________|_______________
      /          |          |         \
      o          o          o         o      MIN
     / \       / | \       / \       / \
    o   o     o  o  o     o   o     o   o
    3   2    -2 -4  6     7   1    -1  -5    MAX
In this example, using alpha-beta cutoffs, the leaves -4, 6, and -5 will never be looked at: once the leftmost MIN node is known to be worth 2 (so MAX is guaranteed at least 2), a MIN node can be abandoned as soon as one of its leaves scores 2 or less. Seeing -2 rules out the second MIN node (skipping -4 and 6), and seeing -1 rules out the fourth (skipping -5).
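A sketch of alpha-beta pruning run on the example tree above; the leaf scores double as the evaluation function, and a `seen` list records which leaves are actually evaluated, confirming that -4, 6, and -5 are skipped:

```python
def alphabeta(node, maximizing, children, evaluate,
              alpha=float("-inf"), beta=float("inf")):
    kids = children(node)
    if not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for k in kids:
            value = max(value, alphabeta(k, False, children, evaluate, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # beta cutoff: MIN above will never allow this
                break
        return value
    else:
        value = float("inf")
        for k in kids:
            value = min(value, alphabeta(k, True, children, evaluate, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:        # alpha cutoff: MAX above already has better
                break
        return value

# The tree from the example: a MAX root, four MIN nodes, nine leaves.
tree = {"MAX": ["m1", "m2", "m3", "m4"],
        "m1": [3, 2], "m2": [-2, -4, 6], "m3": [7, 1], "m4": [-1, -5]}
seen = []
def leaf(n):
    seen.append(n)       # record every leaf that gets evaluated
    return n

value = alphabeta("MAX", True, lambda n: tree.get(n, []), leaf)
# value == 2, and seen == [3, 2, -2, 7, 1, -1]: the cutoffs never reach -4, 6, or -5
```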