Idea of search algorithms
assume goal-based agent
a single action will not satisfy the goal, so which actions should be taken?
idea: search through sequence of actions to satisfy goal
Search problem
initial state
actions
transition model (result of doing action in state)
goal test
Measures
size of state space
memory to store a state
Node vs state
State
state is abstract representation of environment
part of state space
Node
data structure during search
a node also refers to a state
multiple nodes might refer to the same state (e.g. due to a loop)
Node expansion
given node n
find all legal actions in STATE(n)
generating a child of n for each accessible state (i.e. node generation)
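A minimal sketch of node expansion in Python, assuming a problem object with hypothetical actions(state), result(state, action) and step_cost(state, action, next_state) methods (placeholder names following the transition-model terminology above):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node refers to
    parent: Optional["Node"] = None  # predecessor node in the search tree
    action: Any = None               # action that led from the parent to this node
    path_cost: float = 0.0           # cost of the path from the root to this node

def expand(problem, node):
    """Generate one child node per legal action in STATE(node)."""
    for action in problem.actions(node.state):            # all legal actions in STATE(n)
        child_state = problem.result(node.state, action)  # transition model
        yield Node(state=child_state,
                   parent=node,
                   action=action,
                   path_cost=node.path_cost
                             + problem.step_cost(node.state, action, child_state))
```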
Frontier
contains all nodes that have been discovered by the algorithm but not yet expanded
queues
LIFO -> Depth-First Search
FIFO -> Breadth-First Search
priority queue -> UCS, A*
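A minimal sketch of how the queue discipline determines the search strategy, reusing the Node/expand sketch above and assuming the problem also offers initial_state and is_goal(state) (again placeholder names). Note that this version does not detect revisited states:

```python
from collections import deque

def tree_search(problem, strategy="bfs"):
    """Generic tree search: only the frontier's queue discipline changes.
    FIFO -> Breadth-First Search, LIFO -> Depth-First Search
    (a priority queue would give UCS/A*, sketched further below)."""
    frontier = deque([Node(state=problem.initial_state)])
    while frontier:
        # FIFO: take the oldest node; LIFO: take the most recently added node
        node = frontier.popleft() if strategy == "bfs" else frontier.pop()
        if problem.is_goal(node.state):
            return node                          # solution path via parent pointers
        frontier.extend(expand(problem, node))   # node expansion fills the frontier
    return None
```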
Search Algorithm #1
When do we revisit states?
Multiple nodes refer to the same state
overhead because the same state is examined multiple times
results from cyclic state space
search tree might be infinite even if state space is finite
Complete search algorithm
finds solution whenever one exists
Optimal search algorithm
returns minimum cost path whenever solution exists
Blind vs heuristic search
blind (uninformed):
only insert states into frontier
do not exploit state information for node ordering
BFS, DFS, Iterative Deepening, Uniform-Cost
heuristic (informed):
place the most promising nodes at the front of the frontier
exploit state information
Measures of search algorithm
branching factor (maximum number of successors of any state)
depth (depth of the shallowest goal node)
space complexity (e.g. how many states are stored during a run)
time complexity (e.g. how many node expansions are required)
Breadth-First Search
FIFO queue
complete
optimal if all step costs are equal (e.g. 1)
expanded nodes: b^d
Depth-First Search
LIFO queue
complete for finite search tree
not optimal
expanded nodes: b^m (worst case)
m = maximum depth of the search tree
worst case: the solution sits at the bottom right of the search tree
space complexity: b*m
number of nodes in the LIFO queue when the algorithm reaches depth m
Iterative Deepening Search
Idea
combine the benefits of DFS (space) and BFS (completeness/optimality)
run DFS with a depth limit k
iterate over increasing values of k
Evaluation
space complexity: b*d
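A minimal sketch of IDS under the same assumptions as the sketches above; the max_depth cap is only there to keep the loop finite:

```python
def depth_limited_dfs(problem, node, limit):
    """DFS that does not expand nodes below the given depth limit."""
    if problem.is_goal(node.state):
        return node
    if limit == 0:
        return None
    for child in expand(problem, node):
        found = depth_limited_dfs(problem, child, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening_search(problem, max_depth=50):
    """Re-run depth-limited DFS with increasing limits k = 0, 1, 2, ..."""
    for k in range(max_depth + 1):
        found = depth_limited_dfs(problem, Node(state=problem.initial_state), k)
        if found is not None:
            return found
    return None
```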
BFS vs DFS vs IDS
BFS is complete/optimal but high space complexity
DFS is not complete/optimal but good space complexity
IDS is complete/optimal and has good space complexity
number of node expansions is similar for all three
Uniform-Cost Search
Setting: arcs have costs
Goal: Find path with minimum arc-costs
Consequence:
Do not return the first path that is found; there might be a cheaper one!
Adjust algorithm: Test node for goal during expansion, not during generation
Search Algorithm #2
goal test during expansion, not during generation of a node
a goal state might be enqueued multiple times, but the cheapest one is expanded first -> finds the optimal solution
if combining this with detection of revisited states: only keep the node with the lowest path cost
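A minimal sketch of UCS with goal test at expansion and revisited-state handling, under the same placeholder problem interface as above; it additionally assumes hashable states and non-negative arc costs:

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    """UCS: priority queue ordered by path cost g; goal test at expansion."""
    tie = count()                                   # tie-breaker for equal costs
    frontier = [(0.0, next(tie), Node(state=problem.initial_state))]
    best_g = {problem.initial_state: 0.0}           # cheapest known cost per state
    while frontier:
        g, _, node = heapq.heappop(frontier)
        if problem.is_goal(node.state):             # test during expansion, not generation
            return node                             # first expanded goal is the cheapest
        for child in expand(problem, node):
            # revisited-state handling: keep only the cheapest node per state
            if child.state not in best_g or child.path_cost < best_g[child.state]:
                best_g[child.state] = child.path_cost
                heapq.heappush(frontier, (child.path_cost, next(tie), child))
    return None
```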
Uniform-Cost Search and BFS
BFS = UCS with arc costs of 1
BFS is variant of UCS
Heuristic search
so far states are simply enqueued without rating them
can do better: exploit information in states to determine which states to expand next
Best-First Search
expand promising nodes earlier
value of node is computed using evaluation function f
use a priority queue
Evaluation function f(N)
Common choices for f(N):
the cost of the solution path through node N:
f(N) = g(N) + h(N)
g(N) is the cost from the start to N
h(N) is the estimated cost from N to the goal
“A*”
the costs from node N to goal:
f(N) = h(N)
“Greedy Best-First Search”
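A minimal sketch of the two evaluation functions, assuming h is passed in as a callable heuristic and nodes carry their path cost g as in the Node sketch above:

```python
def f_greedy(node, h):
    """Greedy Best-First Search: rank nodes by estimated remaining cost only."""
    return h(node.state)                    # f(N) = h(N)

def f_astar(node, h):
    """A*: rank nodes by cost so far plus estimated remaining cost."""
    return node.path_cost + h(node.state)   # f(N) = g(N) + h(N)
```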
Heuristic function h(N)
independent of current search tree
estimates the costs from N to closest goal node
Admissible heuristic
let h*(N) be costs of optimal path from node N to goal
h(N) is admissible if for all N:
0 <= h(N) <= h*(N)
under-estimating -> optimistic
How to come up with an admissible heuristic?
consider a relaxed version of the problem
e.g.: ignore obstacles in robot navigation
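A minimal sketch of such a relaxed-problem heuristic for grid-based robot navigation, assuming states and goals are (x, y) cells and each move costs 1:

```python
def manhattan_heuristic(state, goal):
    """Heuristic from a relaxed problem: ignore all obstacles and assume the robot
    moves one grid cell per step, so the estimate never exceeds the true cost."""
    (x, y), (gx, gy) = state, goal
    return abs(x - gx) + abs(y - gy)
```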
A*
search algorithm #2
f(n) = g(n) + h(n)
h must be admissible!
A* evaluation
optimal and complete if revisited states are not discarded!
A* and revisited states
if we discard revisited states, this example would not return an optimal solution
optimal path: 102
returned path: 104
not discarding revisited states -> optimal solution
Problem when not discarding revisited states
risk of exponential search trees
Consistent heuristics
family of heuristics that allows handling revisited states efficiently
heuristic h is consistent (or monotone, non-decreasing), if
For each node N and child N’:
h(N) <= c(N,N’) + h(N’)
a consistent heuristic never decreases from one node to a successor node by more than the cost of the action taken
Correlation admissible/consistent heuristics
consistent -> admissible
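A short sketch of why consistency implies admissibility, assuming h(G) = 0 for every goal node G:

```latex
% Induction over the number of steps k on an optimal path from N to the nearest goal.
%
% Base case (k = 0): N is a goal node, so h(N) = 0 = h^*(N).
%
% Inductive step: let N' be the successor of N on an optimal path, so
%   h^*(N) = c(N, N') + h^*(N').
% Consistency and the induction hypothesis h(N') \le h^*(N') give
%   h(N) \le c(N, N') + h(N') \le c(N, N') + h^*(N') = h^*(N).
```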
A* and consistent heuristics
if h is consistent, then whenever A* expands a node N, it has already found an optimal path to N
thus, we can discard revisited states
How to implement A* with consistent heuristics?
Discard revisited states while keeping an optimal solution:
store states of expanded nodes in CLOSED
during node generation, check if state is in CLOSED
if so, do not enqueue new node
if not, enqueue it (make sure that no other node in the queue refers to the same state; if one does, keep only the node with the smallest f)
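A minimal sketch of A* with a CLOSED set, under the same placeholder problem interface as the earlier sketches and assuming h is consistent and states are hashable:

```python
import heapq
from itertools import count

def astar(problem, h):
    """A* with a consistent heuristic h: revisited states can be discarded safely."""
    tie = count()
    root = Node(state=problem.initial_state)
    frontier = [(h(root.state), next(tie), root)]   # ordered by f = g + h
    best_f = {root.state: h(root.state)}            # smallest f known per queued state
    closed = set()                                  # CLOSED: states of expanded nodes
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node.state in closed:
            continue                                # stale duplicate entry, skip it
        if problem.is_goal(node.state):             # goal test at expansion
            return node
        closed.add(node.state)
        for child in expand(problem, node):
            if child.state in closed:
                continue                            # already expanded via an optimal path
            f = child.path_cost + h(child.state)
            # keep only the node with the smallest f per state in the queue
            if child.state not in best_f or f < best_f[child.state]:
                best_f[child.state] = f
                heapq.heappush(frontier, (f, next(tie), child))
    return None
```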
When to use admissible/consistent heuristics
admissible: If A* does not discard revisited states
consistent: If A* discards revisited states
otherwise: A* is not guaranteed to be complete and optimal
A* and dumb heuristics
e.g. h(N) = 0 is consistent
this is then Uniform-Cost-Search
BFS and UCS are particular cases of A*
Heuristic accuracy
let h1 and h2 be two consistent heuristics
h2 is said to dominate h1 (to be more accurate, more informed) if:
h1(N) <= h2(N) ∀ N
h2 leads to a smaller effective branching factor -> fewer node expansions -> more efficient A*
When to use search techniques
small search space, and
no other techniques available, or
developing a more efficient technique is not worth the effort
large search space, and
no other technique available, and
good heuristics are available
Overview
Consequences of detecting revisited states
in general: the size of the search tree cannot exceed the size of the state space -> reduces node expansions if the state space contains cycles (consider the cannibals problem -> massive reduction of nodes)
DFS: becomes complete
BFS: reduced time/memory if cycles in state space
UCS: same as BFS
A*: same as BFS; additionally, if revisited states are discarded, a consistent heuristic is required to stay complete/optimal