AI Search Problems Overview

What is an agent?


agent - an entity that perceives and acts

planning agent - an agent that makes decisions based on the (hypothesized) consequences of actions
  • Planning agents simulate situations and then perform/execute the best action according to the simulations done in the "planning" stage
    • Planning agents consider how the world would be
  • Planning agents must have a model of how the world evolves in response to actions
  • Planning agents must formulate a goal (test)
  • Planning agents are complete (i.e. they never get stuck and always find a solution if one exists), since they plan ahead
  • The agent does not actually carry out all of the candidate plans in the real world
  • The planning is all done "in simulation"; only the chosen plan is executed (see the sketch below)
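
A minimal sketch of this plan-then-act loop in Python (the parameter names and the model/planner/act callables are hypothetical stand-ins for a world model, a search algorithm, and an actuator; they are not part of any particular library):

  def planning_agent(start_state, model, is_goal, planner, act):
      """Plan entirely in simulation, then execute the resulting actions in the real world."""
      # The "planning" stage: the planner explores the model (a simulation), not the real world
      actions = planner(start_state, model, is_goal)
      # The "acting" stage: only the chosen plan is actually carried out
      for action in actions:
          act(action)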

search agent - a planning agent that solves search problems


Search problems


search problem - a type of planning problem with the four components listed below; the environment in which a rational planning agent exists
  • Search operates over models of the real world

Components of a search problem

  1. state space - the set of all states that are possible in the given world (which can be described at the level of world states or search states)
    • states - abstracted representations/configurations of the world
      • A single difference in detail gives a completely different state
      • Types of states
        • world states - states that include every last detail of the environment
        • search states - states that keep only the details needed for planning (abstracted states)
      • The features/components of a state (which define the minimal state space) must together uniquely determine the state while still carrying enough information to solve the search problem
      • The set of world states is very large, because world states include every last detail of the environment
      • The set of search states is smaller (though often still very large), because search states keep only the details needed for planning
  2. transition/successor function, T(s,a,s') - a function that, given a state, returns the adjacent/next states that can be reached from it, along with the cost of each transition (see the sketch after this list)
    • A successor function takes in a state "s" and an action "a" and returns the successor state "s'" together with the cost of performing action "a"
    • successor state, s' - the state the world would be in if the agent performed action "a" from state "s"
  3. Actions and their costs
  4. Start and goal states
    • start state - the state in which an agent exists initially
    • goal test - a function that takes a state as input and determines whether it is a goal state (outputs yes or no)
    • goal state - a state that passes the goal test
      • There can be multiple goal states
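
As a concrete illustration, here is a minimal Python sketch of these four components for a toy route-finding problem; the class name, method names, road map, costs, and goal are all made up for this example:

  class RouteSearchProblem:
      """Toy search problem: travel from city 'A' to city 'D' on a small road map."""

      def __init__(self):
          # Search states are just city names; a full world state would also include
          # details that are irrelevant for planning (weather, fuel level, time of day, ...)
          self.start_state = 'A'
          # road_map encodes the successor function:
          # state -> list of (action, successor state, step cost)
          self.road_map = {
              'A': [('A->B', 'B', 1), ('A->C', 'C', 4)],
              'B': [('B->C', 'C', 2), ('B->D', 'D', 5)],
              'C': [('C->D', 'D', 1)],
              'D': [],
          }

      def get_start_state(self):
          return self.start_state

      def is_goal_state(self, state):
          # The goal test; more than one state could pass it
          return state == 'D'

      def get_successors(self, state):
          # The successor/transition function, including the cost of each action
          return self.road_map[state]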

Solution to a search problem

plan - the solution to a search problem; a sequence of actions which transforms the start state to a goal state
  • How to make a plan
    1. Consider the start state
    2. Explore the state space using the successor function, iteratively computing successors of states until a goal state is reached, at which point the path from the start state to the goal state has been determined (as in the sketch below)
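
A minimal sketch of this explore-until-goal loop, assuming the hypothetical RouteSearchProblem interface sketched above (here the fringe is a FIFO queue, i.e. breadth-first search):

  from collections import deque

  def breadth_first_plan(problem):
      """Expand states from a FIFO fringe until a goal state is found,
      then return the sequence of actions (the plan) that reached it."""
      fringe = deque([(problem.get_start_state(), [])])   # (state, actions taken so far)
      visited = set()
      while fringe:
          state, actions = fringe.popleft()               # pop the next state to expand
          if problem.is_goal_state(state):
              return actions                              # a plan from the start state to a goal state
          if state in visited:
              continue
          visited.add(state)
          for action, successor, _cost in problem.get_successors(state):
              fringe.append((successor, actions + [action]))
      return None                                         # no plan exists

  # e.g. breadth_first_plan(RouteSearchProblem()) returns ['A->B', 'B->D']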

strategy - the order in which states are considered (i.e. the order in which states are expanded/popped off the fringe when searching the search tree or state space graph)
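
For example, keeping the exploration loop above but changing which state the fringe pops first changes the strategy: a FIFO queue gives breadth-first search, a LIFO stack gives depth-first search, and a priority queue keyed on path cost gives uniform cost search. A sketch of the latter, again assuming the hypothetical RouteSearchProblem above:

  import heapq

  def uniform_cost_plan(problem):
      """Same exploration loop, but the strategy is 'cheapest path cost first':
      the fringe is a priority queue keyed on the cost accumulated so far."""
      fringe = [(0, problem.get_start_state(), [])]       # (path cost, state, actions so far)
      visited = set()
      while fringe:
          path_cost, state, actions = heapq.heappop(fringe)
          if problem.is_goal_state(state):
              return actions
          if state in visited:
              continue
          visited.add(state)
          for action, successor, step_cost in problem.get_successors(state):
              heapq.heappush(fringe, (path_cost + step_cost, successor, actions + [action]))
      return None

  # e.g. uniform_cost_plan(RouteSearchProblem()) returns ['A->B', 'B->C', 'C->D']
  # (total cost 4, versus total cost 6 for the breadth-first plan above)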


Types of search problems (summary chart)

