Artificial Intelligence -- Agents
Chapter 1 : Introduction

             humanly              rationally
Acting       Turing Test          rational agent
Thinking     cognitive modeling   laws of thought

Chapter 2 : Intelligent Agent

Rational agent
1. Performance measure that defines degree of success
2. Percept Sequence
3. What the agent knows about the environment
4. The actions that the agent can perform

Ideal rational agent :
   An ideal rational agent should do whatever action is expected to  maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

                  mapping
percept sequence <------> action

ideal mapping :
   Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent.
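The percept-sequence-to-action mapping above can be sketched as a table-driven agent: the agent looks up its entire percept sequence in a table that encodes the ideal mapping. The two-cell vacuum world below (locations "A"/"B", actions "Suck"/"Right"/"Left") is a hypothetical example, not from the notes.

```python
# Table encoding (part of) an ideal mapping: percept sequence -> action.
# Keys are tuples of percepts; each percept is (location, status).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}

percepts = []  # the percept sequence observed so far

def table_driven_agent(percept):
    """Append the new percept, then look up the whole sequence."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

action = table_driven_agent(("A", "Dirty"))  # first percept
```

The table grows exponentially with the length of the percept sequence, which is why the later agent designs replace the explicit table with a program.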

Autonomy
   A system is autonomous to the extent that its behavior is determined by its own experience, rather than dictated by knowledge built in by its designer.

Agent architecture (easy to memorize -->  PAGE)
   Percept     :
   Action      :
   Goal        :
   Environment :

Four types of agents :
Simple reflex agents
  condition-action rule
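A condition-action rule maps the current percept alone to an action. A minimal sketch, again using a hypothetical two-cell vacuum world:

```python
# Simple reflex agent: acts only on the current percept, via
# condition-action rules. Percepts are (location, status) pairs.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":   # rule: current cell dirty -> suck
        return "Suck"
    if location == "A":     # rule: clean at A -> move right
        return "Right"
    return "Left"           # rule: clean at B -> move left
```

Because it ignores the percept history, such an agent can only work when the correct action is fully determined by the current percept.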

Agents that keep track of the world
  Needs two kinds of information :
    How the world evolves independently of the agent
    How the agent's own actions affect the world
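The two kinds of information above can be sketched as an internal model that is updated both by percepts and by the known effect of the agent's own actions. The vacuum-world details are illustrative, not from the notes:

```python
class ModelBasedVacuumAgent:
    """Keeps an internal model of both cells, updated from percepts
    and from the known effect of its own actions."""

    def __init__(self):
        # Believed status of each cell; "Unknown" until observed.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def program(self, percept):
        location, status = percept
        self.model[location] = status          # percept updates the model
        if status == "Dirty":
            self.model[location] = "Clean"     # known effect of our own action
            return "Suck"
        if self.model["A"] == "Clean" and self.model["B"] == "Clean":
            return "NoOp"                      # whole world believed clean
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can stop acting once its model says every cell is clean, even though no single percept shows the whole world.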

Goal-based agent
  Goal information
  Search or Planning

Utility-based agents

Properties of environment
accessible vs. inaccessible (chess vs. poker)
Accessible :
  agent's sensor apparatus gives it access to the complete state of the environment.

deterministic vs. nondeterministic (chess vs. poker)
Deterministic :
  the next state of the environment is completely determined by the current state and the actions selected by the agents.

episodic vs. nonepisodic (Image analysis system vs. chess)
Episodic :
  the agent's experience can be divided into "episodes".

static vs. dynamic (poker vs. taxi driving)
Dynamic :
  the environment can change while an agent is deliberating.

discrete vs. continuous (chess vs. taxi driving)
Discrete :
   there is a limited number of distinct, clearly defined percepts and actions.
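The five dimensions above can be recorded as a small record type per task; the classifications below follow the chess / taxi-driving examples in the notes, and the class itself is just an illustrative sketch:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    accessible: bool     # sensors give the complete state?
    deterministic: bool  # next state fixed by current state + action?
    episodic: bool       # experience splits into independent episodes?
    static: bool         # world frozen while the agent deliberates?
    discrete: bool       # limited number of distinct percepts/actions?

chess        = EnvironmentProperties(True, True, False, True, True)
taxi_driving = EnvironmentProperties(False, False, False, False, False)
```

Taxi driving sits at the hard end of every dimension, which is why it is the running "hardest case" example.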

Chapter 3 Solving Problems by Searching

1. How an agent can act by establishing goals and considering sequences of actions that might achieve those goals. A goal and a set of means for achieving the goal is called a problem, and the process of exploring what the means can do is called search.

Problem Formulation
State space : the set of all states reachable from the initial state by any sequence of actions.
(path : any sequence of actions leading from one state to another in the state space.)

Single state problem :
whole world state is accessible.
Single state problem formulation
  (datatype PROBLEM components : 
  INITIAL-STATE, OPERATORS, GOAL-TEST, PATH-COST-FUNCTION )
  Component
    initial state (where are we now ?)
    operator (or successor function) set 
       (what's the possible next state ?) :
    succ(stateNow) = { state | state that reachable from stateNow }
    goal test     : (is this state a goal ?)
       state = goal_state ?
path cost (g(p)) :
   in most cases, path cost = Σ cost(v_i, v_i+1), summed over consecutive states along the path.
Output : solution, a path from the initial state to a state that satisfies the goal test.
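The components above (initial state, successor function, goal test, path cost) are enough to drive a generic search. A minimal breadth-first sketch, run on a hypothetical state space with unit step costs:

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """Return a path (list of states) from initial to a goal, or None."""
    frontier = deque([[initial]])   # paths waiting to be extended
    explored = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):          # goal test
            return path             # solution: a path satisfying the goal test
        for nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical state space: succ(state) = states reachable from state.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
solution = breadth_first_search("S", lambda s: graph[s], lambda s: s == "G")
path_cost = len(solution) - 1   # unit costs: g(p) = sum of step costs
```

Any of the search strategies in this chapter plug into the same problem components; only the frontier ordering changes.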

Multiple state problem : the world state is not fully accessible.
  initial state --> initial state set
  state space   --> state set space

Measure problem solving performance
  total cost = search cost (time) + path cost (distance)
  What is the exchange rate between time and distance ?

Abstraction : removing detail from a representation.
example : in route finding, we don't consider the weather, law enforcement, traffic jams...
Abstraction Principle : 
  removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out.
Step 1 : Formulate the problem (goal and problem formulation, as above).
Step 2 : Search.
Step 3 : Return the solution as an action sequence.
Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License