What is an AI system?
Environment:
The environment (real or artificial) that the agents act in (e.g. traffic, a chess board)
What characteristics of environments exist?
Discrete vs. Continuous
If the environment has a limited number of distinct, clearly defined states then it is called discrete; otherwise it is continuous.
Observable vs. Partially Observable or Unobservable
If it is possible to determine the complete state of the environment at each time point, then it is called observable; otherwise it is only partially observable or unobservable.
Static vs. Dynamic
If the environment does not change while the agent acts, it is static; otherwise it is dynamic.
Single Agent vs. Multiple Agents
The environment may contain other agents, which may be of the same or a different kind than the agent itself.
Accessible vs. Inaccessible
If an agent can obtain complete and accurate information about the environment's state, then such an environment is called accessible; otherwise it is called inaccessible.
Deterministic vs. Non-deterministic/Stochastic
If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic.
Episodic vs. Non-episodic/Sequential
In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends only on the episode itself. In sequential environments the agent requires memory of past actions.
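A minimal sketch of how these characteristics could be recorded for concrete environments; the class and the example classifications below are my own illustration, not part of the lecture:

```python
from dataclasses import dataclass

# Illustrative helper for classifying environments along the
# characteristics listed above.
@dataclass
class EnvironmentProfile:
    discrete: bool       # discrete vs. continuous
    observable: bool     # fully vs. partially observable / unobservable
    static: bool         # static vs. dynamic
    single_agent: bool   # single vs. multiple agents
    deterministic: bool  # deterministic vs. stochastic
    episodic: bool       # episodic vs. sequential

# Chess: discrete, fully observable, static between moves, two agents,
# deterministic, sequential (the game depends on past moves).
chess = EnvironmentProfile(discrete=True, observable=True, static=True,
                           single_agent=False, deterministic=True,
                           episodic=False)

# Driving in traffic: continuous, partially observable, dynamic,
# many agents, stochastic, sequential.
traffic = EnvironmentProfile(discrete=False, observable=False, static=False,
                             single_agent=False, deterministic=False,
                             episodic=False)

print(chess)
print(traffic)
```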
Why are these environment characteristics important?
They distinguish easy from more difficult environments.
What are the three steps of an agent? What are the rules of an AI agent?
An agent repeatedly 1. perceives the environment through its sensors, 2. decides on an action, and 3. acts on the environment through its actuators.
Rules of an AI agent:
It must be able to perceive the environment; the observations must be used to make decisions; the decisions should result in actions; and the actions taken should be rational.
What are problems with rational agents?
Rationality means that the agent's performance is maximized
performance is measured by functions that evaluate sequences of actions
maximizing this measure is not always what is actually done (e.g. playing the lottery has a negative expected value, see the sketch below)
sometimes the best alternative is not chosen
use heuristics instead
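A tiny illustration of the lottery point; the performance measure is expected monetary value, and the numbers are made up:

```python
# Made-up lottery: ticket costs 2, jackpot of 1,000,000 with a
# 1-in-14-million chance -> negative expected value, yet people play,
# i.e. they do not maximize this performance measure.
ticket_price = 2.0
jackpot = 1_000_000.0
p_win = 1 / 14_000_000

expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: {expected_value:.2f}")  # about -1.93
```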
What types of agents exist?
Reflex agent
Selects an action on the basis of the current percept only, ignoring the percept history.
limited in decision-making, no knowledge about non-perceivable parts of the world, hard to use in complex environments
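A minimal reflex-agent sketch using the classic two-square vacuum world as an illustrative example; the action depends only on the current percept, with no memory:

```python
# Simple reflex agent for a two-square vacuum world (illustrative example).
# The percept is (location, dirty); the agent keeps no history.
def reflex_vacuum_agent(percept):
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", True)))   # Suck
print(reflex_vacuum_agent(("A", False)))  # Right
```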
Model-based agent
like a reflex agent, but with a better view of the environment: it tracks the status of the environment in an internal model
limitation: it is not always clear how the agent's own actions affect the world, so the world model may remain incomplete
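A sketch of the model-based idea, assuming a simple internal model that is updated from percepts and from the predicted effect of the agent's own actions (again a vacuum-world illustration):

```python
# Model-based agent sketch: keeps an internal model of the world that is
# updated from the current percept and the predicted effect of actions.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # believed dirt status per square

    def act(self, percept):
        location, dirty = percept
        self.model[location] = dirty          # update model from percept
        if dirty:
            self.model[location] = False      # predict effect of own action
            return "Suck"
        other = "B" if location == "A" else "A"
        # move towards a square whose status is unknown or believed dirty
        if self.model[other] in (None, True):
            return "Right" if other == "B" else "Left"
        return "NoOp"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", True)))   # Suck
print(agent.act(("A", False)))  # Right
```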
Goal-based agent
builds on the model-based information, but additionally knows the desired world state (the goal)
takes the goal into account when making decisions
finding long sequences of actions that reach the goal is difficult
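A goal-based sketch, assuming the agent searches (here breadth-first) for a sequence of actions that reaches a known goal state; the toy state graph is made up:

```python
from collections import deque

# Goal-based agent sketch: search for an action sequence reaching the goal.
transitions = {
    "start": {"go_left": "hall", "go_right": "kitchen"},
    "hall": {"go_up": "goal_room"},
    "kitchen": {},
    "goal_room": {},
}

def plan(start, goal):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in transitions[state].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no action sequence reaches the goal

print(plan("start", "goal_room"))  # ['go_left', 'go_up']
```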
Utility-based agent
Instead of being given fixed goals, utility-based agents use a utility function that provides a way to rate each action/scenario based on the desired result
helps with conflicting goals
certain goals can be reached in different ways
the utility function maps a state or a sequence of states onto a real number
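A utility-based sketch, assuming a utility function that maps outcome states to real numbers and an agent that picks the action with the best-rated predicted outcome; all states, actions and values are made up:

```python
# Utility-based agent sketch: rate predicted outcomes with a real-valued
# utility function and pick the action whose outcome scores highest.
def utility(state):
    return {"slow_but_safe": 0.7, "fast_but_risky": 0.4, "stay_home": 0.1}[state]

def predicted_outcome(action):
    return {"take_highway": "fast_but_risky",
            "take_side_road": "slow_but_safe",
            "wait": "stay_home"}[action]

def choose_action(actions):
    return max(actions, key=lambda a: utility(predicted_outcome(a)))

print(choose_action(["take_highway", "take_side_road", "wait"]))
# -> take_side_road (highest-utility outcome)
```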
Learning Agent
These agents employ an additional learning element to gradually improve and become more knowledgeable about the environment over time.
Components:
1. Learning Element: responsible for making improvements by learning from the environment
2. Critic: gives feedback describing how well the agent is doing with respect to a fixed measurement
3. Performance Element: responsible for selecting actions
4. Problem Generator: responsible for suggesting actions that will lead to new experiences
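A very small skeleton wiring the four components together; the concrete behaviour of each class is a made-up placeholder, not the lecture's implementation:

```python
# Learning-agent skeleton with the four components named above.
# All concrete behaviour is an illustrative placeholder.
class Critic:
    def feedback(self, performance):
        return 1.0 if performance > 0 else -1.0     # fixed measurement

class LearningElement:
    def __init__(self):
        self.bias = 0.0
    def improve(self, feedback):
        self.bias += 0.1 * feedback                 # learn from feedback

class PerformanceElement:
    def select_action(self, percept, bias):
        return "act_boldly" if percept + bias > 0 else "act_cautiously"

class ProblemGenerator:
    def suggest(self):
        return "try_something_new"                  # exploratory action

class LearningAgent:
    def __init__(self):
        self.critic = Critic()
        self.learner = LearningElement()
        self.performer = PerformanceElement()
        self.generator = ProblemGenerator()

    def step(self, percept, performance, explore=False):
        self.learner.improve(self.critic.feedback(performance))
        if explore:
            return self.generator.suggest()         # gather new experiences
        return self.performer.select_action(percept, self.learner.bias)

agent = LearningAgent()
print(agent.step(percept=0.2, performance=1.0))     # act_boldly
```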
What are approaches to make Agents intelligent?
Search algorithms
Understand/Define "finding a good action" as a search problem and use search algorithms
most are tree-based
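A minimal tree-search sketch (depth-first, made-up states) showing how "finding a good action" becomes a search over successor states:

```python
# Depth-first search over a small explicit tree of states (illustrative).
tree = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["goal"],
    "a1": [], "a2": [], "goal": [],
}

def depth_first_search(node, goal, path=()):
    path = path + (node,)
    if node == goal:
        return list(path)
    for child in tree[node]:
        result = depth_first_search(child, goal, path)
        if result is not None:
            return result
    return None

print(depth_first_search("root", "goal"))  # ['root', 'b', 'goal']
```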
Reinforcement Learning
Has its roots in the field of psychology
Trial and Error
Reactions/Actions are based on our observation and experience
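A tiny trial-and-error sketch in the spirit of Q-learning, reduced to a two-armed bandit; learning rate, exploration rate and payoffs are made up:

```python
import random

# Trial and error: act, observe a reward, adjust the value estimates.
q = {"left": 0.0, "right": 0.0}           # value estimates per action
alpha = 0.1                               # learning rate
payoff = {"left": 0.2, "right": 1.0}      # hidden, made-up rewards

random.seed(0)
for _ in range(200):
    # epsilon-greedy: mostly exploit the best estimate, sometimes explore
    if random.random() < 0.1:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    q[action] += alpha * (payoff[action] - q[action])

print(q)  # 'right' should end up with the higher estimate (it pays more)
```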
Genetic Algorithms
“Survival of the fittest” -> not exact optimization but rather robustness
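A compact genetic-algorithm sketch with a made-up fitness function (count of 1-bits), showing selection of the fittest, crossover and mutation:

```python
import random

# Tiny genetic algorithm: evolve bit strings towards all ones (illustrative).
random.seed(1)
GENES, POP, GENERATIONS = 10, 20, 30

def fitness(individual):
    return sum(individual)                 # "survival of the fittest" criterion

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]      # keep the fittest half
    children = []
    while len(survivors) + len(children) < POP:
        p1, p2 = random.sample(survivors, 2)
        cut = random.randrange(1, GENES)
        child = p1[:cut] + p2[cut:]        # crossover
        if random.random() < 0.1:          # mutation
            i = random.randrange(GENES)
            child[i] = 1 - child[i]
        children.append(child)
    population = survivors + children

print(max(fitness(ind) for ind in population))  # should be at or near 10
```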
Learning Goals:
You should be able to:
Explain the fundamental structure of an agent and the differences between agent types
Given a description, identify agents and environments as well as their properties