In artificial intelligence, agents are categorized into five different types depending on their level of intelligence. The types of agents in AI are as follows:
- Simple Reflex Agents
- Model Based Reflex Agents
- Goal Based Agents
- Utility Based Agents
- Learning Agents
Now let us study each of these types of agents in AI in detail, one at a time.
Simple Reflex Agents
The working of a Simple Reflex Agent is similar to our body's reflex system. Just as a reflex responds only to the current stimulus, without considering any past experience, a Simple Reflex Agent acts purely on the current situation.
Simple Reflex Agents take only the current situation into account and act accordingly. These agents succeed only when the correct decision can be made on the basis of the current percept alone. Hence, we can say that this agent succeeds only when the environment is fully observable.
The decision-making function in these agents is the condition-action rule. This rule maps a condition to an action: if the condition turns out to be true, the action is taken; otherwise it is not. In partially observable environments these agents often fall into infinite loops, which can be escaped if the agent has the ability to randomize its actions.
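The condition-action rules described above can be sketched in a few lines of Python. This is a minimal, hypothetical example using the classic two-location vacuum world: the agent maps each percept directly to an action and keeps no history.

```python
# A simple reflex agent for a hypothetical two-location vacuum world.
# The percept is a (location, status) pair; condition-action rules map
# each percept directly to an action, with no memory of past percepts.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules, checked in order:
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'MoveRight'
    elif location == 'B':
        return 'MoveLeft'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(simple_reflex_vacuum_agent(('A', 'Clean')))  # MoveRight
```

Note that the agent only ever sees the current percept; if the environment hid the dirt status, rules like these could cycle forever between the two squares.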
Some limitations of Simple Reflex Agents are:
- Intelligence in these agents is very limited.
- The condition-action rules are generally too numerous to generate and store.
- These agents do not consider past experience.
- They are unable to adapt easily to changes in the environment.
Model Based Reflex Agents
These agents are able to handle environments that are partially observable. These agents keep track of the previous and the current state and find a rule that matches the current situation. So, these agents have a memory of their own.
Hence, a Model Based Agent works on:
- Model – This contains the knowledge about how different things take place in the environment, i.e., "how the world works".
- Internal State – This represents the current state of the world as the agent believes it to be, based on its percept history.
In these agents the internal state is maintained and adjusted according to the percepts received from the environment. In order to keep this internal state up to date, the agent requires the following information:
- How has the world evolved (independent of the agent)?
- How will the world be affected by the agent's actions?
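A minimal sketch of this idea, again using the hypothetical two-location vacuum world: the agent maintains an internal model built from its percept history and uses it to decide when it can safely stop, something a simple reflex agent cannot do.

```python
# A model-based reflex agent for a hypothetical two-location vacuum world.
# It keeps an internal state (a model of each square's status) updated
# from its percept history and from the predicted effects of its actions.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: best guess about each square, initially unknown.
        self.model = {'A': None, 'B': None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # how the world looks now
        if status == 'Dirty':
            self.model[location] = 'Clean'     # predicted effect of sucking
            return 'Suck'
        if all(s == 'Clean' for s in self.model.values()):
            return 'NoOp'                      # model says all squares are clean
        return 'MoveRight' if location == 'A' else 'MoveLeft'

agent = ModelBasedVacuumAgent()
print(agent.act(('A', 'Dirty')))   # Suck
print(agent.act(('B', 'Clean')))   # NoOp (model now shows both squares clean)
```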
Goal Based Agents
These agents take Model Based Agents a step further. Here the agents are capable of working out their actions on the basis of a 'goal', which stores information regarding the desirable situations. The goal directs the agent to choose among the various actions planned to reach it. As a result, good searching and planning techniques play an important role in these agents.
Goal Based Agents keep track of how far they currently are from the goal, and every action they take is intended to reduce that distance. In order to do so, the agent updates its knowledge regularly, which increases its flexibility. Due to this flexibility, these agents can change their behavior in response to a changing environment.
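The "reduce the distance to the goal" idea can be sketched with a hypothetical agent on a one-dimensional corridor: at each step it predicts the outcome of every available action and picks the one whose result is closest to the goal.

```python
# A goal-based agent on a hypothetical 1-D corridor. Each action shifts
# the agent's position; the agent chooses the action whose predicted
# next state minimizes the remaining distance to the goal.

ACTIONS = {'Left': -1, 'Right': +1, 'Stay': 0}

def goal_based_agent(state, goal):
    # Pick the action whose predicted outcome is closest to the goal.
    return min(ACTIONS, key=lambda a: abs(goal - (state + ACTIONS[a])))

# Walk from position 0 toward goal position 3:
state, goal = 0, 3
while state != goal:
    state += ACTIONS[goal_based_agent(state, goal)]
print(state)  # 3
```

In realistic problems the one-step lookahead above is replaced by proper search and planning algorithms, but the principle is the same: actions are evaluated by how much closer they bring the agent to the goal.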
Utility Based Agents
The Utility Based Agents are similar to the Goal Based Agents. There can be multiple sequences of actions through which a goal state can be achieved, and some sequences are better than others. The success or failure of this agent depends on choosing an appropriate route to reach the goal.
The difference between Goal Based and Utility Based Agents lies in the fact that Goal Based Agents only check whether a state is a possible goal state or not, whereas Utility Based Agents also measure the desirability (preference, or utility) of the different routes to the goal state. This measure is provided by a utility function, which maps each state to a real number describing the agent's degree of happiness in that state.
A Utility Based Agent has to keep regular track of its environment. This enables it to reach a goal state in the quickest, cheapest, and safest way. A rational agent selects the best possible action based on the utility of the outcome. Hence these agents involve a great deal of perception, reasoning, learning, and representation.
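The utility function described above can be sketched as follows. The routes, their attributes, and the weights in the utility function are all hypothetical; the point is that every route reaches the goal, yet the agent ranks them by a real-valued utility instead of treating them as interchangeable.

```python
# A utility-based choice among three hypothetical routes to the same goal.
# A goal-based agent would accept any of them; the utility function maps
# each outcome to a real number so the agent can pick the most desirable.

routes = {
    'highway':  {'time': 30, 'risk': 0.3},
    'backroad': {'time': 45, 'risk': 0.1},
    'shortcut': {'time': 25, 'risk': 0.8},
}

def utility(outcome):
    # Higher is better: penalize both travel time and risk.
    # The weight 100 on risk is an arbitrary illustrative choice.
    return -(outcome['time'] + 100 * outcome['risk'])

best = max(routes, key=lambda r: utility(routes[r]))
print(best)  # backroad
```

With these particular weights the slower but safer route wins; changing the weights changes the agent's preferences, which is exactly what the utility function encodes.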
Learning Agents
As the name suggests, a learning agent has the capability to learn from its past experiences. It starts working in unknown environments using its basic knowledge and then automatically adapts to the environment through its learning process in order to reach its goal.
These agents comprise the following components:
- Learning Element – It is through the learning element that the agent learns about the changes in the environment.
- The Critic – The critic provides feedback to the learning element by evaluating the agent's actions against performance standards fixed in advance. The agent comes to know how well it is performing from this feedback.
- Performance Element – This is responsible for selecting the best external action for the agent.
- Problem Generator – This component suggests exploratory actions that lead the agent to new and informative experiences. It also keeps track of the agent's past actions.
These agents are hence capable of learning, analyzing their performance, and finding new and innovative ways to improve their performance.
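The four components above can be sketched together in one small program. The scenario (a two-armed bandit with hypothetical, fixed rewards) and the update rule are illustrative assumptions, not part of the standard definition: the performance element picks the action currently believed best, the problem generator occasionally suggests an exploratory action, the critic scores the result against a fixed standard, and the learning element updates the agent's knowledge.

```python
# A minimal learning-agent sketch on a hypothetical two-armed bandit.
# Each of the four components from the text appears as one method.

import random

class LearningAgent:
    def __init__(self, actions):
        self.estimates = {a: 0.0 for a in actions}  # learned knowledge
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        # Select the best external action according to current knowledge.
        return max(self.estimates, key=self.estimates.get)

    def problem_generator(self, explore=0.2):
        # Occasionally suggest a new, informative (exploratory) action.
        if random.random() < explore:
            return random.choice(list(self.estimates))
        return self.performance_element()

    def critic(self, reward, standard=0.5):
        # Feedback: how the observed reward compares to a fixed standard.
        return reward - standard

    def learning_element(self, action, feedback):
        # Improve the estimate for the chosen action (running average).
        self.counts[action] += 1
        self.estimates[action] += (feedback - self.estimates[action]) / self.counts[action]

random.seed(0)
agent = LearningAgent(['A', 'B'])
true_reward = {'A': 0.2, 'B': 0.9}   # unknown to the agent
for _ in range(200):
    action = agent.problem_generator()
    feedback = agent.critic(true_reward[action])
    agent.learning_element(action, feedback)
print(agent.performance_element())   # the agent learns that 'B' pays off more
```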
Agents interact with the environment through percepts and actions. But while solving complex problems, simple agents usually fail. So, in order to solve complex problems, problem solving agents are used. These types of agents in AI use techniques such as heuristics and tree-search algorithms in order to reach the solution successfully.
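As a small illustration of heuristic search, the sketch below runs greedy best-first search on a tiny hypothetical graph; the node names, edges, and heuristic values are all made up for the example. The heuristic estimates each node's distance to the goal, and the search always expands the node that looks closest.

```python
# Greedy best-first search on a small hypothetical graph. The frontier
# is a priority queue ordered by the heuristic estimate of the distance
# from each node to the goal.

import heapq

graph = {
    'S': ['A', 'B'],
    'A': ['G'],
    'B': ['A', 'G'],
    'G': [],
}
heuristic = {'S': 3, 'A': 1, 'B': 2, 'G': 0}  # estimated distance to goal

def greedy_best_first(start, goal):
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (heuristic[nbr], nbr, path + [nbr]))
    return None  # no path found

print(greedy_best_first('S', 'G'))  # ['S', 'A', 'G']
```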