Episode 3: The Agent’s Mind – Rules, Logic, and Learning

We already know that an AI agent perceives its environment and acts on it… but the big question is: how does it decide what to do? Does it follow a manual, improvise, or learn as it goes?

To answer that, we have to get inside the head, or rather, the “mind,” of an agent.

Types of artificial minds: how do agents think?

  1. Reactive agents: These are like automatic reflexes. They perceive something and react without thinking too much. For example, a smoke detector: if there is smoke, it activates the alarm. It does not analyze the context or evaluate options. They are simple, fast, but limited. They cannot plan or learn.
  2. Rule-based agents: Here things improve. These agents have a set of “if A happens, do B” rules.

Let’s think about a basic technical support bot:

  • If the user says “I forgot my password,” the bot responds with instructions on how to recover it.
  • If it says “I can’t connect,” the bot suggests checking the connection.

This is hand-programmed logic. It is effective for limited scenarios, but if a query appears that is not covered by the rules, the agent simply does not know what to do.
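The rule-matching logic of a bot like this can be sketched in a few lines. This is a minimal, illustrative sketch: the trigger phrases, responses, and the `respond` function are invented for this example, not taken from any real support system.

```python
# Minimal sketch of a rule-based support bot.
# The rules and responses below are hypothetical examples.

RULES = {
    "forgot my password": "Here are the steps to recover your password: ...",
    "can't connect": "Please check your network connection and try again.",
}

def respond(message: str) -> str:
    """Return the first matching rule's response, or a fallback."""
    text = message.lower()
    for trigger, response in RULES.items():
        if trigger in text:
            return response
    # The weakness of pure rules: anything unanticipated falls through.
    return "Sorry, I don't know how to help with that yet."

print(respond("Hi, I forgot my password"))  # matches the first rule
print(respond("My screen is flickering"))   # falls through to the fallback
```

Every behavior must be anticipated by a human: the bot has no way to handle the flickering-screen query until someone writes a rule for it.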

  3. Knowledge-based agents: Now let’s imagine an expert medical system. It not only follows predefined rules, but also has a knowledge base about diseases, symptoms, and treatments. It can deduce, reason, and offer diagnoses based on that knowledge base, even combining data to reach new conclusions.
    They are more sophisticated, but they also depend on how complete and up-to-date the knowledge base is.
  4. Learning agents: Here we reach the next level: agents that, in addition to perceiving and acting, learn from experience. They don’t just apply rules or consult a repository; they improve their performance every time they interact with the environment. How do they do this? Through a system of rewards and objectives.
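The “combining data to reach new conclusions” idea behind knowledge-based agents can be sketched as forward chaining: keep applying rules to known facts until no new conclusions appear. The “facts” and rules below are invented for illustration and are not real diagnostic knowledge.

```python
# Tiny forward-chaining sketch of a knowledge-based agent.
# Facts and rules are hypothetical, purely for illustration.

facts = {"fever", "cough"}

# Each rule: if all premises are known, conclude the consequent.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest_and_fluids"),
]

# Apply rules repeatedly until no new fact is derived (chaining).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # observed symptoms plus the derived conclusions
```

Note how the second conclusion is reached only by combining an earlier derived fact with a rule, which is what lets such systems go beyond a flat list of if-then pairs.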

The key: goal and reward

An agent that is learning needs to be clear about which goal to pursue and how to measure its progress. This is achieved through:

  • Objective function: the purpose the agent must fulfill. Example: reduce response times in technical support.
  • Reward: a signal indicating whether the agent is doing well or poorly. If it responds quickly and correctly, it earns points; if it confuses the user, it loses points.

Thus, the agent adjusts its decisions to maximize the reward. Like a child who learns that tidying up their toys earns praise… or a treat.
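The reward loop described above can be sketched with a simple trial-and-error learner (an epsilon-greedy bandit): it tries different strategies, receives a reward signal, and shifts toward whatever earns more. The strategy names and the simulated success rates are invented for this sketch; a real support agent would get its reward from actual user feedback.

```python
import random

# Hedged sketch of reward-driven learning (epsilon-greedy bandit).
# Strategies and success probabilities are hypothetical.

random.seed(0)
strategies = ["short_reply", "detailed_reply"]
values = {s: 0.0 for s in strategies}   # estimated reward per strategy
counts = {s: 0 for s in strategies}
true_success = {"short_reply": 0.4, "detailed_reply": 0.7}  # hidden from the agent

def reward(strategy: str) -> float:
    """Simulated environment: 1 if the user was satisfied, else 0."""
    return 1.0 if random.random() < true_success[strategy] else 0.0

for step in range(500):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        s = random.choice(strategies)
    else:
        s = max(strategies, key=values.get)
    r = reward(s)
    counts[s] += 1
    values[s] += (r - values[s]) / counts[s]  # incremental average

best = max(strategies, key=values.get)
print(best)  # the strategy with the higher estimated reward
```

No rule ever told the agent which reply style is better; the preference emerges entirely from accumulated reward, which is exactly what separates a learning agent from a rule-based one.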

And why is this important?

Because not all problems require the same type of agent. Some can be solved with simple rules, others need an expert with extensive knowledge, and others can only be addressed with agents that learn and adapt to changing situations.

In a world of constant change, having agents that are capable of learning can make the difference between becoming obsolete and leading the change.

At SMS Sudamérica, when we design intelligent solutions, we don’t stick to just one type of agent. We know that every challenge requires a different approach:

  • If the problem is recurring and limited, we apply rule-based agents that are fast and efficient.
  • If the domain is complex, such as in industrial projects or the healthcare sector, we use knowledge-based systems with validated sources and robust decision structures.
  • And when the environment is dynamic or uncertain, we rely on agents that learn, capable of improving over time and adapting to new conditions.

But in all cases, our unique selling point is the same: creating agents that not only perform tasks, but also enhance human intelligence. With transparency, ethics, and a clear purpose.

In the next episode of “Stories of the Future, Today!”, we’ll explore how agents learn through reinforcement, and what it means for an AI to experiment, fail, and improve… just like us.

Because the future will not belong to those who have the most data, but to those who know how to learn from it best.

Note by: María Dovale Pérez

🔊 If you don’t have time to read the full article, you can listen to the debate between two AIs on the topic! Press play and discover their arguments: