Episode 2: Sensors, perceptions, and environments

Imagine you are a secret agent. But not just any secret agent: a secret agent who must infiltrate a building… blindly. No map, no flashlight, no sensors. Impossible, right? Well, for an AI agent, it’s just as critical: without perception, no decision is possible.

That raises a few questions:

  • How does an AI agent perceive the world?
  • What kind of “eyes” does it have?
  • And how reliable is what it perceives?

Let’s start at the beginning: sensors. A sensor is any device or system that allows an agent to collect information about its environment.

  • A Roomba has infrared sensors that detect walls.
  • An autonomous car uses cameras, LiDAR, and radar to detect approaching pedestrians.
  • A chatbot like ChatGPT doesn’t have cameras, but its sensor is… the text we write to it.

Each agent is limited by what it can perceive. If its sensor is poor or its “vision” is partial, its ability to make good decisions is also compromised.

However, not all environments are the same. In artificial intelligence, we usually classify them to understand how difficult it is for an agent to operate in them. Here are the key categories:

Types of environments faced by an agent

  1. Deterministic vs. Stochastic
    • A deterministic environment is like chess: if I move the queen, the board changes in a predictable way.
    • A stochastic environment is like driving in real life: you can plan your route, but there are always unforeseen events… a dog crossing the road, a car braking suddenly, unexpected rain.
  2. Known vs. Unknown
    • In a known environment, the agent knows how the world works. For example, a flight simulator: all physical rules are encoded.
    • In an unknown environment, the agent learns as it goes. It’s like when we move to a new city and don’t know where the supermarkets are or what the best route is.
  3. Fully observable vs. Partially observable
    • A fully observable environment is like a game of chess: you see all the pieces, you know the complete state.
    • But in real life, most environments are partially observable. An autonomous car cannot see what is behind a curve or inside another vehicle. It needs to estimate or predict what it cannot see.

Let’s compare two examples:

  • An agent that plays chess operates in an environment that is:
    • Deterministic
    • Known
    • Fully observable

Each move has a defined effect and the board is always visible.

  • An autonomous car faces:
    • A stochastic environment
    • Often unknown in detail (each street may be different)
    • And partially observable (it can’t see everything)

That’s why training an autonomous car is much more complex than programming a chess champion.
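The three axes above can be captured in a few lines of code. The sketch below is purely illustrative (the class, its fields, and the “difficulty” score are inventions for this article, not a standard API): it encodes each environment as three yes/no properties and counts how many work against the agent.

```python
from dataclasses import dataclass

# Illustrative sketch: the three classification axes discussed above,
# represented as boolean flags on a small data structure.
@dataclass(frozen=True)
class Environment:
    name: str
    deterministic: bool      # same action, same result?
    known: bool              # are the world's rules encoded in advance?
    fully_observable: bool   # can the agent see the complete state?

    def difficulty(self) -> int:
        """Rough proxy: each 'hard' property adds one point."""
        return sum([not self.deterministic,
                    not self.known,
                    not self.fully_observable])

chess = Environment("chess", deterministic=True, known=True,
                    fully_observable=True)
driving = Environment("autonomous driving", deterministic=False,
                      known=False, fully_observable=False)

print(chess.difficulty())    # 0: every axis works in the agent's favor
print(driving.difficulty())  # 3: stochastic, unknown, partially observable
```

The gap between those two scores is exactly why the next sentence holds.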

So… how do they decide with limited perceptions?

When an agent cannot see everything, it needs internal models of the world: a representation of what it thinks is happening and what might happen if it acts in a certain way. It’s like when you hear a noise in the kitchen: you didn’t see anything, but you infer that your cat just knocked something over (or at least that’s what we want to believe!).

That’s why sensors are just the tip of the iceberg. What’s really valuable is how the agent interprets that data to complete the puzzle of the environment.
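The “noise in the kitchen” inference can be sketched as a tiny internal model: a belief, i.e. a probability distribution over hidden states, updated with Bayes’ rule when a noisy observation arrives. Every state name and probability below is made up for illustration; this is a minimal sketch of the idea, not a production filter.

```python
# Minimal sketch of an internal model: a belief (probability distribution)
# over states the agent cannot see directly, revised when evidence arrives.

def update_belief(belief, likelihoods):
    """Bayes update: weight each state's prior probability by how well
    that state explains the observation, then renormalize to sum to 1."""
    posterior = {s: belief[s] * likelihoods[s] for s in belief}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# The agent hears a crash in the kitchen but sees nothing.
belief = {"cat_knocked_something": 0.5, "wind": 0.3, "intruder": 0.2}

# P(crash | state): invented numbers — a crash is very likely if the cat
# is to blame, unlikely if it was just the wind.
likelihoods = {"cat_knocked_something": 0.9, "wind": 0.2, "intruder": 0.5}

belief = update_belief(belief, likelihoods)
best = max(belief, key=belief.get)
print(best)  # prints "cat_knocked_something": the most probable explanation
```

Nothing was directly observed, yet the agent now holds a ranked guess about what happened: that is the internal model doing the work the sensors cannot.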

And why should you care about all this?

Because in a future filled with agents, the more we understand about their “eyes” and “ears,” the better we will be able to coexist with them. And above all, we will know where they can fail… because if their perception is limited, their decisions will be too.

At SMS Sudamérica, when designing solutions, we don’t just build powerful models: we first analyze what digital “sensors” the system will have, what environments it will face, and what perception gaps exist. Only then can we guarantee that the decisions made by our agents are sound, transparent, and aligned with the real context in which they will operate.

For us, developing responsible technology is not just a technical issue: it is an ethical responsibility to every customer, every industry, and every person who lives with these intelligences. Because only by understanding the world they perceive can we design a future where collaboration between humans and machines is truly virtuous.

Note by: María Dovale Pérez

🔊 If you don’t have time to read the full article, you can listen to the debate between two AIs on the topic! Press play and discover their arguments: