
Why We Took the Guardrails Off Our AI

Interior of an autonomous vehicle with no driver looking out at a futuristic smart city
Written by
Keith Gipson
Published on
February 23, 2026

The Safety Driver Problem

In 2024, Tesla removed the safety driver from its autonomous vehicles. No backup human behind the wheel. No emergency override pedal. Just the AI, doing what it was built to do.

The reaction was predictable. Some people panicked. Others said it was reckless. But here is what actually happened: the cars performed better without the safety driver than with one. The AI had always been capable. The safety drivers were never making the cars safer; they were just making everyone feel better.

The building automation industry is stuck in the same moment right now. And at Facil.AI, we have made a decision: it is time to take the guardrails off.

Why the Industry Defaults to "Human in the Loop"

If you have spent any time evaluating Building Management System (BMS) optimisation tools, you have heard the phrase "human in the loop" more times than you can count. Every vendor says it. Every pitch deck features it. It sounds responsible. It sounds safe.

But what does it actually mean?

It means the AI generates a recommendation, then waits for a human to approve it. It means the system identifies an optimisation opportunity at 2am on a Sunday, and nothing happens until Monday morning when someone logs in and clicks "approve." It means you are paying for artificial intelligence that is not allowed to be intelligent.

"Human in the loop" is not a feature. It is an admission that the AI is not good enough to act on its own.

And for most vendors, that is the truth. Their AI is not good enough. So they wrap it in guardrails and sell the guardrails as a feature.

What We Mean by "Removing the Guardrails"

Let us be clear about what we are not saying. We are not saying building operators should have no visibility into what the AI is doing. We are not saying there should be no monitoring, no dashboards, no oversight.

What we are saying is this: the AI should not need permission to do its job.

Our AI-agents at Facil.AI make over 30,000 control loop adjustments per day across the buildings they manage. Every five minutes, they are learning, adapting, and optimising. They do not generate reports and wait for approval. They act. They make micro-adjustments that a human operator could never replicate at that speed or scale.
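To make the idea of continuous micro-adjustment concrete, here is a toy sketch of a single autonomous control loop running at a five-minute cadence. Every name and value is illustrative; this is not Facil.AI's actual control logic, just the shape of act-without-approval optimisation.

```python
# Toy autonomous control loop: each cycle, the agent nudges a setpoint
# toward a learned target without waiting for human sign-off.
# All names and values are illustrative assumptions.

def adjust_setpoint(current: float, target: float, max_step: float = 0.5) -> float:
    """Move the setpoint toward the target, bounded by a per-cycle step."""
    delta = target - current
    step = max(-max_step, min(max_step, delta))  # clamp to the step limit
    return current + step

# One simulated day at a five-minute cadence: 24 h x 12 cycles/h = 288 cycles.
setpoint = 21.0  # starting supply temperature, degrees C (hypothetical)
for _ in range(288):
    setpoint = adjust_setpoint(setpoint, target=19.5)
```

The point of the sketch is the cadence: 288 small, bounded corrections per day per loop, none of which would ever survive a manual approval queue.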

When we say we removed the guardrails, we mean:

  • The AI is not constrained by rigid, human-defined parameters that were set once and never updated
  • The AI develops deep understanding of every piece of equipment it manages, learning its characteristics, tolerances, and optimal operating conditions
  • The AI self-regulates through intelligence, not through external constraints imposed by people who are not watching the system 24/7

This is the difference between a thermostat with a schedule and an intelligence that understands the building.

Forced Resiliency Through Intelligence

The old approach to AI safety in buildings was simple: set hard limits. Define parameters. If the AI tries to do something outside those limits, stop it.

The problem? Those limits were set by humans, based on assumptions, at a single point in time. They do not adapt. They do not learn. They just sit there, preventing the AI from doing anything the original programmer did not anticipate.

We took a different path. Instead of constraining the AI with external rules, we built intelligence into the AI itself. Our AI-agents develop what we call forced resiliency: the ability to maintain safe, efficient operation not because something is stopping them from doing otherwise, but because they genuinely understand the equipment and the environment they are managing.

Think about it this way. A new driver follows rules: stay under the speed limit, stop at red lights, do not exceed the lane markings. An experienced driver still follows those rules, but they also read the road. They anticipate the lorry that is about to change lanes. They adjust their speed for the curve ahead. They do not need the guardrails on the motorway because they understand how to drive.

That is what our AI does. It does not need external constraints because it has developed genuine understanding.

The Results Speak for Themselves

We are not making a philosophical argument here. We have data.

Our AI-agents have delivered 48% energy efficiency improvements across the buildings they manage. Not the industry-standard 15% that most vendors promise. Forty-eight percent. That is not a marginal improvement. That is a fundamentally different level of performance.

And here is the thing: those results come precisely because the AI operates autonomously. It does not wait for approval. It does not generate recommendations that sit in someone's inbox. It acts, continuously, in real time, thousands of times per day.

If we had kept the guardrails on, if we had required a human to approve every adjustment, those results would be impossible. You cannot achieve 30,000 optimisations per day with a human in the loop. The maths simply does not work.
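The arithmetic behind "the maths simply does not work" is easy to check. A back-of-the-envelope calculation (illustrative only) shows how little time a human approver would have per decision:

```python
# 30,000 adjustments spread evenly over one day leaves under three
# seconds per decision -- far too fast for any human approval workflow.

SECONDS_PER_DAY = 24 * 60 * 60        # 86,400
ADJUSTMENTS_PER_DAY = 30_000

seconds_per_adjustment = SECONDS_PER_DAY / ADJUSTMENTS_PER_DAY
print(f"{seconds_per_adjustment:.2f} seconds per adjustment")
```

That works out to roughly 2.9 seconds per adjustment, around the clock, with no breaks.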

"But What If Something Goes Wrong?"

This is always the question. And it is a fair one.

The answer is not "nothing can go wrong." The answer is that our AI is better at handling things going wrong than a human operator would be.

Our AI-agents perform continuous diagnostics on every piece of equipment they manage. They detect drift, anomalies, and degradation in real time. When conditions change, when connectivity drops, when equipment behaviour shifts, the AI adapts. It does not need someone to notice the problem, log in, diagnose it, and decide what to do. It handles it.
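One common pattern for the kind of real-time drift and anomaly detection described above is a rolling statistical check on each incoming reading. The sketch below is a minimal, generic illustration of that pattern, not Facil.AI's actual diagnostics:

```python
# Minimal drift/anomaly sketch (illustrative, not Facil.AI's method):
# flag a reading whose z-score against a rolling window of recent
# readings exceeds a threshold.
from collections import deque
from statistics import mean, stdev

def make_drift_detector(window: int = 20, threshold: float = 3.0):
    history = deque(maxlen=window)

    def check(reading: float) -> bool:
        """Return True if the reading looks anomalous vs recent history."""
        anomalous = False
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) / sigma > threshold:
                anomalous = True
        history.append(reading)
        return anomalous

    return check

check = make_drift_detector()
steady = [check(20.0 + 0.1 * (i % 3)) for i in range(30)]  # normal sensor noise
spike = check(35.0)  # sudden jump, e.g. a failing sensor or drifting valve
```

Steady readings pass quietly; the sudden jump is flagged immediately, with no one needing to log in and notice it first.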

We also provide full visibility through a secure web interface. Building operators can see exactly what the AI is doing, review its decisions, and understand its reasoning. But visibility is not the same as control. You can watch and understand without needing to approve every action.

Monitoring is important. Requiring approval for every micro-adjustment is not.

The Competition Wants You to Be Afraid

Here is something worth thinking about: why does every other vendor in this space lead with guardrails and human oversight as their primary selling points?

Because they have to. Their AI is not capable of operating autonomously. So they frame that limitation as a feature. They tell you that human oversight is responsible and that autonomous AI is dangerous. They want you to believe that the safety driver is essential.

But the safety driver is not protecting you. The safety driver is protecting the vendor from having to build AI that actually works.

At Facil.AI, we have decades of experience designing, installing, and commissioning building management systems in mission-critical environments. Our team has worked in hospitals, data centres, universities, and retail chains where equipment failure is not an option. That experience is baked into our AI. It is not something we bolt on as an afterthought.

We did not remove the guardrails because we are reckless. We removed them because we built something good enough that they are not needed.

Where We Go From Here

The building automation industry is at an inflection point. The tools are finally capable of delivering truly autonomous operation. The question is whether building operators and facility managers will embrace that capability, or whether the industry will keep hiding behind the comfortable fiction of "human in the loop."

We know where we stand. Our AI-agents are autonomous, intelligent, and constantly improving. They deliver results that are simply not possible with traditional approaches. And they do it without needing a human to hold their hand.

The guardrails are off. And the buildings are better for it.

Keith Gipson is the CEO of Facil.AI. To learn more about how autonomous AI-agents are transforming building energy management, visit Facil.AI or get in touch.

Easy? Sí, Facil.