The Cascade Radius
- James W.

LinkedIn Post #39 | Cognitive Corp | February 2026
Your building AI made a decision at 2:47 AM.
By 6:15 AM, 400,000 people felt the consequences.
Not because the decision was wrong. Because nobody asked how far it would travel.
We talk about building AI accuracy. We measure energy savings, comfort scores, maintenance intervals. We optimize inside the envelope.
But the hardest governance question isn’t what happens inside the building. It’s what happens when the decision leaves.
Consider the cascade radius:
A transit authority’s maintenance AI deprioritizes a ventilation repair at Station 47. Reasonable — the model ranked 11 other stations higher. But Station 47 serves a community with no alternative transit options. The cascade: 47,000 daily riders disrupted. Local businesses lose foot traffic. The decision was optimized. It was also inequitable. And nobody in the system was asking.
A bank headquarters’ environmental AI adjusts cooling in the operations center to improve energy efficiency. Three degrees warmer. Reasonable — within comfort guidelines. But the operations center runs the systems that clear $2 trillion in daily transactions. The cascade: a thermal event affecting financial infrastructure. The decision was efficient. It was also ungoverned. And the building AI had no concept of what it was cooling.
A utility’s facility AI reallocates HVAC resources from a grid operations center to a visitor lobby during peak tour hours. Hospitable. But the operations center coordinates emergency response for 5.2 million customers. The cascade: a comfort optimization that compromised grid reliability infrastructure. The decision made sense locally. It was invisible globally.
This is the cascade radius problem. Every building AI decision has a blast zone. Some decisions stay inside the walls. Some reach the street. Some reach the grid, the financial system, or the daily commute of millions.
Current building AI doesn’t measure cascade radius. It doesn’t ask: if this decision is wrong, who feels it? How far does it travel? What systems depend on the environment I’m optimizing?
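Those three questions can be phrased as a pre-decision check. The sketch below is purely illustrative: the names (`Dependent`, `Decision`, `cascade_radius`) and the three-tier classification are my own assumptions, not an existing building-AI API or part of the Building Constitution.

```python
# Hypothetical sketch: classify a decision's "blast zone" before acting.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Dependent:
    """Something outside the decision's envelope that relies on the
    environment being adjusted (riders, downstream systems, etc.)."""
    name: str
    people_affected: int
    critical_infrastructure: bool = False  # e.g. grid ops, payment clearing

@dataclass
class Decision:
    description: str
    dependents: list = field(default_factory=list)

def cascade_radius(decision: Decision) -> str:
    """Answer 'if this is wrong, how far does it travel?' with one of
    three illustrative tiers: 'building', 'street', or 'systemic'."""
    if any(d.critical_infrastructure for d in decision.dependents):
        return "systemic"   # reaches the grid, financial system, etc.
    if sum(d.people_affected for d in decision.dependents) > 0:
        return "street"     # leaves the walls, stays local
    return "building"       # stays inside the envelope

# The Station 47 example from above:
station_47 = Decision(
    "deprioritize ventilation repair at Station 47",
    [Dependent("daily riders with no alternative transit", 47_000)],
)
print(cascade_radius(station_47))  # → street
```

The point isn't the code; it's that the dependency list has to exist before the optimizer runs. A model that only sees energy and comfort scores would return "building" for all three scenarios in this post.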
The Building Constitution was designed for exactly this. Not just governing what AI does inside a building — but accounting for where those decisions go when they leave.
Because the measure of governed building AI isn’t accuracy inside the envelope. It’s accountability for the cascade.