The Governance Gap: The Hidden Risk in Your “Smart” Building
By James C. Waddell, President, Cognitive Corp | IFMA ITC Board Member
Part 1 of 4: The Foundational Blog Series on AI Governance in the Built Environment
Your building is making thousands of autonomous decisions right now. The HVAC system is fine-tuning airflow based on occupancy predictions. The lighting system is dimming corridor lights based on motion detection. The access control system is analyzing credentials against threat models. The energy management system is balancing load across zones.
Yet, none of these decisions are governed.
This divide is what we call the Governance Gap: the distance between the autonomous capabilities of your building’s AI systems and the governance frameworks that should regulate, audit, and explain their decisions. The gap exists in nearly every commercial building that has adopted “smart” technology over the past decade, and it represents a category of risk that many facility managers, corporate real estate (CRE) executives, and building owners have yet to fully evaluate.
The Scale of the Problem
Consider the current regulatory landscape. Financial AI is overseen by the SEC. Medical AI falls under the FDA. Automotive AI is regulated by the NHTSA. Building AI, however, has no dedicated regulator, at least for now. That void is closing: the EU AI Act (Regulation 2024/1689), whose core obligations apply from August 2, 2026, classifies AI systems used in the management of critical infrastructure, buildings included, as high-risk. Organizations that fail to establish governance frameworks face penalties of up to €35 million or 7% of global annual turnover.
But regulation is only one dimension of the exposure; the operational risk may be more pressing. When your building’s AI makes a decision that affects occupant safety, tenant comfort, or energy efficiency, three questions must be answerable: What decision was made? Why was it made? Who verified the decision logic?
In the vast majority of commercial buildings, there are no answers. The AI systems function as black boxes: they optimize, adjust, and learn, but they do not explain their reasoning, do not escalate edge cases, and do not defer to human judgment when the stakes demand it.
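To make the gap concrete, here is a minimal sketch, in Python, of what an answerable decision record could look like. The class, its field names, and the escalation threshold are our illustrative assumptions, not any vendor’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class GovernedDecision:
    """An auditable record answering: what, why, and who verified it."""
    system: str          # which building system acted, e.g. "HVAC"
    action: str          # what decision was made
    rationale: str       # why it was made (the explainability record)
    confidence: float    # model confidence, 0.0 to 1.0
    reviewed_by: Optional[str] = None  # who verified the decision logic
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def needs_escalation(self, threshold: float = 0.85) -> bool:
        """Low-confidence or unreviewed decisions defer to a human."""
        return self.confidence < threshold or self.reviewed_by is None

decision = GovernedDecision(
    system="HVAC",
    action="Reduce airflow to Zone 4 by 20%",
    rationale="Predicted occupancy under 10% for the next 2 hours",
    confidence=0.72,
)
if decision.needs_escalation():
    print(f"Escalate to facility manager: {decision.action} ({decision.rationale})")
```

The point is not the code itself but the contract: every autonomous action carries its own answer to what, why, and who verified.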
Why Current Approaches Fail
The demand for “smart buildings” has led to an influx of AI-powered products in the building technology industry. Building management system (BMS) vendors have incorporated machine learning for HVAC optimization; IoT platforms are utilizing occupancy analytics; and energy management systems are applying reinforcement learning for cost reduction.
However, none of these vendors include governance in their offerings. They focus solely on capability. Capability without governance introduces risk.
The NIST AI Risk Management Framework (AI 100-1, January 2023) defines four core functions: GOVERN, MAP, MEASURE, and MANAGE. GOVERN, which establishes governance policies and oversight mechanisms, is positioned as the prerequisite that allows the other three to work. Despite this, governance remains conspicuously absent from most building AI deployments.
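As a rough illustration of GOVERN as a prerequisite, the sketch below refuses to run the other three functions until a governance policy exists. The policy fields and function names are hypothetical, not drawn from the NIST text:

```python
# Illustrative only: treating GOVERN as a gate on the other three
# NIST AI RMF functions. The required policy fields are our invention.
REQUIRED_POLICY_FIELDS = ("accountable_owner", "audit_logging", "escalation_path")

def govern_is_satisfied(policy: dict) -> bool:
    """GOVERN: confirm oversight exists before anything runs autonomously."""
    return all(policy.get(f) for f in REQUIRED_POLICY_FIELDS)

def map_risks() -> None:
    print("MAP: catalog the contexts where the AI acts")

def measure_risks() -> None:
    print("MEASURE: quantify accuracy, drift, and impact")

def manage_risks() -> None:
    print("MANAGE: prioritize and respond to what was measured")

def run_ai_operations(policy: dict) -> None:
    if not govern_is_satisfied(policy):
        raise RuntimeError("GOVERN unsatisfied: autonomous operation refused")
    map_risks()
    measure_risks()
    manage_risks()

run_ai_operations({
    "accountable_owner": "Director of Facilities",
    "audit_logging": True,
    "escalation_path": "FM on-call, then building engineer",
})
```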
The consequences are predictable: buildings are becoming smarter, but not necessarily safer. They optimize and they learn, yet they offer neither transparency nor accountability. That is the essence of the Governance Gap.
What This Means for Your Organization
If you manage commercial real estate, healthcare facilities, data centers, cold storage, or any building type that implements autonomous AI systems, you are inevitably exposed to the Governance Gap. This exposure manifests in three key areas:
Compliance exposure: The EU AI Act, coupled with DHS critical infrastructure frameworks and the NIST AI RMF, emphasizes the need for governed AI in buildings. Organizations that retroactively implement governance measures will incur higher costs and face longer timelines than those that adopt governance proactively.
Financial exposure: Ungoverned AI decisions affecting energy usage, tenant comfort, or equipment lifecycle represent an unquantified financial risk. If an AI system defers maintenance to hit cost targets and that deferral contributes to an equipment failure, the absence of governance documentation becomes a liability.
Safety exposure: In healthcare, defense, and critical infrastructure settings, ungoverned AI decisions can jeopardize human safety. If a building AI modifies ventilation in a critical care area or dims lights along an emergency exit route, the absence of accountability and consequence-aware decision-making stops being a theoretical risk and becomes an operational hazard (a simple guard against exactly this failure mode is sketched below).
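As a minimal sketch of consequence-aware decision-making, assuming a hypothetical setpoint API, the guard below never lets the optimizer touch safety-critical assets without human sign-off:

```python
# Hypothetical consequence-aware guard: safety-critical assets are never
# adjusted autonomously, no matter what the optimizer proposes.
PROTECTED_ASSETS = {"emergency_exit_lighting", "critical_care_ventilation"}

def apply_setpoint(asset: str, value: float, human_approved: bool = False) -> str:
    """Apply an AI-proposed setpoint unless the asset is safety-critical."""
    if asset in PROTECTED_ASSETS and not human_approved:
        return f"BLOCKED: {asset} requires human approval (proposed {value})"
    return f"APPLIED: {asset} set to {value}"

print(apply_setpoint("corridor_lighting", 0.6))        # routine: applied
print(apply_setpoint("emergency_exit_lighting", 0.2))  # blocked pending review
```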
Next in This Series
In Part 2, we will unveil the Building Constitution—a comprehensive governance framework for building AI that effectively bridges the Governance Gap. This framework is structured around three key pillars: Explainable AI, Human-in-the-Loop oversight, and Bias Mitigation, and it aligns with NIST AI RMF, the EU AI Act, and DHS critical infrastructure standards.
James C. Waddell is the President of Cognitive Corp, a Chicago-based AI enablement company focused on the built environment. He serves on the IFMA Information Technology Council board and speaks internationally on AI governance in facility management.
cognitive-corp.com | bob@cognitivewx.info
Keywords: building AI, AI governance, Building Constitution, smart buildings, Governance Gap, NIST AI RMF, EU AI Act, explainable AI, Human-in-the-Loop, Bias Mitigation
