
Deposition Transcript: In the Matter of Building AI Governance

SUPERIOR COURT OF COUNTY


Civil Division


CASE NO. CV-2026-047821


STEWARD MANUFACTURING CORP.


Plaintiff


v.


NEXUS BUILDING SOLUTIONS, INC., ET AL.


Defendants


VIDEOTAPED DEPOSITION OF MICHAEL CHEN


Vice President, Building Operations


Date: March 15, 2026


Time: 10:00 a.m. - 4:30 p.m.


Location: Conference Room 7, Counsel & Associates, 450 Law Street


PRELIMINARY STATEMENT


This deposition was taken pursuant to the Civil Procedure Code on behalf of Plaintiff. The witness was duly sworn and testified as follows:


APPEARANCE OF COUNSEL:


For the Plaintiff: MARGARET SULLIVAN, ESQ.


For the Defendants: ROBERT CHEN, ESQ.


---


MS. SULLIVAN: Good morning. This is the videotaped deposition of Michael Chen, Vice President of Building Operations at Steward Manufacturing, in the matter of Steward Manufacturing Corp. v. Nexus Building Solutions, Inc. The date is March 15, 2026. Mr. Chen has been duly sworn. Mr. Chen, you understand you are testifying under oath?


MR. CHEN: I do.


TESTIMONY SECTION 1: BUILDING AI SYSTEMS IN OPERATION


MS. SULLIVAN: Let's start with your facility. Can you describe the AI and automated systems currently operating in the Steward Manufacturing complex?


MR. CHEN: Sure. We have a fairly sophisticated setup. The primary system is our Building Management System, or BMS—actually, we operate three interconnected BMS platforms across our three production zones. The HVAC system uses predictive optimization; it learns occupancy patterns and energy demand and adjusts temperature, humidity, and airflow in real-time. We have occupancy analytics running through our access control system. Motion sensors feed into a machine learning model that predicts when certain areas will be occupied, so we can pre-cool or pre-heat accordingly. We also have a separate AI-driven vendor system—a thermal optimization package installed by Nexus—that monitors our clean room environments, particularly critical given the pharmaceutical product we manufacture.


MS. SULLIVAN: So AI systems are making real-time decisions about climate control.


MR. CHEN: Correct. The BMS is controlling temperature set points across roughly 850,000 square feet. The predictive occupancy model adjusts HVAC demand about every 30 seconds. And Nexus's thermal system is running continuous optimization for the manufacturing zones—those are particularly sensitive environments where temperature and humidity tolerances are critical.


MS. SULLIVAN: And beyond climate?


MR. CHEN: Yes. We have water usage optimization—the system learns patterns and predicts consumption. Emergency response automation; if smoke is detected, the system automatically routes people via certain corridors based on occupancy prediction. We have cybersecurity monitoring tools that are themselves AI-based. Some lighting systems have basic ML for scheduling. Access control uses anomaly detection; it flags unusual entry patterns. And we have several vendor systems—HVAC optimization, energy dashboards, predictive maintenance scheduling. The facility is, I'd say, heavily AI-integrated.


TESTIMONY SECTION 2: GOVERNANCE FRAMEWORKS


MS. SULLIVAN: Who governs these systems? What governance framework exists for building AI at Steward Manufacturing?


MR. CHEN: [pause] I'm not sure I follow the question.


MS. SULLIVAN: Who decides what these systems do? Who audits their decisions? Who ensures they're functioning appropriately?


MR. CHEN: Oh. Well, IT manages the network and infrastructure security. Facilities handles day-to-day BMS operations. We have maintenance contracts with vendors like Nexus—they handle software updates and system tuning. I oversee operations overall.


MS. SULLIVAN: Is there a unified governance framework that connects all of those pieces?


MR. CHEN: Not... not explicitly. I mean, we have procedures. Facilities staff monitor dashboards. If something looks wrong, they respond.


MS. SULLIVAN: Do you have documented policies for what constitutes "something looking wrong"?


MR. CHEN: [long pause] Not in the way you're asking. We have alert thresholds set by the system vendors. If, say, temperature goes outside a range, an alert fires. We respond to that.


MS. SULLIVAN: And if the AI system itself makes a decision that wasn't explicitly predicted by vendor thresholds—a novel decision—who determines whether that decision is appropriate?


MR. CHEN: That would... we'd have to examine it case by case.


MS. SULLIVAN: Is there a documented case-by-case review process?


MR. CHEN: Not formal, no.


TESTIMONY SECTION 3: INCIDENT RESPONSE & ACCOUNTABILITY


MS. SULLIVAN: Let's discuss an incident. Last month, the occupancy prediction model made an adjustment that resulted in airflow reduction to the pharmaceutical processing zone during a peak production run. The model predicted low occupancy; occupancy wasn't low. That decision affected product stability. Can you walk me through how that was handled?


MR. CHEN: I don't recall the specifics of—


MR. CHEN'S COUNSEL: The witness can decline to answer if it implicates—


MS. SULLIVAN: The incident report filed with your insurance carrier on February 7th, 2026 references precisely this.


MR. CHEN: Right. Yes. I remember now. The occupancy sensor detected lower foot traffic than usual, and the predictive model extrapolated that to reduced need. We caught the variance fairly quickly—within maybe ninety minutes—when quality flagged temperature drift in the batch.


MS. SULLIVAN: And once you identified the problem, what happened?


MR. CHEN: We adjusted the occupancy sensor sensitivity. And we increased the minimum airflow threshold, so even if the model predicts low occupancy, the system doesn't go below a floor.


MS. SULLIVAN: Was that decision made by your engineering team?


MR. CHEN: Facilities worked with Nexus to make that change.


MS. SULLIVAN: Did you have an independent audit of the predictive model's performance?


MR. CHEN: No. Nexus is responsible for that system.


MS. SULLIVAN: So Nexus built the system, operates the system, and audits the system?


MR. CHEN: They maintain it, yes.


MS. SULLIVAN: Was there an incident report filed?


MR. CHEN: Yes, with insurance.


MS. SULLIVAN: And did Steward Manufacturing issue an internal incident report? A post-mortem analyzing why the AI system made this particular decision and what safeguards failed to catch it?


MR. CHEN: [pause] Not a formal one. The facilities team communicated with Nexus, and we made the change.


TESTIMONY SECTION 4: REGULATORY ALIGNMENT & THE EU AI ACT


MS. SULLIVAN: Mr. Chen, are you familiar with the European Union AI Act?


MR. CHEN: Generally, yes. I know it's a regulation on AI development and use.


MS. SULLIVAN: Are you aware that the EU AI Act has a compliance deadline of August 2026—which is five months from today?


MR. CHEN: I wasn't aware of that specific date, no.


MS. SULLIVAN: Are you aware that the EU AI Act applies to high-risk AI systems, including systems that make or significantly influence decisions affecting health and safety in the workplace?


MR. CHEN: I understand the Act exists, but I thought it was primarily for, you know, product AI—like algorithms in software products. Not building systems.


MS. SULLIVAN: The building management and climate control systems at Steward Manufacturing determine temperature, humidity, airflow, and emergency response routing in an environment where pharmaceutical manufacturing occurs. Would you characterize those as decisions affecting workplace health and safety?


MR. CHEN: Yes, absolutely.


MS. SULLIVAN: Does Steward Manufacturing have a risk assessment on file documenting the high-risk elements of these AI systems?


MR. CHEN: No.


MS. SULLIVAN: Does Steward Manufacturing have documented governance procedures ensuring transparency and explainability of these systems' decisions?


MR. CHEN: We have the vendor dashboards that show system status.


MS. SULLIVAN: Do those dashboards explain why the occupancy prediction model made the specific decision it did on February 7th?


MR. CHEN: Not specifically, no. It just shows the output—the adjustment that was made.


MS. SULLIVAN: So you cannot point to documentation explaining the reasoning behind an AI system's decision that affected product quality?


MR. CHEN: Not beyond what Nexus has in their system documentation.


ATTORNEY'S CLOSING SUMMARY


MS. SULLIVAN: Let me summarize what we've established in this deposition. Steward Manufacturing operates highly sophisticated AI systems across 850,000 square feet of manufacturing space. These systems make real-time decisions affecting temperature, humidity, airflow, emergency response, and occupancy prediction. We've documented at least one incident where an AI system made a decision that was not aligned with operational reality and affected product quality. When we examine governance, we find:


1. No unified governance framework connecting the multiple AI and automated systems across the facility.


2. Responsibility fragmented across IT, Facilities, and external vendors with no formal oversight mechanism.


3. No independent audit trail documenting AI system decisions, no explainability requirements, and no formal incident review process.


4. No awareness of applicable regulatory requirements (specifically the EU AI Act and its August 2026 compliance deadline), despite operating AI systems that fall squarely within their scope.


The governance framework described today consists of vendor maintenance contracts and ad hoc responses to system alerts. In contrast, Steward Manufacturing operates rigorous governance frameworks for product quality, operational AI, and cybersecurity. Building AI—which affects workplace safety, product integrity, and regulatory compliance—has no equivalent governance structure.


MS. SULLIVAN: We'll conclude here. The record is complete.


---


A NOTE FROM JAMES C. WADDELL, COGNITIVE CORP


The deposition transcript above is fictional. The governance gap it documents is not.


We created this scenario based on patterns we've observed across dozens of facilities in regulated industries. Pharmaceutical plants, semiconductor fabs, mining operations, and logistics centers are deploying increasingly sophisticated AI-driven building systems. These systems optimize energy, improve occupancy response, enhance safety, and drive operational efficiency. Yet governance frameworks have not kept pace.


Here's what we consistently find:


Building AI is treated as infrastructure rather than as a set of decision-making systems. BMS vendors and facilities teams maintain the systems, but no governance framework ensures the decisions those systems make are transparent, explainable, and aligned with organizational risk tolerance.


The EU AI Act's August 2026 deadline applies to high-risk AI systems affecting workplace health and safety. Building climate control, occupancy prediction, and emergency response systems meet that definition. Yet most organizations have not performed risk assessments or established explainability frameworks for these systems.


When incidents occur (and they do), response is ad hoc. There is no post-incident governance process documenting what went wrong, why the AI made that decision, and what safeguards failed. This creates regulatory exposure and prevents organizational learning.


Cognitive Corp's Building Constitution framework was designed specifically for this gap. It establishes governance aligned with three principles:


Explainability: Every AI decision affecting operations, safety, or compliance must be documented and traceable. Why did the system make that decision? What data informed it? What was the confidence level?


Human-in-the-Loop: Critical decisions require human review. Anomalies trigger escalation. Novel decisions are audited. Humans remain accountable for the systems they deploy.


Bias Mitigation: Building AI systems can perpetuate operational biases (over-allocating resources to certain zones, differentially responding to occupancy patterns, etc.). Governance requires active monitoring for unintended consequences.
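To make these principles concrete, here is a minimal Python sketch of how they might translate into a single BMS control step. All names, units, and thresholds (the zone ID, `MIN_AIRFLOW_CFM`, the confidence cutoff) are hypothetical illustrations, not part of any vendor's API: the raw model recommendation is clamped to a hard safety floor, every decision is written to an append-only log (explainability), and low-confidence or clamped decisions are flagged for human review (human-in-the-loop).

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical safety floor: airflow never drops below this,
# regardless of what the occupancy model predicts.
MIN_AIRFLOW_CFM = 12_000

@dataclass
class DecisionRecord:
    """One traceable entry in the audit trail: what the model
    proposed, what was actually applied, and why."""
    timestamp: str
    zone: str
    predicted_occupancy: float
    confidence: float
    proposed_airflow_cfm: float
    applied_airflow_cfm: float
    clamped_by_floor: bool
    escalated_to_human: bool

def decide_airflow(zone, predicted_occupancy, confidence,
                   model_output_cfm, confidence_threshold=0.8):
    """Apply governance safeguards to a raw model recommendation."""
    clamped = model_output_cfm < MIN_AIRFLOW_CFM
    applied = max(model_output_cfm, MIN_AIRFLOW_CFM)
    # Escalate when the model is unsure or the safety floor overrode it.
    escalate = confidence < confidence_threshold or clamped
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        zone=zone,
        predicted_occupancy=predicted_occupancy,
        confidence=confidence,
        proposed_airflow_cfm=model_output_cfm,
        applied_airflow_cfm=applied,
        clamped_by_floor=clamped,
        escalated_to_human=escalate,
    )
    # Append-only decision log: every adjustment is explainable after the fact.
    print(json.dumps(asdict(record)))
    return applied, escalate

# A low-confidence, below-floor proposal (like the February incident):
applied, escalate = decide_airflow(
    "pharma-zone-2", predicted_occupancy=0.1,
    confidence=0.55, model_output_cfm=8_000)
```

In this sketch the February incident would have been caught twice: the airflow floor prevents the unsafe adjustment, and the escalation flag puts a human in the loop before ninety minutes of temperature drift accumulate. The point is not this particular code but that the safeguards live in policy-enforcing logic the organization controls, not solely inside a vendor's black box.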


Recent moves by vendors like BrainBox and Trane to add governance features to their cloud BMS platforms are significant. But governance cannot be retrofitted onto a product; it must be embedded in organizational structure. Governance requires policy, process, people, and technology working in concert. No single tool provides that.


Our approach is different. We don't sell you a governance tool. We work with your team to build governance capacity—understanding your regulatory landscape, assessing the AI systems you already operate, designing frameworks that work for your risk profile, and implementing safeguards that allow your facilities teams to make better decisions.


Stop buying tools. Start building capacity.


If your facility operates building AI systems without unified governance, you have five months until the EU deadline. If you're in a regulated industry (pharmaceutical, semiconductor, mining, or beyond), the liability exposure from governance gaps is real. We've built a rapid assessment for exactly this scenario.


Learn more: Governance Gap Assessment


James C. Waddell


President, Cognitive Corp


Cognitive Corp | Building AI Governance Consulting


Blog #39 | March 2026
