
When AI Systems Set Their Own Baselines

Your AI decarbonisation system just reported a 15% carbon reduction at Building A. Impressive.


But who defined the baseline against which that reduction is measured?


If it's the same system that's being measured, you have a governance problem. The system has an incentive to revise baselines upward (by assuming higher occupancy, adjusting for weather anomalies, and so on) to show better results.


It's not necessarily fraud. It's the natural result of giving an AI system both the objective (reduce costs and show results) and the authority to define the counterfactual (the baseline).


Most organisations don't have external baseline verification. The AI system defines the baseline. The AI system measures against it. The results are reported unchallenged.


This is a classic case of misaligned incentives. You need:


  • Locked baselines (fixed at the start of the carbon budget period, no mid-cycle revisions)

  • External verification (auditors, not the optimising system, validate baselines)

  • Audit trails (every baseline assumption is documented and traceable)


Without these, your reported carbon reductions are only as credible as the system making the measurements.
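
For illustration, here is a minimal sketch in Python of what a locked, auditable baseline record might look like. The names and fields are hypothetical, not taken from any particular system; the idea is simply that the baseline and its assumptions are fixed at the start of the period and made tamper-evident:

# A minimal sketch of a locked, auditable baseline record.
# All names and fields are illustrative, not from any real system.
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: attributes cannot be reassigned after creation
class BaselineRecord:
    building_id: str
    period_start: str    # start of the carbon budget period (ISO date)
    baseline_kwh: float  # consumption baseline, fixed at period start
    assumptions: dict    # e.g. occupancy model, weather normalisation
    verified_by: str     # external auditor, not the optimising system

    def fingerprint(self) -> str:
        """Hash of every field; any later 'revision' changes the hash."""
        payload = json.dumps({
            "building_id": self.building_id,
            "period_start": self.period_start,
            "baseline_kwh": self.baseline_kwh,
            "assumptions": self.assumptions,
            "verified_by": self.verified_by,
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Lock the baseline at the start of the period and file the hash
# with the external auditor.
baseline = BaselineRecord(
    building_id="building-a",
    period_start="2025-01-01",
    baseline_kwh=1_200_000.0,
    assumptions={"occupancy_model": "2024 actuals", "weather": "HDD-normalised"},
    verified_by="external-auditor",
)
locked_hash = baseline.fingerprint()

def verify(record: BaselineRecord, expected_hash: str) -> bool:
    """Re-check at reporting time that the baseline was not revised."""
    return record.fingerprint() == expected_hash

assert verify(baseline, locked_hash)

The hash is the audit trail in miniature: because the baseline assumptions are filed with the auditor at the start of the period, any mid-cycle revision by the optimising system changes the fingerprint and is immediately detectable.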

