Spaxiom Technical Series - Part 12

Decarbonization and Resource Optimization

Energy-Comfort Tradeoffs and Pareto Frontiers for Intelligent Building Management

Joe Scanlin

November 2025

About This Section

This section demonstrates Spaxiom's application to energy optimization and decarbonization in buildings and campuses. Modern facilities face the dual challenge of reducing energy consumption while maintaining occupant comfort—a classic multi-objective optimization problem.

You'll see how Spaxiom's typed conditions (Section 2.3) and INTENT patterns (Section 2.4) enable intelligent building management by treating energy as a first-class signal. We show how RL agents can optimize for different energy-comfort tradeoffs, tracing a Pareto frontier whose policies reduce both energy consumption and comfort violations relative to a static baseline. The section includes complete code examples and a visualization of the energy-comfort tradeoff space.

6. Use Case: Decarbonization and Resource Optimization

6.1 Energy as a first-class signal

Modern buildings, data centers, and campuses are major energy consumers, and AI and IoT are increasingly used to optimize the building systems, HVAC chief among them, that drive this consumption.

Spaxiom treats these control surfaces as actuated sensors: objects that conditions can observe and that an agent can command, as sketched below.
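
As a rough illustration only (the HVACSetpoint class below is a hypothetical stand-in, not confirmed Spaxiom API), an actuated sensor is something that can be read inside a condition and commanded by an agent through the same object:

from spaxiom.units import degC

class HVACSetpoint:
    """Hypothetical actuated sensor: readable like a sensor, writable like an actuator."""

    def __init__(self, zone: str):
        self.zone = zone
        self._setpoint = 22 * degC      # arbitrary initial setpoint

    def read(self):
        # Sensor-style interface: the currently commanded setpoint.
        return self._setpoint

    def set(self, value):
        # Actuator-style interface: command a new setpoint for the zone.
        self._setpoint = value

floor5_hvac = HVACSetpoint("floor5")
floor5_hvac.set(23 * degC)              # an agent commands the control surface...
current = floor5_hvac.read()            # ...and conditions can observe the same object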

Conditions can express tradeoffs between comfort and energy:

from spaxiom import Condition
from spaxiom.sensors import PowerMeterSensor, ZoneTempSensor  # assumed import path for the example sensors
from spaxiom.units import kW, degC

power = PowerMeterSensor("building_power")   # whole-building electrical load
temp  = ZoneTempSensor("floor5_temp")        # zone air temperature on floor 5

# Energy-side and comfort-side conditions over the live readings
high_load  = Condition(lambda: power.read() > 500 * kW)
too_hot    = Condition(lambda: temp.read() > 26 * degC)
too_cold   = Condition(lambda: temp.read() < 20 * degC)
discomfort = too_hot | too_cold              # any comfort violation
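
For example, these signals can be sampled over a control horizon to accumulate the comfort-violation hours and energy use that the reward in 6.2 trades off. The loop below is a minimal sketch, not Spaxiom API: the sampling period and the .to("kW") conversion are illustrative assumptions.

# Minimal sketch: tally comfort-violation hours and energy use by sampling
# the same signals the conditions above observe.
SAMPLE_PERIOD_S = 60.0                      # check once per minute (illustrative)

discomfort_hours = 0.0
energy_mwh = 0.0

def step_metrics():
    """Update the running comfort and energy tallies for one sample period."""
    global discomfort_hours, energy_mwh
    temp_c = float(temp.read().to("degC").value)
    if temp_c > 26.0 or temp_c < 20.0:      # same thresholds as too_hot | too_cold
        discomfort_hours += SAMPLE_PERIOD_S / 3600.0
    power_kw = float(power.read().to("kW").value)
    energy_mwh += power_kw * (SAMPLE_PERIOD_S / 3600.0) / 1000.0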

6.2 Reward shaping and Pareto frontiers

We define a simple reward function over a control horizon:

R = α · E_saved − β · C_discomfort

where E_saved is the energy saved relative to the baseline schedule over the horizon, C_discomfort is the accumulated time the discomfort condition holds (comfort-violation hours), and α, β ≥ 0 weight energy savings against comfort.

An RL or planning agent operating on top of Spaxiom can optimize for different (α, β) settings to trace a Pareto frontier between energy and comfort.
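
A rough sketch of how this sweep might be wired up follows. The train_policy helper and the policy's violation_hours / energy_mwh attributes are hypothetical stand-ins for whatever agent and evaluation loop sits on top of Spaxiom; the reward is the R defined above, and the baseline figure comes from Figure 3.

BASELINE_ENERGY_MWH = 1200.0   # static-schedule baseline from Figure 3

def reward(energy_mwh: float, discomfort_hours: float,
           alpha: float, beta: float = 1.0) -> float:
    """R = alpha * E_saved - beta * C_discomfort over the control horizon."""
    e_saved = BASELINE_ENERGY_MWH - energy_mwh
    return alpha * e_saved - beta * discomfort_hours

# One policy per weighting; plotting each policy's (violation hours, energy)
# point yields the frontier in Figure 3. train_policy is a placeholder for
# the RL or planning loop, not a Spaxiom call.
frontier = []
for alpha in (1.0, 1.5, 2.0, 3.0):
    policy = train_policy(reward_fn=lambda e, c, a=alpha: reward(e, c, alpha=a))
    frontier.append((policy.violation_hours, policy.energy_mwh))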

[Figure: Energy-Comfort Pareto frontier. x-axis: Comfort Violations (hours/year); y-axis: Annual Energy (MWh). Points: Baseline, static (450 hrs, 1200 MWh); Spaxiom RL policies at α=1.0 (340 hrs, 950 MWh), α=1.5 (220 hrs, 750 MWh), α=2.0 (150 hrs, 600 MWh), α=3.0 (100 hrs, 520 MWh).]

Figure 3: Energy-Comfort Pareto frontier for building HVAC optimization. The baseline static schedule (red) operates at 450 hours/year of comfort violations with 1200 MWh annual energy consumption. Spaxiom-based RL policies (green) trace a Pareto frontier by varying the reward weight α (the energy vs. comfort tradeoff). Each policy strictly dominates the baseline, using less energy while also incurring fewer comfort-violation hours. The α=2.0 policy achieves a 67% reduction in comfort violations (150 hrs) alongside 50% energy savings (600 MWh), demonstrating that Spaxiom's event-driven observation space enables simultaneous optimization of conflicting objectives.

Critically, the observation space for the agent is not raw sensor streams, but Spaxiom events and quantities:

# `field` (an occupancy field), `hvac`, and `current_time_of_day_band()` come from
# the surrounding Spaxiom deployment; `temp` is the zone sensor defined above.
obs = {
    "occupancy_band": field.percent() // 10,        # band index 0–10 (10% steps)
    "temp": float(temp.read().to("degC").value),    # zone temperature in degC
    "time_of_day": current_time_of_day_band(),      # coarse time-of-day bucket
    "hvac_state": hvac.current_state(),             # current HVAC operating mode
}

This keeps input dimensionality and tokenization cost low, while preserving enough signal for effective control.
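
For instance, because every field is already banded or categorical, the whole observation collapses to a short discrete key that is cheap to tokenize or to index in a tabular policy. The encode_obs helper below is illustrative, not Spaxiom API.

def encode_obs(obs: dict) -> tuple:
    """Collapse the banded observation into a small hashable key (illustrative)."""
    return (
        int(obs["occupancy_band"]),     # ~11 occupancy bands
        int(round(obs["temp"])),        # whole degrees C is enough granularity here
        obs["time_of_day"],             # coarse time-of-day bucket
        obs["hvac_state"],              # current HVAC mode
    )

# Example: a simple tabular policy can index directly on the key,
# and the same tuple serializes to only a handful of tokens.
policy_table: dict[tuple, str] = {}
key = encode_obs(obs)
action = policy_table.get(key, "hold")  # default to holding the current setpoint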