Sensor Abstraction, Normalization, and Temporal Alignment
Joe Scanlin
November 2025
This section explains how Myome integrates data from dozens of heterogeneous health sensors and testing services. You'll learn about the sensor abstraction layer that provides a unified interface, dynamic calibration techniques using Kalman filtering, and multi-resolution time-series alignment for cross-domain correlation analysis.
The proliferation of consumer health devices creates integration challenges—each vendor provides proprietary APIs, data formats, and measurement protocols. Myome addresses this through a sensor abstraction layer that decouples data sources from analysis logic.
Each sensor type (heart rate monitor, glucose meter, environmental sensor) implements a common interface:
interface HealthSensor {
  // Unique identifier for this sensor instance
  id: string;
  // Sensor type (heart_rate, glucose, sleep, etc.)
  type: SensorType;
  // Manufacturer and model information
  metadata: SensorMetadata;

  // Initialize connection to device/API
  connect(): Promise<void>;
  // Stream real-time measurements
  streamData(): AsyncIterableIterator<Measurement>;
  // Retrieve historical data for date range
  getHistorical(start: Date, end: Date): Promise<Measurement[]>;
  // Get sensor calibration parameters
  getCalibration(): CalibrationParams;
  // Update calibration (for devices requiring periodic calibration)
  setCalibration(params: CalibrationParams): Promise<void>;
  // Disconnect and clean up resources
  disconnect(): Promise<void>;
}

interface Measurement {
  timestamp: Date;
  value: number;
  unit: string;
  confidence: number;             // 0-1, measurement reliability
  metadata?: Record<string, any>; // Device-specific annotations
}
This abstraction enables the system to treat a $300 continuous glucose monitor identically to a $3000 medical-grade device—both produce timestamped glucose measurements, differing only in accuracy (reflected in the confidence field) and sampling frequency.
Adapter implementations handle vendor-specific quirks:
class LevelsHealthCGMAdapter implements HealthSensor {
  type = SensorType.GLUCOSE;
  private api: LevelsHealthAPI;

  // connect(), getHistorical(), calibration methods, and disconnect()
  // are omitted here for brevity

  async *streamData(): AsyncIterableIterator<Measurement> {
    // Levels provides 5-minute glucose readings
    while (true) {
      const reading = await this.api.getLatestGlucose();
      yield {
        timestamp: reading.time,
        value: reading.mgDl,
        unit: 'mg/dL',
        confidence: this.assessConfidence(reading),
        metadata: {
          sensor_age_days: reading.sensorAge,
          temperature: reading.temperature
        }
      };
      await sleep(5 * 60 * 1000); // Poll every 5 minutes
    }
  }

  private assessConfidence(reading: LevelsReading): number {
    // Sensor accuracy degrades over the 14-day wear period
    const ageFactor = 1.0 - (reading.sensorAge / 14) * 0.1;
    // Temperature extremes reduce accuracy
    const tempFactor = Math.abs(reading.temperature - 37) < 2 ? 1.0 : 0.9;
    return ageFactor * tempFactor;
  }
}
Consumer health devices often exhibit systematic biases compared to clinical gold standards. For example, wrist-worn heart rate monitors can underestimate peak heart rate during exercise by 10-15 bpm compared to chest strap electrocardiography. Continuous glucose monitors show mean absolute relative difference (MARD) of 8-12% versus venous blood draws.
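For reference, MARD is simply the mean of \(|{\text{sensor}} - {\text{reference}}| / {\text{reference}}\) across paired readings. A minimal sketch, with invented sample values:

```python
import numpy as np

def mard(sensor: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute relative difference, in percent."""
    return float(np.mean(np.abs(sensor - reference) / reference) * 100)

# Paired CGM and venous readings in mg/dL (illustrative values only)
cgm = np.array([98.0, 142.0, 110.0, 87.0])
venous = np.array([105.0, 150.0, 100.0, 90.0])
cgm_mard = mard(cgm, venous)
```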
Myome implements dynamic calibration to correct these biases. The corrected value follows an affine model:

\[ x_{\text{cal}}(t) = \alpha \cdot \big( x_{\text{raw}}(t) - \gamma \big) + \beta \]

Where calibration parameters \(\alpha\) (scaling), \(\beta\) (offset), and \(\gamma\) (baseline) are determined through minimizing squared error against paired reference measurements:

\[ (\hat{\alpha}, \hat{\beta}, \hat{\gamma}) = \arg\min_{\alpha, \beta, \gamma} \sum_{i} \big( x_{\text{ref}}(t_i) - x_{\text{cal}}(t_i) \big)^2 \]
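As a sketch of this batch fit, the scaling and offset can be recovered with an ordinary least-squares regression against paired reference measurements. The readings and the baseline value `gamma` below are invented for illustration, with the baseline treated as known from the sensor's factory zero-point:

```python
import numpy as np

# Paired raw sensor readings and reference measurements (illustrative)
raw = np.array([102.0, 130.0, 155.0, 178.0, 210.0])
ref = np.array([100.0, 128.0, 151.0, 175.0, 205.0])
gamma = 2.0  # sensor zero-point baseline (assumed known here)

# Least-squares fit of: reference = alpha * (raw - gamma) + beta
alpha, beta = np.polyfit(raw - gamma, ref, 1)

def calibrate(x: float) -> float:
    """Apply the fitted affine calibration to a raw reading."""
    return alpha * (x - gamma) + beta
```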
For continuous glucose monitoring, a more sophisticated calibration accounts for lag between interstitial and blood glucose. Treating the interstitial signal as a first-order response to blood glucose, the blood value is recovered by adding back the lag-scaled derivative:

\[ \hat{G}_{\text{blood}}(t) = \alpha \left( G_{\text{sensor}}(t) + \tau \, \frac{dG_{\text{sensor}}}{dt} \right) + \beta \]

Where \(\tau\) is the physiological lag (typically 5-15 minutes) and \(\alpha, \beta\) are calibrated against fingerstick measurements.
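One way to realize this correction numerically is to estimate the derivative from the 5-minute sample grid. The function below is an illustrative sketch, not the production implementation:

```python
import numpy as np

def lag_corrected_glucose(sensor: np.ndarray, dt_min: float, tau_min: float,
                          alpha: float = 1.0, beta: float = 0.0) -> np.ndarray:
    """Estimate blood glucose from interstitial readings by adding back
    the lag-scaled derivative, then applying the affine calibration."""
    dG = np.gradient(sensor, dt_min)  # mg/dL per minute
    return alpha * (sensor + tau_min * dG) + beta
```

For a linearly rising blood glucose, the interstitial signal settles to a constant offset of \(\tau\) times the slope, which this correction removes exactly; for noisy real data the derivative estimate would need smoothing first.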
The calibration process is automated through a Kalman filter that continuously refines parameters as new reference measurements become available:
import numpy as np

class KalmanCalibrator:
    """Adaptive calibration using Kalman filtering"""

    def __init__(self, initial_alpha=1.0, initial_beta=0.0):
        # State: [alpha, beta]
        self.state = np.array([initial_alpha, initial_beta])
        # State covariance (uncertainty in calibration params)
        self.P = np.eye(2) * 0.1
        # Process noise (how much params can drift over time)
        self.Q = np.eye(2) * 0.001
        # Measurement noise (uncertainty in reference measurements)
        self.R = 0.05

    def predict(self):
        """Predict step: params may drift slightly between updates"""
        self.P = self.P + self.Q

    def update(self, sensor_value, reference_value):
        """Update calibration when a reference measurement is available"""
        # Measurement model: reference = alpha * sensor + beta
        H = np.array([sensor_value, 1.0])
        # Innovation covariance and Kalman gain
        S = H @ self.P @ H + self.R
        K = self.P @ H / S
        # Update state with the innovation (measurement residual)
        innovation = reference_value - H @ self.state
        self.state = self.state + K * innovation
        # Update covariance
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
        return self.state  # Return updated [alpha, beta]

    def calibrate(self, raw_value):
        """Apply current calibration to a raw sensor reading"""
        alpha, beta = self.state
        return alpha * raw_value + beta
Different sensors operate on different schedules: continuous glucose monitors sample every 5 minutes, heart rate variability is computed from 5-minute windows, sleep stages are scored in 30-second epochs, and blood tests occur quarterly. Analyzing cross-domain correlations requires temporal alignment.
Myome therefore maintains each metric in a multi-resolution time-series representation, keeping raw samples alongside aggregates at coarser, regular resolutions.
When computing correlations between metrics at different resolutions, we use time-matched aggregation:
Input: Time series \(A\) at resolution \(r_A\), time series \(B\) at resolution \(r_B\)
Output: Correlation coefficient \(\rho\) with time alignment
1. Determine the common resolution \(r = \max(r_A, r_B)\), i.e. the coarser of the two
2. Resample A to resolution r:
\(A' = \text{Aggregate}(A, \text{resolution}=r, \text{method}=\text{mean})\)
3. Resample B to resolution r:
\(B' = \text{Aggregate}(B, \text{resolution}=r, \text{method}=\text{mean})\)
4. Align timestamps:
\(\text{timestamps} = \text{Intersect}(\text{timestamps}(A'), \text{timestamps}(B'))\)
5. Compute correlation:
\(\rho = \text{Correlation}(A'[\text{timestamps}], B'[\text{timestamps}])\)
6. Return \(\rho\) with confidence interval based on sample size
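The procedure above can be sketched in Python, with simple timestamp bucketing standing in for the Aggregate step (all names are illustrative, and resolutions are expressed as bucket widths in seconds):

```python
import numpy as np

def aggregate(series: dict[float, float], resolution: float) -> dict[float, float]:
    """Bucket (timestamp -> value) samples to the given resolution by mean."""
    buckets: dict[float, list[float]] = {}
    for t, v in series.items():
        buckets.setdefault(t // resolution * resolution, []).append(v)
    return {t: float(np.mean(vs)) for t, vs in buckets.items()}

def time_matched_correlation(a: dict[float, float], res_a: float,
                             b: dict[float, float], res_b: float) -> float:
    """Resample both series to the coarser resolution, intersect their
    timestamps, and compute the Pearson correlation."""
    r = max(res_a, res_b)                      # coarser of the two
    a_r, b_r = aggregate(a, r), aggregate(b, r)
    common = sorted(a_r.keys() & b_r.keys())   # aligned timestamps
    va = np.array([a_r[t] for t in common])
    vb = np.array([b_r[t] for t in common])
    return float(np.corrcoef(va, vb)[0, 1])
```

The confidence-interval step is omitted here; in practice it can be derived from the Fisher transform of \(\rho\) together with the number of aligned samples.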