Butlr provides deployable, heat-based sensing and AI analytics designed for research labs, facilities managers, and educators. This guide explains how to evaluate thermal occupancy sensors in a controlled lab, run virtual simulations, benchmark performance, and plan real-world pilots while preserving privacy and integration flexibility.
Why heat-based sensing matters
Heat-based (thermal) sensing detects the presence and movement of people by measuring differences in infrared heat signatures rather than capturing visual images. That approach delivers several important benefits for institutions and educators.
- Privacy-first: No images, faces, or other personally identifiable information are collected; the data captures thermal signatures and occupancy counts rather than identity.
- Robust indoors: Effective in low-light and cluttered environments where camera algorithms struggle. Works in labs, classrooms, data centers, and shared spaces.
- Low maintenance: Passive sensing with long-lived hardware options reduces calibration and upkeep compared with complex camera systems.
- Complementary: Pairs well with other sensor types for multimodal research (CO2, badge systems, environmental sensors) to validate occupancy models.
Limitations to be aware of
- Thermal diffusion: Heat spreads and can blur small, tightly spaced bodies at longer distances, requiring careful mounting and sensor spacing.
- Environmental interference: HVAC flows, windows, and temperature gradients affect raw readings; algorithms must account for these factors.
This section outlines a practical test plan tailored for academic labs, facilities research groups, or sensor evaluation programs.
Equipment checklist
- A set of Butlr thermal sensors (wireless and/or wired variants) for comparative testing.
- Mounting hardware and adjustable stands to test ceiling and wall placements.
- Data collection gateway or local server for synchronized logging (a minimal log-record format is sketched after this list).
- Ground-truth tools: manual headcounts, badge swipes, or temporary camera feeds (used only for verification, then removed or anonymized).
- Environmental monitors: temperature, humidity, and airflow sensors to log confounders.
- Analysis workstation with access to Butlr analytics or local analytics tools.
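A single timestamped record per observation window keeps sensor counts, ground truth, and environmental readings aligned for later analysis. The sketch below shows one possible record layout in Python; the field names and the `EvaluationRecord` / `append_record` helpers are illustrative assumptions, not part of Butlr's tooling.

```python
import csv
import os
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EvaluationRecord:
    """One synchronized observation window (illustrative schema)."""
    timestamp: datetime        # UTC time of the observation window
    sensor_id: str             # which sensor produced the count
    reported_count: int        # occupancy count reported by the sensor
    ground_truth_count: int    # manual headcount or badge-derived count
    room_temp_c: float         # ambient temperature from the environmental monitor
    hvac_state: str            # e.g. "heating", "cooling", "off"

def append_record(path: str, record: EvaluationRecord) -> None:
    """Append one record to a CSV log, writing the header if the file is new."""
    row = asdict(record)
    row["timestamp"] = record.timestamp.isoformat()
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Example: one logged observation during a controlled scenario
append_record("evaluation_log.csv", EvaluationRecord(
    timestamp=datetime.now(timezone.utc),
    sensor_id="ceiling-01",
    reported_count=3,
    ground_truth_count=3,
    room_temp_c=22.5,
    hvac_state="cooling",
))
```

Keeping environmental confounders in the same row makes it straightforward to slice results by HVAC state or temperature later.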
Key metrics to measure
- Occupancy detection accuracy: true positives, false positives, and false negatives against ground truth (scored in the sketch after this list).
- Latency: time from event (entry/exit) to detection and to aggregated count.
- Granularity: ability to distinguish simultaneous occupants in small areas.
- Robustness: performance under varied lighting, temperature shifts, and HVAC cycles.
- Power and connectivity reliability: battery life, wireless drop rates, and wired uptime.
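Once logs in that format exist, the metrics above reduce to a short scoring pass. The sketch below treats each interval as occupied when the ground-truth count is nonzero and detected when the reported count is nonzero; it is a minimal example against the hypothetical log schema, not a Butlr analytics feature.

```python
from statistics import median

def detection_metrics(intervals):
    """Score presence detection from (reported_count, ground_truth_count) pairs,
    one pair per logging interval."""
    tp = fp = fn = tn = exact = 0
    for reported, truth in intervals:
        detected, occupied = reported > 0, truth > 0
        if detected and occupied:
            tp += 1
        elif detected and not occupied:
            fp += 1
        elif occupied:
            fn += 1
        else:
            tn += 1
        exact += int(reported == truth)
    total = tp + fp + fn + tn
    return {
        "detection_accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "miss_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        "exact_count_rate": exact / total,   # granularity proxy: reported == truth
    }

def latency_stats(event_times, detection_times):
    """Latency from each scripted entry/exit to its detection (paired in order)."""
    deltas = [(d - e).total_seconds() for e, d in zip(event_times, detection_times)]
    return {"median_s": median(deltas), "max_s": max(deltas)}

# Example: four intervals -> 75% detection accuracy, one missed occupant
print(detection_metrics([(1, 1), (0, 0), (2, 1), (0, 1)]))
```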
Test protocol (recommended)
- Baseline calibration: record 24 hours of empty-room data to characterize ambient thermal patterns (summarized in the sketch after this list).
- Controlled scenarios: run scripted entries and exits with known counts at varying speeds and spacing.
- Busy scenarios: simulate typical peak traffic periods to evaluate congestion handling.
- Environmental variation: change HVAC settings and introduce localized heat sources to test resiliency.
- Repetition and timing: repeat each scenario multiple times and log timestamps for statistical validity (a quick way to size the number of repetitions follows the timeline below).
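For the baseline step, an empty-room recording can be reduced to per-sensor ambient statistics that later scenarios are compared against. The sketch below reads the hypothetical CSV log described earlier; the column names and the 2% noise threshold are assumptions.

```python
import csv
from collections import defaultdict
from statistics import mean, pstdev

def ambient_baseline(log_path):
    """Summarize an empty-room log into per-sensor ambient statistics."""
    temps = defaultdict(list)
    ghost = defaultdict(int)     # intervals with a nonzero count while the room was empty
    samples = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            sensor = row["sensor_id"]
            temps[sensor].append(float(row["room_temp_c"]))
            samples[sensor] += 1
            ghost[sensor] += int(int(row["reported_count"]) > 0)
    return {
        sensor: {
            "mean_temp_c": mean(values),
            "temp_stdev_c": pstdev(values),
            "empty_room_detect_rate": ghost[sensor] / samples[sensor],
        }
        for sensor, values in temps.items()
    }

# Example: flag sensors that report occupancy in more than 2% of empty-room intervals
baseline = ambient_baseline("evaluation_log.csv")
noisy = [s for s, stats in baseline.items() if stats["empty_room_detect_rate"] > 0.02]
```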
Timeline and sample scope
- Small pilot: 1–3 sensors, 2 weeks of mixed testing to tune parameters.
- Medium evaluation: 5–15 sensors, 4–6 weeks covering weekdays and controlled night testing.
- Comprehensive lab trial: campus-scale pilot for 8–12 weeks including seasonal environmental variation.
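A quick way to size the number of repeated scripted trials is the normal approximation to a binomial proportion: the confidence interval on measured accuracy narrows with the square root of the trial count. The sketch below is a rough planning aid; the expected accuracy and target margin are assumptions you would set per scenario.

```python
import math

def trials_needed(expected_accuracy, margin, z=1.96):
    """Approximate trial count so a 95% confidence interval on measured accuracy
    has half-width no larger than `margin` (normal approximation)."""
    p = expected_accuracy
    return math.ceil(z * z * p * (1.0 - p) / (margin * margin))

# Example: confirming ~90% single-occupant accuracy to within +/- 5 percentage points
print(trials_needed(0.90, 0.05))   # 139 scripted entry/exit trials
```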
When comparing sensors, use standardized inputs and clear acceptance criteria. Typical performance expectations for validated heat-based systems include high detection accuracy for single- and small-group occupancy, sub-minute latency for aggregate counts, and graceful degradation under environmental stress.
Factors that influence performance
- Mounting height and angle: ceiling mounts give a stable overhead view; wall mounts add directional sensitivity.
- Field of view and sensor density: larger spaces need more sensors or closer spacing to avoid blind spots (a rough coverage estimate follows this list).
- Algorithm tuning: models benefit from localized calibration and adaptive thresholds to account for persistent heat sources.
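These factors interact in a predictable way: with simple pinhole geometry, a ceiling-mounted sensor with a symmetric field of view covers a square floor footprint whose side grows linearly with mounting height. The sketch below estimates footprint and sensor count for a rectangular room; the FOV angle and overlap factor are assumptions to replace with the values for your sensor model.

```python
import math

def footprint_side_m(mount_height_m, fov_degrees):
    """Side length of the square floor footprint seen by a ceiling-mounted sensor."""
    return 2.0 * mount_height_m * math.tan(math.radians(fov_degrees / 2.0))

def sensors_for_room(room_w_m, room_l_m, mount_height_m, fov_degrees, overlap=0.85):
    """Rough sensor count for a rectangular room, shrinking each footprint by
    `overlap` so adjacent footprints overlap slightly and leave no blind spots."""
    side = footprint_side_m(mount_height_m, fov_degrees) * overlap
    return math.ceil(room_w_m / side) * math.ceil(room_l_m / side)

# Example: a 12 m x 8 m classroom with a 2.7 m ceiling and an assumed 90-degree FOV
print(footprint_side_m(2.7, 90))         # ~5.4 m on a side
print(sensors_for_room(12, 8, 2.7, 90))  # 3 x 2 grid -> 6 sensors
```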
Suggested acceptance criteria
- Detection accuracy above 85–90% for single-occupant scenarios after calibration.
- False positive rate below 10% in typical office or classroom settings.
- End-to-end reporting latency under 60 seconds for aggregated occupancy metrics (these thresholds are encoded in the closing sketch below).
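To make sign-off repeatable, the thresholds can be encoded directly in the analysis step. A minimal sketch, assuming the measured values come from scoring code like the earlier examples; the numbers simply mirror the criteria above, with 85% used as the hard floor for accuracy.

```python
# (direction, threshold) pairs mirroring the suggested criteria above
ACCEPTANCE_CRITERIA = {
    "detection_accuracy": ("min", 0.85),   # single-occupant, post-calibration (85-90% suggested)
    "false_positive_rate": ("max", 0.10),  # typical office or classroom settings
    "median_latency_s": ("max", 60.0),     # aggregated occupancy reporting
}

def evaluate_acceptance(measured):
    """Compare measured metrics against the acceptance thresholds above."""
    report = {}
    for metric, (direction, threshold) in ACCEPTANCE_CRITERIA.items():
        value = measured[metric]
        passed = value >= threshold if direction == "min" else value <= threshold
        report[metric] = {"value": value, "threshold": threshold, "passed": passed}
    return report

# Example: metrics from a two-week small pilot
report = evaluate_acceptance({
    "detection_accuracy": 0.93,
    "false_positive_rate": 0.06,
    "median_latency_s": 42.0,
})
print(all(r["passed"] for r in report.values()))   # True when every criterion is met
```

Pinning the criteria in code keeps small pilots and larger campus trials comparable when the analysis is rerun.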