Like this? Or the other blueprint?
My heatpump also has STANDBY and COOLING, but no defrost.
This is correct. For now the blueprint ignores the defrost mode anyway, as it belongs to the previous state.
v1.2.7 – The “Load Shifter” Update
This release is specifically tuned for two groups: those optimizing energy by shifting loads into cheap hours, and those with buildings that store heat well (heavy slabs, thick walls).
Track B: Daily Learning
Hourly learning (Track A) struggles in high-thermal-mass buildings — energy pumped in at 02:00 will be attributed to the wrong temp-key. Track B replaces this with a single nightly update at midnight, evaluating the full 24-hour heat balance instead of reacting to hourly noise.
Thermal Mass Correction
If you’re intentionally overheating the structure when electricity is cheap, you can now add an indoor temperature sensor. The model uses the midnight-to-midnight delta to correct for stored energy before updating the learned coefficient. This keeps the model from interpreting pre-charged thermal mass as heat lost to the outdoors.
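To make the correction concrete, here is a minimal sketch of the midnight-to-midnight balance described above. The function name and numbers are illustrative assumptions, not the integration's actual internals:

```python
# Sketch of the thermal-mass correction: subtract energy stored in the
# structure (indoor delta x mass coefficient) before attributing the rest
# to outdoor losses. Names and values here are illustrative assumptions.

def corrected_daily_loss(energy_in_kwh: float,
                         indoor_delta_c: float,
                         thermal_mass_kwh_per_c: float) -> float:
    """Energy attributed to outdoor losses after removing what was
    stored in (or released from) the building's thermal mass."""
    stored = thermal_mass_kwh_per_c * indoor_delta_c
    return energy_in_kwh - stored

# Pre-charged slab: 60 kWh in, indoor temp rose 1.5 C over the day,
# with an assumed mass coefficient of 1.4 kWh per degree.
loss = corrected_daily_loss(60.0, 1.5, 1.4)
```

Without the correction, the 2.1 kWh parked in the slab would look like extra heat loss and bias the learned coefficient upward.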
Instant Re-learning (retrain_from_history)
After a configuration change — or simply switching to Track B — you can now rebuild the full correlation table from up to 90 days of existing history.
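As a starting point, the call might look like the following; the service name comes from this section's title, but the parameter name is an assumption, so check the integration's service documentation:

```yaml
# Hypothetical call shape - the "days" parameter name is an assumption.
service: heating_analytics.retrain_from_history
data:
  days: 90
```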
Reconfiguration Wizard
Setup is now a structured, three-step wizard.
DHW Precision
Energy consumed during domestic hot water cycles is now excluded at 2-minute granularity. Mid-hour mode switches are handled correctly, preventing DHW activity from polluting the thermal model.
A Note On System Control
Heating Analytics is a feed-forward analysis engine. It provides ground truth about your building’s physics and produces accurate demand forecasts — but it does not write setpoints or trigger your heat pump directly. This is intentional: closing that loop without careful design causes instability, and your comfort preferences belong in your own automations. Simple Home Assistant automations for price-shifting or prioritizing solar-gain periods are well-suited for acting on the forecast data.
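As a rough illustration of closing that loop yourself, a price-shifting automation could be as simple as the sketch below. Every entity ID here is a placeholder; look up your real IDs under Settings → Devices & Services:

```yaml
# Minimal sketch of acting on forecast/price data from your own automation.
# sensor.electricity_price and climate.heat_pump are placeholders.
automation:
  - alias: "Pre-charge during cheap hours"
    trigger:
      - platform: numeric_state
        entity_id: sensor.electricity_price   # placeholder
        below: 0.10
    action:
      - service: climate.set_temperature
        target:
          entity_id: climate.heat_pump        # placeholder
        data:
          temperature: 22.5
```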
Full technical details in the GitHub changelog
Hi,
I changed to track B:
(Floor_m² × 35) / 1000 / avg_COP.
144 × 35 / 1000 / 3.7 = 1.36
Is that what the integration expects?
Hi Robert,
Two good questions — let me take them in turn.
On the slider formula
Yes, that calculation is exactly what the integration expects (which rounds to 1.4 on the slider). However, the thermal mass correction only has an effect if you are actively load-shifting — i.e., intentionally pre-charging the slab or house during cheap hours and then running the pump less during expensive hours. If you’re running a constant setpoint 24/7, the midnight-to-midnight ΔT will be near zero regardless, and the slider value won’t influence the result. In your case, the 1.4 value indicates compensating for 1.4 kWh per degree delta from midnight to midnight.
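For anyone following along, the arithmetic can be checked in two lines:

```python
# Checking the slider formula from the post above:
# (Floor_m2 x 35) / 1000 / avg_COP
floor_m2, avg_cop = 144, 3.7
slider = floor_m2 * 35 / 1000 / avg_cop
print(round(slider, 2))  # 1.36, which the slider rounds to 1.4
```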
On Track B for your setup
If your heat pump spends meaningful time idle (because the slab or hydronic separation holds temperature well between cycles), Track B is likely a better fit than Track A. Hourly learning can misinterpret those idle periods as anomalies, whereas Track B’s single nightly update sees the full 24-hour energy balance and isn’t confused by the phase shift between “energy in” and “temperature response”.
On the diagnostic numbers
The full parameter reference is in the README (Configuration Options and Advanced Sensors sections) and the DESIGN.md covers the underlying model architecture in detail. If there are specific attributes or sensor values you’re seeing that aren’t clear, paste them here and I’ll explain them directly.
Thanks. Long read. I will keep it running and keep an eye on it. We don’t load shift heating, does not make sense with our house. What we do is overheating water and heating buffer with excess solar energy.
The dashboard I had to modify quite a lot; most sensors were missing.
Not sure if it has any practical value, but interesting anyway
Learning about the third biggest energy consumer in the house is always worth it.
Thanks for this wonderful integration.
But I’m seeing a lot of errors on the dashboard. Have the entity names changed?
Hi @MartyBrMartin — thanks for the kind words!
Yes, this is a known pain point, and it’s actually pushing me to rethink the dashboard approach entirely. Rather than maintaining a single ready-made dashboard that breaks whenever sensor names or attributes evolve, I’m moving toward providing building blocks in the tools/ folder that you assemble yourself. More on that in a second.
For your immediate issue: the easiest way to find the correct entity names is to go to Settings → Devices & Services → Heating Analytics and browse the entities listed there — they’ll show the exact IDs as they exist in your installation.
Where things are heading
The tools/ folder now contains a growing set of ready-to-paste Plotly graph cards. Here’s a taste of what’s in there already:
- plotly_today_breakdown_pie.yaml — donut chart of today’s per-unit energy split
- plotly_week_ahead_forecast.yaml — 7-day bar + temperature line forecast
- plotly_heat_demand_curve.yaml — heat demand vs. outdoor temperature, segmented by wind
- heating_forecast_sensor_and_dashboard.yaml — 48-hour hourly forecast with solar contribution

These cards plug directly into standard integration sensors — no fragile pre-built dashboard to maintain.
Also: v1.2.8 is out today, with richer diagnostics on calibrate_wind_thresholds (per-bucket MAE, data quality rating, improvement %) — useful if you’ve been tuning your model.
One ask from me: if you end up building something nice on top of these sensors, I’d love to include it. The attributes on most sensors are quite rich (breakdowns, reliability flags, weekly summaries, per-unit contributions…) and there are endless ways to combine them into useful cards. Dashboard contributions / ideas very welcome — either as a PR to tools/ or just posted here as inspiration.
Thank you so much for the quick reply. I only installed the integration a few days ago and am currently exploring its features.
If I notice anything or have any suggestions, I’ll get in touch right away.
I appreciate being able to communicate directly with the developer. Thanks for your offer.
Best regards,
Martin
Hi, some entities are missing on my system.
For example, “sensor.heating_analytics_temp_actual_today”
There is no entity with “temp” in its name under Heating_Analytics.
Hi again @MartyBrMartin — yes, that confirms it. sensor.heating_analytics_temp_actual_today is a relic from an early, pre-public release of the integration and no longer exists as a standalone entity.
The integration evolved quite a bit during its alpha/beta phase before the official release. As the data model matured, several sensors were merged, renamed, or moved into attributes to keep the entity registry clean. The old pre-built dashboard unfortunately just didn’t keep up with those early architectural changes.
Because of this, I’m moving away from maintaining a single monolithic dashboard. In fact, I will be removing that old dashboard template completely as soon as I finish extracting the remaining useful graphs into individual, modular cards.
For temperature-related data, you’ll now find it exposed as attributes on the Thermal State sensor, which provides richer information than a single entity ever could. The standalone Plotly cards in the tools/ folder show exactly how to pull those attributes into your own dashboards.
Thanks for the clarification. I’ll take a look at the tools folder now.
Regards,
Martin
This release is a big one. Here are the highlights that matter for users.
If your heat pump is controlled by a price optimizer (MPC) that shifts consumption to cheap hours, the standard learning model has a fundamental problem: it sees a massive energy spike at 3 AM and learns “the house loses enormous heat at 3 AM.” The model collapses over time.
Track C solves this. Instead of learning from the electricity meter (which shows when the MPC chose to run), it fetches actual delivered thermal energy from the MPC and reconstructs what the heat pump would have consumed if it ran continuously, matching heat loss in real time.
New in v1.3.0: each hour’s thermal load is now divided by that hour’s actual COP — computed from the MPC’s learned Carnot efficiency, leaving water temperature, and a defrost penalty for cold+humid conditions. Cold hours (low COP) get correctly attributed with higher electrical cost than mild hours (high COP). The 24-hour total is renormalized to match your actual metered consumption, so the daily sum is always anchored to reality.
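The per-hour normalization plus daily renormalization can be sketched as follows. The names are illustrative; the real integration fetches thermal production and COP from the companion heatpump_mpc integration:

```python
# Sketch of the COP-normalized synthetic baseline: each hour's thermal
# load is divided by that hour's COP, then the profile is rescaled so
# the daily sum matches the actual metered consumption.

def synthetic_electrical_profile(thermal_kwh: list[float],
                                 cop: list[float],
                                 metered_total_kwh: float) -> list[float]:
    raw = [q / c for q, c in zip(thermal_kwh, cop)]
    scale = metered_total_kwh / sum(raw)
    return [r * scale for r in raw]

profile = synthetic_electrical_profile(
    thermal_kwh=[2.0, 3.0, 1.0],   # delivered heat per hour
    cop=[2.5, 4.0, 3.0],           # cold hour gets the lowest COP
    metered_total_kwh=2.0,         # actual metered daily total
)
```

Note how the cold, low-COP hour ends up with a higher electrical share than the mild hour even though it delivered less heat, while the sum stays anchored to the meter.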
Track C retains the same hourly resolution as Track A — full heating curve characterization in 2–4 weeks. It requires the companion heatpump_mpc integration (which handles heat pump scheduling and exposes thermal production data) and is enabled in Configure → Advanced Options when Daily Learning Mode is active.
For multi-unit installations (heat pump + panel heaters), each unit’s learning strategy is auto-assigned: the MPC unit uses the synthetic baseline, non-MPC units contribute actual meter data per hour. No manual configuration needed.
The cloud coverage model has been upgraded from a linear interpolation to a Kasten-derived power law:
Old: `cloud_factor = 1 - cloud/100`
New: `cloud_factor = 1 - 0.75 × (cloud/100)^1.5`
The difference matters most at partial cloud coverage (30–70%), where the old linear model significantly overstated the solar reduction. Thin or partial clouds transmit more energy than a simple linear model predicts. Your solar coefficients will absorb the change automatically over a few sunny days — no action needed.
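A quick numeric comparison of the two formulas from this post makes the partial-cloud gap visible:

```python
# Old linear model vs. the v1.3.0 power law, at a few coverage values.
def old_factor(cloud_pct: float) -> float:
    return 1 - cloud_pct / 100

def new_factor(cloud_pct: float) -> float:
    return 1 - 0.75 * (cloud_pct / 100) ** 1.5

for c in (30, 50, 70):
    print(c, round(old_factor(c), 3), round(new_factor(c), 3))
```

At 50% cloud the linear model keeps only half the clear-sky energy, while the power law keeps about 73%.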
Important: your weather entity matters. If your weather integration doesn’t provide a numeric cloud_coverage attribute (0–100%), the solar model falls back to mapping condition text like “sunny” → 10%, “cloudy” → 85%. This is far too coarse for reliable solar learning and was the primary cause of model quality issues for several beta testers.
You can check what your weather entity provides in Developer Tools → States — look for cloud_coverage in the attributes.
| Provider | cloud_coverage | humidity | hourly forecast | Works out of box? |
|---|---|---|---|---|
| Met.no | | | | |
| Open-Meteo (standard) | | | | Solar model degraded |
| Open-Meteo (custom fork) | | | | |
v1.3.0 now warns you in the log when input data is missing or degraded:
- cloud_coverage missing — solar model running on coarse condition-text mapping
- sun.sun entity unavailable — entire solar model producing zero

These warnings appear once per hour in the HA log. If you’ve been wondering why your model isn’t converging well, check for these messages first.
diagnose_model

A new service for troubleshooting model quality:
service: heating_analytics.diagnose_model
data:
days: 30
Returns a diagnostic report you can inspect directly in Developer Tools → Services.

New isolate_sensor option for MPC: get_forecast can now return only the demand that a specific unit must cover, preventing double-counting in the MPC solver.

Full changelog: CHANGELOG.md
No breaking changes. No migration required. 343 tests pass.
Also released: Heat Pump MPC v0.2.1 — adds COP-only mode for A2A heat pumps, ground-source heat pump support, and Home Assistant blueprints for common automation patterns.
Found something confusing or unexpected? Open an issue — even “I don’t understand what X means” is valuable feedback that helps improve the documentation.
Heating Analytics v1.3.1 — Solar model corrections
Quick follow-up to yesterday’s v1.3.0. If you’re on 1.3.0, update when convenient — no breaking changes.
Cloud coverage exponent corrected. The 1.3.0 release described the cloud model as “Kasten-derived” but the exponent was wrong (1.5 instead of the published 3.4 from Kasten & Czeplak 1980). At 50% cloud this over-attenuated solar by 20 percentage points, forcing per-unit coefficients to compensate for the cloud model instead of representing your windows.
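The 20-percentage-point figure can be verified directly from the two exponents:

```python
# Shipped 1.3.0 exponent (1.5) vs. the Kasten & Czeplak value (3.4),
# evaluated at 50% cloud coverage.
def cloud_factor(cloud_pct: float, exponent: float) -> float:
    return 1 - 0.75 * (cloud_pct / 100) ** exponent

shipped = cloud_factor(50, 1.5)    # about 0.735
kasten = cloud_factor(50, 3.4)     # about 0.929
gap_pp = (kasten - shipped) * 100  # about 19.4 percentage points
```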
Solar coefficient learning fixes. LMS replaced with NLMS (Normalized LMS) — units with large south-facing windows no longer oscillate. Screen correction no longer double-counted in predictions. Screen position is now averaged over the hour instead of sampled once at the boundary.
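For readers unfamiliar with why NLMS helps here, this generic illustration (not the integration's actual code) shows the difference: the plain LMS step grows with the square of the input, so a unit with strong solar input can overshoot and oscillate, while NLMS divides the step by the input power:

```python
# Plain LMS vs. Normalized LMS for a single coefficient w.
# Generic textbook forms; names and values are illustrative.

def lms_step(w: float, x: float, target: float, mu: float = 0.1) -> float:
    err = target - w * x
    return w + mu * err * x                    # step scales with x**2

def nlms_step(w: float, x: float, target: float, mu: float = 0.1,
              eps: float = 1e-6) -> float:
    err = target - w * x
    return w + mu * err * x / (eps + x * x)    # step independent of input scale
```

With a large input (say x = 10 and a true coefficient of 0.5), repeated LMS steps at this learning rate diverge, while NLMS converges smoothly to 0.5.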
Startup warnings suppressed. The six data quality warnings from 1.3.0 fired on every restart before HA entities were available. Now suppressed during startup — same warnings, just no false alarms.
New service: diagnose_solar. Per-unit coefficient health, stability windows, battery decay calibration, temporal bias analysis. Run it after a week of data to validate your solar model:
service: heating_analytics.diagnose_solar
data:
days_back: 30
Battery decay default changed from 0.60 to 0.75 (half-life 2.4h). Use diagnose_solar with apply_battery_decay: true to auto-calibrate for your building.
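The quoted half-life follows from the decay factor, assuming it is applied once per hour:

```python
import math

# An hourly decay factor of 0.75 halves the stored value after
# log(0.5) / log(0.75) hours.
half_life_h = math.log(0.5) / math.log(0.75)
print(round(half_life_h, 1))  # 2.4
```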
369 tests pass. Full changelog in CHANGELOG.md.