Hi all,
I use a utility meter sensor to track heating energy use per day, defined in yaml like so:
```yaml
utility_meter:
  energy:
    unique_id: heizstrom_verbrauch_letzter_tag
    name: "Heizung Verbrauch 24h"
    source: sensor.heizstrom_positive_active_energy_total
    cycle: daily
```
It’s based on a UI-configured EDL21 sensor and usually works. However, in the stats view for the last couple of days I see significant spikes: the value in the states table is off by exactly a factor of 1000, and the next value from the sensor is OK again.
This is one example from the states table for that sensor:
As this has happened several times during the last ~10 days and it skews the stats quite a lot, I’d like to fix it AND keep it from happening again.
First, fixing:
There are two options. One would be to remove those entries from states and reconnect the states before and after, so the bad value is skipped. Is this correct from a data-model point of view? Do I have to update other tables as well, and if so, how? (Side note: these bad values are not (yet?) part of the “outliers” in Dev Tools → Statistics.)
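Not the author's procedure, just a sketch of what "remove and reconnect" could look like. It assumes the recorder's `states` table links each row to its predecessor via `old_state_id` (the real schema in `home-assistant_v2.db` has more columns, e.g. `metadata_id` and the timestamp fields); the demo uses a minimal in-memory stand-in instead of the live database. Always work on a copy of the DB with Home Assistant stopped.

```python
import sqlite3

# Minimal stand-in for the recorder's `states` table (assumed columns:
# state_id, state, old_state_id). Against the real DB you would open
# home-assistant_v2.db instead of ":memory:".
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE states (state_id INTEGER PRIMARY KEY, state TEXT, old_state_id INTEGER)"
)
# Row 2 is the x1000 spike between two plausible readings.
conn.executemany(
    "INSERT INTO states VALUES (?, ?, ?)",
    [(1, "12.5", None), (2, "12500.0", 1), (3, "12.7", 2)],
)

bad_id = 2  # the spike's state_id, found by inspecting the table first

# Point the spike's successor at the spike's predecessor, then delete the
# spike, so the old_state_id chain stays intact.
prev_id = conn.execute(
    "SELECT old_state_id FROM states WHERE state_id = ?", (bad_id,)
).fetchone()[0]
conn.execute(
    "UPDATE states SET old_state_id = ? WHERE old_state_id = ?", (prev_id, bad_id)
)
conn.execute("DELETE FROM states WHERE state_id = ?", (bad_id,))
conn.commit()

print(conn.execute("SELECT * FROM states ORDER BY state_id").fetchall())
# → [(1, '12.5', None), (3, '12.7', 1)]
```

If the spike has already been rolled into long-term statistics, the corresponding rows in the `statistics` / `statistics_short_term` tables would presumably need fixing too.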
The other option would be to correct the actual value by dividing it by 1000; do I have to change anything in other tables as well?
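For the second option, a sketch of the in-place correction, again against a minimal stand-in for the assumed `states` table rather than the real recorder schema (state values are stored as text, so the sketch casts to REAL and back):

```python
import sqlite3

# Demo table; on the real system this would be home-assistant_v2.db.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (state_id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany(
    "INSERT INTO states VALUES (?, ?)",
    [(1, "12.5"), (2, "12500.0"), (3, "12.7")],
)

bad_id = 2  # the row that is off by a factor of 1000

# Divide the stored value by 1000 in place, keeping it stored as text.
conn.execute(
    "UPDATE states SET state = CAST(CAST(state AS REAL) / 1000 AS TEXT) "
    "WHERE state_id = ?",
    (bad_id,),
)
conn.commit()

print(conn.execute("SELECT state FROM states WHERE state_id = ?", (bad_id,)).fetchone()[0])
# → 12.5
```

This avoids touching `old_state_id` at all, which is why it may be the safer of the two options; the statistics tables would still hold the wrong aggregate if the spike was already compiled into them.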
Next: preventing. I have no idea how or why this happens, but there is always a state with “last_reported_ts” NULL and one with “old_state_id” NULL quite near that bogus value, see excerpt here:
I would now guess that this could be somehow connected to HA restarts…?
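Until the root cause is found, one way to keep such spikes out of the utility meter could be Home Assistant's `filter` integration with an `outlier` filter in front of the source sensor; the entity name, `window_size`, and `radius` below are placeholder values to be tuned to the actual readings, not a tested config:

```yaml
sensor:
  - platform: filter
    name: "Heizstrom gefiltert"
    entity_id: sensor.heizstrom_positive_active_energy_total
    filters:
      - filter: outlier
        window_size: 5
        radius: 100.0
```

The utility meter's `source:` would then point at the filtered sensor instead of the raw EDL21 entity.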
Any help is appreciated, thanks,
Markus