Why are there no sanity checks for energy meters? From time to time my Z-Wave/Zigbee energy meters get a bad kWh reading, so the value jumps from 125 kWh to 1,500,000 and then back to "normal".
I agree there is a built-in filter, but that means creating a new entity, and somehow I feel I shouldn't need more entities than necessary…
So there should be a way to filter the original entity itself, both for spikes, as you say, and for when an incremental entity is suddenly lower than its previous value…
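For anyone landing here, the built-in filter mentioned above is the `filter` sensor platform. A minimal sketch of an outlier filter around a raw meter (the entity name and the radius value are hypothetical, tune them to your meter) looks like this, though note it does exactly what the post objects to: it creates a second entity:

```yaml
sensor:
  - platform: filter
    name: "Energy meter (filtered)"
    entity_id: sensor.energy_meter   # hypothetical raw entity
    filters:
      - filter: outlier
        window_size: 5    # compare against the last 5 readings
        radius: 50.0      # reject values more than 50 kWh from the window median
```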
I get random negative readings, which means my energy dashboard is all wonky. It would be nice if this were taken into account: if I don't have solar, or if it's a smart plug, I should never get negative readings.
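Until something like that exists, one workaround is a template sensor that clamps negative readings to zero. A sketch, assuming a hypothetical `sensor.plug_energy`:

```yaml
template:
  - sensor:
      - name: "Plug energy (non-negative)"
        unit_of_measurement: "kWh"
        device_class: energy
        state_class: total_increasing
        # max of the raw reading and 0, so negatives never reach the dashboard
        state: >
          {{ [states('sensor.plug_energy') | float(0), 0] | max }}
```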
After moving to our current house, I hooked everything up again to run Home Assistant and other things at the new place. Shortly after I started getting energy readings, I noticed that our energy and gas usage went through the roof in the exact week we moved in… according to Home Assistant, that is.
We are aware of it and taking action to reduce our energy usage, but we supposedly used about 18,000 kWh and 2,943 m³ in one week? Even for a workshop that would be a lot, and we don't own one.
I went searching in the Home Assistant database, but no luck so far.
It would be nice though if this kind of filtering happened automatically. I mean, we're doing statistics here: assuming an a priori (say, Gaussian) distribution, what is the probability of a sample being larger than the mean plus two standard deviations? About 2.3%, which is actually quite large in this particular case, but you get the idea.
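To make the two-sigma idea concrete, here is a minimal Python sketch (not how Home Assistant works internally; the history values are made up) that flags a new reading as a spike when it sits more than two standard deviations above the recent mean:

```python
import statistics

def is_spike(readings, candidate, z_threshold=2.0):
    """Flag `candidate` as a spike if it lies more than z_threshold
    standard deviations above the mean of the recent readings."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        # All recent readings identical: anything above them is suspect.
        return candidate > mean
    return (candidate - mean) / stdev > z_threshold

history = [120.0, 121.5, 123.0, 124.2, 125.0]
print(is_spike(history, 1_500_000))  # True: obvious spike
print(is_spike(history, 126.0))      # False: plausible next reading
```

In practice you would also reject readings that are *lower* than the previous value for a total-increasing meter, which is the other failure mode mentioned above.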
Finally found the time to look further into this. It took a while to find the precise entry, even after I had determined the date should be around July 12/13th. But I found the entries that caused the "problem", replaced the values with statistically correct (well, more or less) ones, and that fixed the spikes. Thank you for your great tip!
I believe this is caused by an integration not sending the correct values. I had a similar problem with mine, due to the MQTT sensor defaulting to 0 when there was no data rather than to unavailable. This meant that at startup it would default to 0, then some time later get a reading of, say, 1,500,000, which would show as a spike, and then continue with the normal increasing values…
Setting the value to unknown until there was actual data completely fixed this issue.
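A Home-Assistant-side variant of the same fix (topic names are hypothetical) is to declare an availability topic, so the sensor reads as unavailable, rather than 0, until the device actually publishes data:

```yaml
mqtt:
  sensor:
    - name: "Energy meter"
      state_topic: "home/meter/energy"         # hypothetical topic
      unit_of_measurement: "kWh"
      device_class: energy
      state_class: total_increasing
      # Sensor stays "unavailable" until the device publishes "online",
      # so a startup default of 0 never reaches the statistics.
      availability_topic: "home/meter/status"
      payload_available: "online"
      payload_not_available: "offline"
```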