Thank you for explaining how to set up custom components like that.
This part has frustrated me more than once, because I didn't seem to have the required components exposed. Heheh… now I know.
I have copied the files, and HA is now using them. Under Settings I see the DSMR options.
I do NOT see as much of an increase in CPU load as before, so it does look like you improved this a lot for me.
After the reboot the load has not yet fully stabilized, but this looks very promising. I will update this later today.
Just a sanity question on my stats above.
The IO wait spikes: are these completely expected (CPU = Pentium Gold 5400), or do they depend on HDD performance?
I’m not an IO wait expert, but I think it’s HDD related. What did change in this integration is that force_update is now True: where previously the recorder only wrote changes to the database, now every datapoint is written (so roughly every 30 seconds when that is set as the minimum time between entity updates).
Some questions about the DSMR improvement you made, out of curiosity.
If I set the minimum time to 60 seconds, and there is a new measurement every 10 seconds, will that result in an update to the database every 10 seconds?
Or does it update the database every 60 seconds, with 5 or 6 new datapoints?
What is the reason, from your perspective, for wanting force_update: true with a fixed time?
If force_update = false, what will the frequency of writing data to the database be? Are there other negatives?
Or are there other things that depend on and hook into the DSMR database values, e.g. templates?
Regardless, many thanks for this fix already. Much appreciated!!
Databases (like the recorder integration and the influxdb integration) listen to state_changed events. When they receive an event for an entity, they write a datapoint. One remark there: “write” does not necessarily mean an actual write; the recorder can be configured with commit_interval, and influxdb also buffers data so as not to trigger writes too often.
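For illustration, the recorder’s commit_interval option controls how often buffered events are committed to the database; a minimal sketch (the value here is an arbitrary example, not a recommendation):

```yaml
# configuration.yaml
recorder:
  commit_interval: 30   # buffer state events and commit to the database every 30 s
```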
If force_update = False, there is no telling when the database (recorder or influxdb) gets updated. If the value does not change, no state_changed event is raised, which essentially means the recorder and influxdb will not see that data was received, because they only listen to state_changed events. This is why force_update is typically set to True for sensor entities: without it, databases can miss points for hours even though data was received.
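To make that concrete, here is a minimal Python sketch (not actual Home Assistant code; `emit_events` is a hypothetical helper) of which readings would produce a state_changed event:

```python
def emit_events(readings, force_update):
    """Return the readings that would raise a state_changed event.

    With force_update=False, a reading identical to the previous state
    raises no event, so database listeners never see it.
    """
    events = []
    last = object()  # sentinel: no previous state yet
    for value in readings:
        if force_update or value != last:
            events.append(value)
        last = value
    return events

readings = [230, 230, 230, 231, 231]
print(emit_events(readings, force_update=False))  # [230, 231]
print(emit_events(readings, force_update=True))   # [230, 230, 230, 231, 231]
```

So with force_update=False, three received datapoints in this example are invisible to the recorder and influxdb.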
With this new feature, the minimum time between updates is configurable. Data received in between is flushed, so the intermediate data will not be back-written.
The way it is in 0.117, every update from the meter generates a state event for all 30 entities. Some meters report data every 5 seconds, so that means 30 events every 5 seconds. Apparently, for some hardware like the RPi3, this is too much to handle. That’s why this minimum time between updates was added. When your hardware allows it, you can set the value to 0 to basically disable it and get every update in your database.
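So for the 60-second example above: the first reading in each window produces an event and the other five are dropped, not batched. A rough Python sketch of that behaviour (hypothetical; the actual DSMR integration code differs):

```python
import time


class MinTimeThrottle:
    """Drop updates that arrive within `min_interval` seconds of the last
    forwarded update. Dropped values are discarded for good; they are
    never back-written to the database later."""

    def __init__(self, min_interval, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock  # injectable for testing
        self._last_emit = None

    def accept(self):
        now = self.clock()
        if self._last_emit is None or now - self._last_emit >= self.min_interval:
            self._last_emit = now
            return True   # forward this update (raises a state event)
        return False      # drop it


# With min_interval=60 and a meter reading every 10 s, roughly one in six
# readings reaches the database; min_interval=0 forwards every reading.
```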
Thank you very much for the detailed explanation! Much appreciated.
I now understand why my system was having trouble handling it, and why this started with 0.117.
One thing of extra information for you on that: I don’t think the issue here is limited to hardware like RPis. I am running a 4-core Intel with 32 GB RAM and 2x SSDs in a ZFS mirror.
I can imagine, yes. I think debug level will log all state events. I have to say, with or without this change, if I set the global level to debug, my logging goes wild anyway. If I really need to debug, I try to set a specific component or core at most.
Yes, it’s merged. You can check the release notes all the way at the bottom under “All changes”. If you search for “dsmr” you can find the pull request.
Hi. I’m migrating to a new server (from Ubuntu to Debian with the supervisor supported). On the new server I found 100% CPU usage (4-core i7), and memory (32 GB) fills up in under 30 minutes until everything freezes. Removing the DSMR component from the integrations GUI panel resolves it.
It’s a brand new install where I restored the HASSIO config and my add-on containers.
NOTE: the cable is not attached to the new system, and it works without the memory leak on the old system. However, I had the same issue a week ago. Restoring my broken network connections (DNS) most likely resolved that, but I can’t get it to work here.
I have InfluxDB configured too, but on the system where it works, CPU averages < 10%.
I did some debugging too: when running docker stats, it is clearly the HA container, not the supervisor, in my case. I installed py-spy and that gave me an SVG trace that I don’t understand, but perhaps someone here has a better understanding of it.
Last update before I reset my system and move on: when I remove the “Netbeheer” integration from the GUI, but not from the sensors part of the config, I get the py-spy output below. There are noticeable differences in the lines about “serial”. Is it possible the DSMR part is waiting for the serial port to become available and hangs there? There are no DSMR processes either.
Just a note to say that there seems to be a memory increase again with the release of 2021.2.1. It also seems to be related to the Brother Printer integration, like it was (for me) last time. Removing this integration for now stops the increase.
I am seeing this issue in core-2021.3.3 (as well as core-2021.x.x) with the ONVIF integration. With two ONVIF cameras (Intel NUC, supervised install), core memory goes from 20% to 100% in a few hours and causes core to crash and restart. Removing the ONVIF cameras fixes this. Adding only one ONVIF camera causes a slower increase in memory usage, but it eventually reaches 100% (including swap) and crashes core.