I updated from 2026.2.3 to 2026.3.2 today (Docker) and noticed the frontend was struggling every now and again, and there were a few moments where all MQTT devices went to unknown and then “restarted” a few moments later.
Checked Node Exporter data and it looks like a memory leak.
You can see that 2026.3.2 was installed at around 11am and then I reverted back to 2026.2.3 just after 4pm, at which point the memory leak disappeared.
The HACS (and all other) integrations stayed the same (no updates) and nothing else changed between the update and the revert; it was a straight update of the docker compose file. I didn’t find any obvious errors in the logs.
Has anyone else seen anything like this?
Any tips on diagnosing the issue? I did do a memory profile a few months back for a slow-down (not a memory leak) but couldn’t use it to work out which integration, or what else, was causing the issue.
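For reference, that profile can be taken with the built-in Profiler integration; the service call looks roughly like this (a sketch only - the duration is an assumption, and the integration has to be added under Settings → Devices & Services first):

```yaml
# Sketch: start a timed profiling run with the built-in Profiler
# integration; the output files are written to the config directory.
service: profiler.start
data:
  seconds: 60  # assumed duration
```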
Restart in safe mode (under advanced options in the restart box).
If the leak stops then it was one of your 3rd party integrations or cards. If it does not stop then it is likely an endless loop in one of your automations or scripts.
I’ve had add-ons (I’m looking at you, VSCode) do this to me. If tom_l’s suggestion doesn’t give you anything, you can go to the Home Assistant Supervisor integration, go through all the installed add-ons, and enable the memory and/or CPU usage sensors to track them.
I will try updating to 2026.3.2 again and doing a safe mode restart. I am not seeing the issue on 2026.2.3 with no other changes, so I doubt it is an automation - I did go through and check these and nothing has a “last triggered” that is constantly updating (I will check for a loop though).
I am using Docker rather than Home Assistant OS, so the add-on memory monitoring won’t help. I do know it is the `python3 -m homeassistant --config /config` process that was using the most memory.
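Since it’s plain Docker, one stopgap (a sketch - the service name, image tag and limit are assumptions, adjust to your own setup) is to cap the container’s memory in the compose file, so a runaway leak gets the container restarted rather than starving the host:

```yaml
# Sketch of a docker-compose fragment; service name, image and the
# 2g limit are assumed values, not taken from the poster's setup.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    mem_limit: 2g
    restart: unless-stopped
```

`docker stats` will then show the container’s usage against that limit, which also gives you per-container memory monitoring without the Supervisor.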
I updated to 2026.3.2 again at 15:50 and still had a leak. Then I restarted in Safe Mode at 17:45; the maximum memory footprint is much lower, but there is still clearly a leak somewhere:
I have checked for any running automations and have also checked the traces of all the scripts; none are still running.
There is obviously a leak, but I don’t know how to go about finding it, as I don’t think there is a way to see which device or integration is requesting or using the most memory.
I guess the time-consuming step is to disable each individual automation, restart, and then monitor the memory again. Unfortunately I don’t really have time for this at the moment.
Did you ever figure this out? I am seeing the same issue, and python3 also seems to be the culprit. Claude tells me it’s my HTML card rendering sensors frequently. Sure enough, if I never look at the HTML card my instance is stable, but if I look at that card for more than 20 to 30 seconds, whether in the browser or via the companion app, I get runaway memory growth until the whole thing crashes - even if I then kill the window after 20 seconds, the memory keeps rising. Looking for alternatives to the HTML card now…
Hi, unfortunately not; I am still seeing the issue on 2026.3.4 and haven’t found a definitive cause. Running `profiler.start_log_objects` fixes the issue, as per the memory usage chart above - I just have a script that runs it at start-up with a scan interval of 10 minutes, and this seems to make memory garbage collection behave as it should.
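For anyone wanting to try the workaround, it’s roughly this (an illustrative automation rather than my exact config - the alias is made up, and the Profiler integration needs to be set up already):

```yaml
# Sketch: call profiler.start_log_objects on every Home Assistant start,
# which then logs object-count changes at the given interval.
automation:
  - alias: "Start profiler object logging"  # made-up name
    trigger:
      - platform: homeassistant
        event: start
    action:
      - service: profiler.start_log_objects
        data:
          scan_interval: 600  # seconds, i.e. every 10 minutes
```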
The main objects showing up as incrementing are ‘cell’, ‘function’ and ‘HASSJob’. However, these aren’t increasing by much, and with the profiler fixing the issue I don’t know if they are the ones that were causing the large memory leak.
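To dig further into one of those types, the Profiler integration also has a `profiler.dump_log_objects` service that logs every live instance of a given type, e.g. (the type name is case-sensitive):

```yaml
# Sketch: dump all live HASSJob objects to the Home Assistant log
# so you can see what is holding on to them.
service: profiler.dump_log_objects
data:
  type: HASSJob
```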
edit: I don’t think I even have an HTML card in use, and I also see the issue when I don’t access the frontend, so I think these are slightly different, although possibly related, memory leak issues.