I previously installed HA on an existing Linux Mint server to give it a try. It worked just fine with the exception of an “Unhealthy system - Not privileged” error I would have to clear occasionally.
In January I purchased a new server specifically for running HA, installed Debian 12, did a clean install of HA Supervised, and restored my HA backup. It works perfectly but I’m again seeing an intermittent “Unhealthy system - Not privileged” error that I can’t figure out how to permanently clear. The error is from the old server and dated 7/23.
This new server has only been installed a few weeks and docker reports that HA supervisor IS privileged. I’d prefer not to ignore the error. If a real problem occurs in the future I’d like to know about it.
Any idea how to permanently clear an old error of this type?
I am not sure if the Supervisor container gets restored when reading in a backup, but whatever. This can’t hurt.
Reinstall Supervisor and OS-agent and then restart your host (not just the HA core container).
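On a Debian 12 Supervised install, that reinstall can be sketched roughly like this. The pinned OS-Agent version and architecture are assumptions on my part; check the GitHub releases pages for the current ones before running anything:

```shell
# Assumed version/arch -- verify against the os-agent releases page
OS_AGENT_VERSION=1.6.0
ARCH=x86_64
URL="https://github.com/home-assistant/os-agent/releases/download/${OS_AGENT_VERSION}/os-agent_${OS_AGENT_VERSION}_linux_${ARCH}.deb"

# Reinstall OS-Agent
wget "$URL"
sudo dpkg -i "os-agent_${OS_AGENT_VERSION}_linux_${ARCH}.deb"

# Reinstall the Supervised installer package
wget https://github.com/home-assistant/supervised-installer/releases/latest/download/homeassistant-supervised.deb
sudo dpkg -i homeassistant-supervised.deb

# Restart the whole host, not just the containers
sudo reboot
```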
I wasn’t sure if that would cause a problem with my existing installation when I previously searched for a solution, but you were right and HA started normally.
It has taken a couple of weeks for the error to reoccur after restarting the server so it’ll take a while to be sure it’s gone. Keeping my fingers crossed. Thanks for the response.
No joy. The “Unhealthy system - Not privileged” error message returned, still dated 7/23 - 6 months before my current installation’s OS (Debian 12) and Home Assistant were installed.
Supervisor’s container is privileged:
docker inspect --format='{{.HostConfig.Privileged}}' d18b68b61537
true
Anyone have any idea how to clear an old error that isn’t valid?
Normally Settings → System → Repairs will tell you what to do.
You can also try to run “ha core check” from the CLI to see if it just needs another check.
I’ve been working on this. Restarting the OS clears the error every time, but if I remember correctly the suggested resolution was to reinstall the Supervisor, which has already been done.
I ended up finding the file with the error in it and editing the file.
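For anyone in the same spot, pruning a stale issue out of a JSON state file can be sketched with jq. The file path and the issue schema below are purely hypothetical illustrations; the real location and structure depend on your install, so keep a backup either way:

```shell
# Hypothetical path and schema -- adjust for your installation
FILE=/usr/share/hassio/resolution.json

# Keep a backup before touching anything
sudo cp "$FILE" "$FILE.bak"

# Drop every issue whose type mentions "privileged", keep the rest
sudo jq '.issues |= map(select(.type | test("privileged") | not))' \
    "$FILE.bak" | sudo tee "$FILE" > /dev/null
```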
Either the error will be resolved permanently, it’ll come back in a week or two, or possibly I’ll have to restore a recent backup. Even the worst case is easily manageable.
In this case the error occurs much more often than just when Supervisor updates are pushed. I believe today’s update was only a core update and the error popped back up again.