"Container hassio_supervisor killed" every few minutes

Hi, I am having trouble figuring out why my Raspberry Pi 4B 4GB has such a high load.
Other stats seem okay, but the load value is rather high (60% is high, right?).
And every few days the load peaks over 1 and the system stops writing to MariaDB, although I can still control the lights.

In the forum, someone mentioned that looking into Portainer may give you some insight into what is wrong.
I am not familiar with Docker or Portainer or anything, but this caught my attention:

It says “Container hassio_supervisor killed” event happened every few minutes.
Could this be the problem?
Is there a way to fix this?
I am running Hass.io with Supervisor 2020.12.7 and Core 5.9, booting from SSD without an SD card on a Raspberry Pi 4B 4GB.


Having Portainer installed gives you the opportunity to go to the Containers tab, find the hassio_supervisor container and check its logs. If it is constantly being killed, then either

  • this is intended behaviour (I am not familiar with HASS OS),
  • this is a fault: the container’s process encounters an error and dies (which kills the container), or
  • the container has a defined health check and that check keeps failing.

Anyway, the logs should provide some info.

PS: Without Portainer, it’s docker logs --tail 50 hassio_supervisor for the last 50 lines of the container’s output.
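A couple of related commands that may help, sketched below. This assumes you have shell access to the host with the docker CLI (e.g. over SSH); the container name hassio_supervisor is the one from your screenshot.

```shell
#!/bin/sh
# A few ways to read the supervisor container's logs and status
# (assumes the docker CLI is available on the host).
if command -v docker >/dev/null 2>&1; then
  DOCKER_OK=1
  docker logs --tail 50 hassio_supervisor    # last 50 lines of output
  docker ps -a --filter name=hassio_supervisor \
    --format 'table {{.Names}}\t{{.Status}}\t{{.RunningFor}}'   # status and uptime
  # To follow the log live instead:  docker logs -f --since 10m hassio_supervisor
else
  DOCKER_OK=0
  echo "docker CLI not found on this machine"
fi
```

The --format column with {{.Status}} is handy here: if the container really is dying every few minutes, the "Up X minutes" value will keep resetting.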

Thank you for your comment.
Here are the logs:

21-01-03 09:04:55 INFO (SyncWorker_6) [supervisor.docker.addon] Starting Docker add-on hassioaddons/portainer-aarch64 with version 1.3.0
21-01-03 09:04:57 INFO (MainThread) [supervisor.api.security] /host/info access from a0d7b954_portainer
21-01-03 09:21:26 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state CoreState.RUNNING
21-01-03 09:21:26 INFO (MainThread) [supervisor.resolution.check] System checks complete
21-01-03 09:22:35 INFO (MainThread) [supervisor.snapshots] Found 3 snapshot files
21-01-03 09:22:45 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json
21-01-03 10:04:12 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
21-01-03 10:21:26 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state CoreState.RUNNING
21-01-03 10:21:26 INFO (MainThread) [supervisor.resolution.check] System checks complete
21-01-03 10:31:22 INFO (MainThread) [supervisor.auth] Auth request from 'core_mosquitto' for 'mqtt'
21-01-03 10:31:22 INFO (MainThread) [supervisor.auth] Successful login for 'mqtt'
21-01-03 10:36:37 INFO (MainThread) [supervisor.auth] Auth request from 'core_mosquitto' for 'mqtt'
21-01-03 10:36:37 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
21-01-03 10:36:37 INFO (MainThread) [supervisor.auth] Successful login for 'mqtt'
21-01-03 10:44:57 INFO (MainThread) [supervisor.host.info] Updating local host information
21-01-03 10:44:57 INFO (MainThread) [supervisor.host.services] Updating service information
21-01-03 10:44:57 INFO (MainThread) [supervisor.host.network] Updating local network information
21-01-03 10:45:00 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information
21-01-03 10:45:00 INFO (MainThread) [supervisor.host] Host information reload completed
21-01-03 11:21:26 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state CoreState.RUNNING
21-01-03 11:21:26 INFO (MainThread) [supervisor.resolution.check] System checks complete
21-01-03 11:22:45 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json

Nothing looks like an “error” to me. (I am very much a beginner.)
It is now around 20:00 local time, and the logs are from a few hours ago. Or maybe they are in UTC; then they are from a few minutes ago.

Do you know another way to find out what is causing high load?

I don’t see any errors either, but docker logs only contains what the container writes to STDOUT/STDERR. Maybe the supervisor logs more information to a file (homeassistant.log)? I don’t know, as I don’t use Home Assistant OS.

In Portainer’s “Containers” view, the “State” column indicates the presence of a health check. The state is “running” if the container is up and has no health check, or “healthy” if it does.

If a health check is defined, you could run docker inspect -f '{{.Config.Healthcheck}}' hassio_supervisor to see what the check does. But again, it could also be intended behaviour.
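Along the same lines, docker inspect can also tell you how the container last stopped. This is a sketch assuming docker CLI access; OOMKilled, ExitCode and RestartCount are standard fields in Docker’s inspect output, and OOMKilled=true would mean the kernel’s out-of-memory killer stopped it, which would fit a heavily loaded 4 GB Pi.

```shell
#!/bin/sh
# Check whether the last stop was an OOM kill, and what exit code the
# container's process returned (137 usually means it received SIGKILL).
FMT='OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}} Restarts={{.RestartCount}}'
if command -v docker >/dev/null 2>&1; then
  docker inspect -f "$FMT" hassio_supervisor ||
    echo "container hassio_supervisor not found"
else
  echo "docker CLI not found on this machine"
fi
```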

Besides Docker and Home Assistant, on Linux the “top” command is the first step in identifying resource-consuming processes.

Also (just coming in hot from another post): See https://developers.home-assistant.io/docs/operating-system/debugging/


I think Home Assistant OS is different. There are 0 containers, and instead there are images.
If I run docker inspect -f '{{.Config.Healthcheck}}' hassio_supervisor it says <nil>.


I have run the “top” command, but it’s hard to understand what the problem is.
I will try to search for how to interpret the top screen.
Thank you very much for your time!

“top” shows running processes and their resource usage. It also prints the load averages, and a load average of 0.67 is far below “overloaded”.
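For reference, a rough way to read the numbers yourself, using nothing beyond plain Linux:

```shell
#!/bin/sh
# Load average vs. CPU core count: as a rule of thumb, the system is only
# really "overloaded" once the load average exceeds the number of cores.
load=$(cut -d' ' -f1 /proc/loadavg)   # 1-minute load average
cores=$(nproc)                        # a Pi 4B reports 4
echo "1-min load: $load over $cores cores"
```

So on a four-core Pi 4B, a load of 0.67 means the system is using well under one core’s worth of capacity on average.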

I am sorry, but I am out of options to help, as I do not use Home Assistant OS.

Please don’t be sorry, it’s just me being a novice and I can’t ask the proper questions. Have a great day!

Hi, did you find any solution to this problem?

I actually uninstalled Portainer and forgot about this problem.
I re-installed Portainer and checked today, and the problem seems to have gone away by itself.
However, the system load is still around 0.6.
I am not sure about the details of this situation. Sorry.

I too am seeing this, as of Home Assistant 2021.3.4. Supervisor container gets killed every few minutes. No noticeable effect on the system though; I just noticed it in the Portainer events tab. I only see the below error in the logs, but I would not expect a bad request (400) response from the API to result in a total failure. Possibly a bug in the supervisor?
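If you want to see those kill events from the CLI instead of Portainer’s events tab, Docker’s event stream can be filtered by container and event type. A sketch, again assuming docker CLI access on the host:

```shell
#!/bin/sh
# Show kill/die events for the supervisor container from the last hour.
# A --until bound makes the command return instead of streaming forever.
if command -v docker >/dev/null 2>&1; then
  DOCKER_OK=1
  docker events --since 1h --until "$(date +%s)" \
    --filter container=hassio_supervisor \
    --filter event=kill --filter event=die
else
  DOCKER_OK=0
  echo "docker CLI not found on this machine"
fi
```

Each event line includes the signal the container received, which would show whether something is deliberately sending it SIGTERM/SIGKILL every few minutes.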

21-03-21 15:09:06 ERROR (MainThread) [supervisor.api.ingress] Ingress error: 400, message='Invalid response status', url=URL('http://172.30.33.2:8099/socket.io/?EIO=4&transport=websocket&sid=zD4TshK2gBghJY3lAAAE')