Hello everyone. I'm having some issues upgrading my hassio install (I used the supervised installer on a NUC running Debian). My NUC keeps freezing, and after a reboot I see all my containers stopped (except my Portainer container, of course).
I think I might have a corrupt supervisor container; each time I start the container I get the following response:
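To get a picture of what is actually down after a freeze, I list the containers and their states with the Docker Python SDK (a minimal sketch, roughly the same as running `docker ps -a`; it assumes the `docker` package is installed on the host):

```python
# Sketch: list all containers and their current state via the Docker SDK.
# Assumes the "docker" Python package (docker-py) is installed on the host.
import docker

client = docker.from_env()
for container in client.containers.list(all=True):
    # container.status is e.g. "running", "exited", "created"
    print(f"{container.name:30} {container.status}")
```
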
[services.d] starting services
[services.d] done.
20-05-08 07:51:53 INFO (MainThread) [supervisor.bootstrap] Use the old homeassistant repository for machine extraction
20-05-08 07:51:53 INFO (MainThread) [__main__] Initialize Supervisor setup
20-05-08 07:51:53 INFO (MainThread) [supervisor.bootstrap] Setup coresys for machine: qemux86-64
20-05-08 07:51:53 INFO (SyncWorker_0) [supervisor.docker.supervisor] Attach to Supervisor homeassistant/amd64-hassio-supervisor with version 220
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/docker/api/client.py", line 261, in _raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.7/site-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.40/images/sha256:24aa6505d0451445e40565b7311859e28a5dccaddc1ecd12bc04fdf865f89851/json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/src/supervisor/supervisor/__main__.py", line 41, in <module>
loop.run_until_complete(coresys.core.connect())
File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
File "/usr/src/supervisor/supervisor/core.py", line 31, in connect
await self.sys_supervisor.load()
File "/usr/src/supervisor/supervisor/supervisor.py", line 42, in load
await self.instance.cleanup()
File "/usr/src/supervisor/supervisor/utils/__init__.py", line 31, in wrap_api
return await method(api, *args, **kwargs)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/supervisor/supervisor/docker/interface.py", line 321, in _cleanup
for image in self.sys_docker.images.list(name=self.image):
File "/usr/local/lib/python3.7/site-packages/docker/models/images.py", line 364, in list
return [self.get(r["Id"]) for r in resp]
File "/usr/local/lib/python3.7/site-packages/docker/models/images.py", line 364, in <listcomp>
return [self.get(r["Id"]) for r in resp]
File "/usr/local/lib/python3.7/site-packages/docker/models/images.py", line 316, in get
return self.prepare_model(self.client.api.inspect_image(name))
File "/usr/local/lib/python3.7/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/docker/api/image.py", line 246, in inspect_image
self._get(self._url("/images/{0}/json", image)), True
File "/usr/local/lib/python3.7/site-packages/docker/api/client.py", line 267, in _result
self._raise_for_status(response)
File "/usr/local/lib/python3.7/site-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/local/lib/python3.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("readlink /var/lib/docker/overlay2: invalid argument")
20-05-08 07:51:53 ERROR (MainThread) [asyncio] Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f46ed8aa050>
20-05-08 07:51:53 ERROR (MainThread) [asyncio] Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f46ed65d4d0>
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
Note that I provisioned 30 GB of disk space for my hassio Debian VM, and only about 19% of it is in use.
In this post I found someone saying:
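Looking at the traceback, the supervisor dies while listing and inspecting its own images, so this looks like a corrupted image layer rather than a lack of disk space. Here is a rough sketch (again assuming the `docker` Python package on the host) of reproducing that call outside the supervisor to see which image triggers the 500 readlink error:

```python
# Sketch: reproduce the failing images.list() / inspect call from the traceback
# to find out which image triggers the "readlink .../overlay2" 500 error.
import docker
from docker.errors import APIError

client = docker.from_env()
# The low-level API returns plain dicts, so the listing itself does not inspect anything.
for summary in client.api.images(name="homeassistant/amd64-hassio-supervisor"):
    image_id = summary["Id"]
    try:
        client.images.get(image_id)  # this is the call that returns 500 in the log
        print(f"OK      {image_id}")
    except APIError as err:
        print(f"BROKEN  {image_id}: {err.explanation}")
```
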
In my case… I used Portainer just now. Clicked on the hassio_supervisor container, chose Duplicate/edit and recreated it with the latest image, restarted all containers and I think I'm up and going again. I think I lost HACS though.
Is this something worth trying? If possible I want to fix this issue without losing any data, so before touching anything I would back up the Home Assistant data directory first (rough sketch below). Any help is appreciated!
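For reference, this is roughly how I would back up the data directory before recreating anything (a sketch; it assumes the default supervised-installer data path /usr/share/hassio, adjust if your data lives elsewhere):

```python
# Sketch: tar up the Home Assistant data directory before touching any containers.
# /usr/share/hassio is the default data path for the supervised installer;
# change DATA_DIR if your install keeps its data elsewhere.
import tarfile
import time

DATA_DIR = "/usr/share/hassio"
backup_name = f"hassio-backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"

with tarfile.open(backup_name, "w:gz") as archive:
    archive.add(DATA_DIR, arcname="hassio")

print(f"Wrote {backup_name}")
```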