Can't get into the frontend after moving Docker container to a new host

I upgraded my NAS and, in the process, moved all my Docker containers over from the old one to the new one. Mosquitto and Zigbee2MQTT came over without any issues whatsoever, but I can't for the life of me get Home Assistant itself running anymore.

Symptoms:

  • The Docker container is running and keeps running. Doesn’t crash.
  • When I open the UI in a browser that previously reached it through the old server, or in the Android app, it either keeps trying to load for a few minutes and then shows “Unable to connect to Home Assistant.” with a retry button, or it shows that screen right away.
  • When I open the UI in a browser that has not used it before, I get a login screen. Logging in with my actual credentials produces the same error as above, while deliberately bad credentials send me back to the login screen to try again, so something seems to be running.
  • None of my automations work, so not much else seems to be running either.

This is my log file:

2020-06-04 01:44:45 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-06-04 01:44:45 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for zigbee2mqtt_networkmap which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-06-04 01:44:47 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for discord_game which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-06-04 01:44:54 ERROR (SyncWorker_6) [homeassistant.components.octoprint] Endpoint: printer Failed to update OctoPrint status. Error: 409 Client Error: CONFLICT for url: http://octopi-cr-x:80/api/printer
2020-06-04 01:44:56 ERROR (MainThread) [homeassistant.core] Error doing job: Unclosed client session
2020-06-04 01:44:56 ERROR (MainThread) [homeassistant.core] Error doing job: Unclosed connector
2020-06-04 02:03:40 WARNING (MainThread) [homeassistant.components.http.ban] Login attempt or request with invalid authentication from ***.***.***.***

Funnily enough, that last line only gets added when I use the correct credentials; actual bad login attempts don’t get logged.

I should probably add that I’m using a reverse proxy for SSL termination; this is my Home Assistant http config for it:

http:
  server_host: 127.0.0.1
  use_x_forwarded_for: true
  trusted_proxies: 127.0.0.1
#  base_url: https://sub.domain.tld

I tried it both with and without the base_url option but the result is the same.
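For context, here is my understanding of what `use_x_forwarded_for` and `trusted_proxies` do, as a simplified Python sketch (this is not Home Assistant’s actual code, just the general idea): the `X-Forwarded-For` header is only honoured when the request arrives from a trusted proxy.

```python
import ipaddress

# Simplified illustration of a trusted-proxies check (not HA's real
# implementation): the peer address must match a trusted proxy before
# the X-Forwarded-For header is believed.
def real_client_ip(peer_addr, x_forwarded_for, trusted_proxies):
    networks = [ipaddress.ip_network(p) for p in trusted_proxies]
    peer = ipaddress.ip_address(peer_addr)
    if not any(peer in net for net in networks):
        # Request did not come from a trusted proxy; ignore the header.
        return peer_addr
    # Use the left-most address the proxy chain reported.
    return x_forwarded_for.split(",")[0].strip()

# With 127.0.0.1 trusted, the forwarded address is used:
print(real_client_ip("127.0.0.1", "203.0.113.7", ["127.0.0.1"]))    # 203.0.113.7
# From any other peer, the header is ignored:
print(real_client_ip("192.168.1.10", "203.0.113.7", ["127.0.0.1"]))  # 192.168.1.10
```

So with `trusted_proxies: 127.0.0.1`, only requests relayed by nginx on the same machine should get their real client IP from the header.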

In case it helps, this shows my Docker setup:

CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                 PORTS                                                                                                        NAMES
1805ea6bc601        homeassistant/home-assistant:latest   "/init"                  About an hour ago   Up 40 minutes                                                                                                                       HomeAssistant
8daf0012a85b        koenkk/zigbee2mqtt                    "./run.sh"               3 hours ago         Up 3 hours                                                                                                                          Zigbee2MQTT
264e74fdd0c3        eclipse-mosquitto:latest              "/docker-entrypoint.…"   3 hours ago         Up 3 hours             0.0.0.0:1883->1883/tcp, 0.0.0.0:9001->9001/tcp                                                               Mosquitto

Home Assistant and Zigbee2MQTT both were started with --net=host, which is why the ports section is empty.
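For completeness, the run commands looked roughly like this (the volume paths here are placeholders, not my exact setup):

```shell
# With --net=host the container shares the host's network stack, so no
# -p mappings are needed and `docker ps` shows an empty PORTS column.
docker run -d --name HomeAssistant --net=host \
  -v /volume1/docker/homeassistant:/config \
  homeassistant/home-assistant:latest

# Mosquitto uses the default bridge network, hence the explicit mappings:
docker run -d --name Mosquitto \
  -p 1883:1883 -p 9001:9001 \
  eclipse-mosquitto:latest
```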

Does anyone know what I can do to get past this?

Did you migrate all of your config directory, including the hidden .storage directory?

Yes, the .storage and .cloud folders are both there. The latter is empty though as I don’t use the cloud service.

How about starting a “clean” instance of HA? Does that work?


A clean install asks me to create an account, and when I do, it loads for a while and then pops up a JavaScript error saying “Oh snap! Something went wrong.” After that I get the login screen, but if I log in with the credentials I literally just created, I get the HA logo and a spinner, which after a while produces another JS popup with the text “Something went wrong loading onboarding, try refreshing.” Refreshing just ends in the same result.

The log in that instance is empty except for this:

2020-06-04 10:55:46 WARNING (MainThread) [homeassistant.components.webhook] Received message for unregistered webhook 33772c9016d8723e846af1035ce6b52c6273a7cde933fc2347446a476f1de417 from 127.0.0.1
2020-06-04 10:55:49 WARNING (MainThread) [homeassistant.components.webhook] Received message for unregistered webhook 33772c9016d8723e846af1035ce6b52c6273a7cde933fc2347446a476f1de417 from 127.0.0.1

I should add that when I went to bed last night I noticed all my automations were working again, so my old instance of HA does seem to run eventually; I just have no way at all to get into the UI.

Well, I would say that if you have issues with a clean install, then just transferring your existing configuration is definitely not going to work either.

You need to identify what’s preventing a clean install from working properly on your new hardware.

That’s the thing, it’s not actually new hardware. To clarify: I upgraded just my drives, so I moved the old RAID array into another NAS, installed the new drives in the NAS I was already using for HA, and set up my Docker containers from there. The hardware is exactly the same apart from the drives, and the software shouldn’t matter too much considering it’s a Docker container…

Is there any other log I could look at? The log I’m seeing now isn’t telling me anything, and I can’t fix this without more information… I tried digging around from the container’s terminal, but I can’t find anything interesting in the folders that are usually worth checking.

I agree it should not matter. I too run Docker on a NAS (unRAID).

The thing is, you should at least be able to start a fresh instance without any issues.

It may be network related, but frankly I don’t know enough to say.

You can increase the log detail by enabling the debug option; see https://www.home-assistant.io/integrations/logger/
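E.g., something like this in configuration.yaml (the quieted component below is just an example):

```yaml
# Raise the default log level to debug; optionally quiet
# individual noisy components back down.
logger:
  default: debug
  logs:
    homeassistant.components.mqtt: info
```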

Thanks. The extra logging didn’t help much, sadly. But I did find out some more. I actually got into the UI after a whole collection of setting tweaks, and after finally turning off the bit of configuration that was supposed to make the reverse proxy work:

http:
  server_host: 127.0.0.1
  use_x_forwarded_for: true
  trusted_proxies: 127.0.0.1

If I comment out those lines, or at least the server_host line, I can get into Lovelace by going to port 8123 on the local IP or hostname of the NAS. My domain won’t work at that point, which sucks. This is the nginx config that Synology’s reverse proxy feature generated, and it looks fine to me:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name sub.domain.tld;

    ssl_certificate /usr/syno/etc/certificate/ReverseProxy/d19ccfee-bf82-422b-987c-99aee1889f39/fullchain.pem;
    ssl_certificate_key /usr/syno/etc/certificate/ReverseProxy/d19ccfee-bf82-422b-987c-99aee1889f39/privkey.pem;
    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

    location / {
        proxy_connect_timeout 60;
        proxy_read_timeout 60;
        proxy_send_timeout 60;
        proxy_intercept_errors off;
        proxy_http_version 1.1;

        proxy_set_header        Host                $http_host;
        proxy_set_header        X-Real-IP           $remote_addr;
        proxy_set_header        X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto   $scheme;
        proxy_pass http://localhost:8123;
    }

    error_page 403 404 500 502 503 504 @error_page;

    location @error_page {
        root /usr/syno/share/nginx;
        rewrite (.*) /error.html break;
        allow all;
    }

}

If I try to log in through the reverse proxy, I get a notification in the tab that is working (the local one), saying:

Login attempt or request with invalid authentication from {my external IP here}

…which is weird, because those are the same credentials I can log in with just fine in the tab that uses the local hostname and port 8123.

Does this make sense to anyone?

I got it working. The nginx config was missing these two headers, which are needed for the WebSocket connection the frontend uses:

        proxy_set_header        Connection         $connection_upgrade;
        proxy_set_header        Upgrade            $http_upgrade;

Working like a charm now.
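One note for anyone who lands here with a hand-maintained nginx rather than Synology’s: `$connection_upgrade` is not a built-in nginx variable. Synology’s bundled config apparently defines it already; on a plain nginx install you would also need a map in the http context, something like:

```nginx
# In the http {} context: derive the Connection header value from the
# client's Upgrade header, so plain requests close normally and
# WebSocket requests are upgraded.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```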