InfluxDB not working after full restore

Hey folks, I need some help. I just migrated from a Pi 4 that booted from an NVMe disk (in an Argon40 case) to a Home Assistant Yellow, where the data disk is external. After setting up the Yellow by restoring a full backup and moving my data over to a new NVMe disk in the Yellow, I can't get InfluxDB to connect anymore. I keep getting the same error in the Supervisor logs:

Logger: homeassistant.components.influxdb
Source: components/influxdb/__init__.py:487
Integration: InfluxDB (documentation, issues)
First occurred: November 12, 2023 at 11:44:33 PM (20 occurrences)
Last logged: 12:03:34 AM

InfluxDB database is not accessible due to '404: {"error":"database not found: \"homeassistant\""}'. Please check that the database, username and password are correct and that the specified user has the correct permissions set. Retrying in 60 seconds.
Cannot connect to InfluxDB due to 'HTTPConnectionPool(host='a0d7b954-influxdb', port=8086): Max retries exceeded with url: /write?db=homeassistant (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f87b9c7d0>: Failed to establish a new connection: [Errno 111] Connection refused'))'. Please check that the provided connection details (host, port, etc.) are correct and that your InfluxDB server is running and accessible. Retrying in 60 seconds.
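For reference, a minimal way to tell those two failures apart (a sketch assuming InfluxDB 1.x, the `a0d7b954-influxdb` hostname from the error above, and placeholder credentials):

```bash
# First failure mode: server reachable but the database is missing.
# List the databases InfluxDB knows about (credentials are placeholders):
curl -G "http://a0d7b954-influxdb:8086/query" \
  --data-urlencode "q=SHOW DATABASES" \
  -u homeassistant:REDACTED

# If "homeassistant" is not listed, an empty one can be recreated:
curl -XPOST "http://a0d7b954-influxdb:8086/query" \
  --data-urlencode "q=CREATE DATABASE homeassistant" \
  -u homeassistant:REDACTED

# Second failure mode ("Connection refused"): the server isn't up at all,
# in which case both commands above fail before InfluxDB can answer.
```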

I can still see all my previous energy data, but nothing new is being logged. I haven't found any similar posts, so I'm really not sure where to check next to resolve this :frowning:.
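One thing that can be checked from the host is whether the restored InfluxDB files actually made it onto the new disk (a sketch; the path is an assumption for a standard HAOS install, with the add-on slug taken from the error above):

```bash
# From the HAOS host shell: list the add-on's restored data files.
# In InfluxDB 1.x each database is a directory under influxdb/data/,
# so a "homeassistant" directory should exist here:
ls -lh /mnt/data/supervisor/addons/data/a0d7b954_influxdb/influxdb/data/
```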

Note: I did try disabling auth, but the error is the same. Also, accessing InfluxDB directly at the default port (8086) just gives me “404 page not found”.
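Side note on that 404: InfluxDB 1.x doesn't serve a page at /, so “404 page not found” at the root can happen even on a healthy instance; the API endpoints are what should answer. And since the add-on options in the trace below show ssl: true, https:// (plus -k for a self-signed certificate) may be needed instead of plain http:

```bash
# A healthy InfluxDB 1.x answers /ping with HTTP 204 No Content.
# The root path (/) is expected to 404 even when everything is fine.
curl -ik "https://<your-host>:8086/ping"   # ssl: true in the add-on options
curl -i  "http://<your-host>:8086/ping"    # if ssl were disabled instead
```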

EDIT: Attaching the trace-level startup log for the InfluxDB add-on. (The trace starts by dumping the add-on's info JSON, whose description field is the add-on's README, so that appears first.)

Scalable datastore for metrics, events, and real-time analytics.

## About

InfluxDB is an open source time series database optimized for high write volumes.
It's useful for recording metrics, sensor data, and events, and for performing
analytics. It exposes an HTTP API for client interaction and is often used in
combination with Grafana to visualize the data.
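As a quick illustration of that HTTP API (a minimal sketch against a local InfluxDB 1.x instance with auth disabled; the measurement and tag names are made up):

```bash
# Write one point in line protocol to the "homeassistant" database:
curl -i -XPOST "http://localhost:8086/write?db=homeassistant" \
  --data-binary "demo_sensor,location=office value=21.5"

# Read it back with InfluxQL:
curl -G "http://localhost:8086/query" \
  --data-urlencode "db=homeassistant" \
  --data-urlencode "q=SELECT * FROM demo_sensor"
```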

This add-on also comes with Chronograf and Kapacitor pre-installed, which
gives you a nice InfluxDB admin interface for managing your users, databases,
and data retention settings, and lets you peek inside the database using the
Data Explorer.

![Chronograf in the Home Assistant Frontend][screenshot]
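The same admin tasks can also be done from the `influx` 1.x CLI inside the add-on container, if you prefer a shell over Chronograf (a sketch; the password is a placeholder):

```bash
# Open an interactive InfluxQL session against the local server:
influx -host localhost -port 8086 -username homeassistant -password 'REDACTED'

# At the ">" prompt, the equivalents of Chronograf's admin pages:
#   SHOW DATABASES
#   SHOW USERS
#   SHOW RETENTION POLICIES ON homeassistant
```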

[discord-shield]: https://img.shields.io/discord/478094546522079232.svg
[discord]: https://discord.me/hassioaddons
[forum-shield]: https://img.shields.io/badge/community-forum-brightgreen.svg
[forum]: https://community.home-assistant.io/t/home-assistant-community-add-on-influxdb/54491?u=frenck
[github-sponsors-shield]: https://frenck.dev/wp-content/uploads/2019/12/github_sponsor.png
[github-sponsors]: https://github.com/sponsors/frenck
[maintenance-shield]: https://img.shields.io/maintenance/yes/2023.svg
[patreon-shield]: https://frenck.dev/wp-content/uploads/2019/12/patreon.png
[patreon]: https://www.patreon.com/frenck
[project-stage-shield]: https://img.shields.io/badge/project%20stage-production%20ready-brightgreen.svg
[release-shield]: https://img.shields.io/badge/version-v4.8.0-blue.svg
[release]: https://github.com/hassio-addons/addon-influxdb/tree/v4.8.0
[screenshot]: https://github.com/hassio-addons/addon-influxdb/raw/main/images/screenshot.png
","advanced":false,"stage":"stable","repository":"a0d7b954","version_latest":"4.8.0","protected":true,"rating":8,"boot":"auto","options":{"auth":true,"reporting":true,"ssl":true,"certfile":"fullchain.pem","keyfile":"privkey.pem","envvars":[],"log_level":"trace"},"schema":[{"name":"log_level","optional":true,"type":"select","options":["trace","debug","info","notice","warning","error","fatal"]},{"name":"auth","required":true,"type":"boolean"},{"name":"reporting","required":true,"type":"boolean"},{"name":"ssl","required":true,"type":"boolean"},{"name":"certfile","required":true,"type":"string"},{"name":"keyfile","required":true,"type":"string"},{"name":"envvars","type":"schema","optional":true,"multiple":true,"schema":[{"name":"name","required":true,"type":"string"},{"name":"value","required":true,"type":"string"}]},{"name":"leave_front_door_open","optional":true,"type":"boolean"}],"arch":["aarch64","amd64","armv7","i386"],"machine":[],"homeassistant":"0.92.0b2","url":"https://github.com/hassio-addons/addon-influxdb","detached":false,"available":true,"build":false,"network":{"80/tcp":null,"8086/tcp":8086,"8088/tcp":8088},"network_description":{"80/tcp":"Web interface (Not required for Ingress)","8086/tcp":"InfluxDB server","8088/tcp":"RPC service for backup and restore"},"host_network":false,"host_pid":false,"host_ipc":false,"host_uts":false,"host_dbus":false,"privileged":[],"full_access":false,"apparmor":"default","icon":true,"logo":true,"changelog":true,"documentation":true,"stdin":false,"hassio_api":true,"hassio_role":"default","auth_api":true,"homeassistant_api":false,"gpio":false,"usb":false,"uart":false,"kernel_modules":false,"devicetree":false,"udev":false,"docker_api":false,"video":false,"audio":false,"startup":"services","services":[],"discovery":[],"translations":{},"ingress":true,"signed":true,"state":"started","webui":null,"ingress_entry":"/api/hassio_ingress/kfTjxuphOKfBjftqgBsPi_06ylMXGtPeTu0IiL6oDaA","ingress_url":"/api/hassio_ingress/kfTjxuphOKfBjftqgBsPi_06ylMXGtPeTu0IiL6oDaA/","ingress_port":1337,"ingress_panel":false,"audio_input":null,"audio_output":null,"auto_update":true,"ip_address":"172.30.33.1","version":"4.8.0","update_available":false,"watchdog":true,"devices":[]}
 .ip_address // empty
[10:07:14] TRACE: bashio::cache.set: addons.self.ip_address 172.30.33.1
[10:07:14] TRACE: bashio::fs.directory_exists: /tmp/.bashio
[10:07:14] TRACE: bashio::dns.host
[10:07:14] TRACE: bashio::dns dns.info.host .host
[10:07:14] TRACE: bashio::cache.exists: dns.info.host
[10:07:14] TRACE: bashio::fs.file_exists: /tmp/.bashio/dns.info.host.cache
[10:07:14] TRACE: bashio::cache.exists: dns.info
[10:07:14] TRACE: bashio::fs.file_exists: /tmp/.bashio/dns.info.cache
[10:07:14] TRACE: bashio::api.supervisor GET /dns/info false
[10:07:14] DEBUG: Requested API resource: http://supervisor/dns/info
[10:07:14] DEBUG: Request method: GET
[10:07:14] DEBUG: Request data: {}
[10:07:14] DEBUG: API HTTP Response code: 200
[10:07:14] DEBUG: API Response: {"result": "ok", "data": {"version": "2023.06.2", "version_latest": "2023.06.2", "update_available": false, "host": "172.30.32.3", "servers": [], "locals": ["dns://192.168.50.6", "dns://192.168.50.1"], "mdns": true, "llmnr": true, "fallback": true}}

[10:07:14] TRACE: bashio::jq: {"result": "ok", "data": {"version": "2023.06.2", "version_latest": "2023.06.2", "update_available": false, "host": "172.30.32.3", "servers": [], "locals": ["dns://192.168.50.6", "dns://192.168.50.1"], "mdns": true, "llmnr": true, "fallback": true}}
 .result
[10:07:14] TRACE: bashio::var.true: false
[10:07:14] TRACE: bashio::jq: {"result": "ok", "data": {"version": "2023.06.2", "version_latest": "2023.06.2", "update_available": false, "host": "172.30.32.3", "servers": [], "locals": ["dns://192.168.50.6", "dns://192.168.50.1"], "mdns": true, "llmnr": true, "fallback": true}}
 if .data == {} then empty else .data end
[10:07:14] TRACE: bashio::var.has_value: 
[10:07:14] TRACE: bashio::cache.set: dns.info {"version":"2023.06.2","version_latest":"2023.06.2","update_available":false,"host":"172.30.32.3","servers":[],"locals":["dns://192.168.50.6","dns://192.168.50.1"],"mdns":true,"llmnr":true,"fallback":true}
[10:07:14] TRACE: bashio::fs.directory_exists: /tmp/.bashio
[10:07:15] TRACE: bashio::var.has_value: .host
[10:07:15] TRACE: bashio::jq: {"version":"2023.06.2","version_latest":"2023.06.2","update_available":false,"host":"172.30.32.3","servers":[],"locals":["dns://192.168.50.6","dns://192.168.50.1"],"mdns":true,"llmnr":true,"fallback":true} .host
[10:07:15] TRACE: bashio::cache.set: dns.info.host 172.30.32.3
[10:07:15] TRACE: bashio::fs.directory_exists: /tmp/.bashio
cont-init: info: /etc/cont-init.d/nginx.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun chronograf (no readiness notification)
services-up: info: copying legacy longrun influxdb (no readiness notification)
services-up: info: copying legacy longrun kapacitor (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
[10:07:15] TRACE: bashio::net.wait_for 8889 localhost 9000
[10:07:15] TRACE: bashio::config: envvars|keys
s6-rc: info: service legacy-services successfully started
[10:07:15] TRACE: bashio::addon.config
[10:07:15] INFO: Chronograf is waiting until InfluxDB is available...
[10:07:15] TRACE: bashio::cache.exists: addons.self.options.config
[10:07:15] INFO: Kapacitor is waiting until InfluxDB is available...
[10:07:15] TRACE: bashio::fs.file_exists: /tmp/.bashio/addons.self.options.config.cache
[10:07:15] TRACE: bashio::cache.get: addons.self.options.config
[10:07:15] TRACE: bashio::cache.exists: addons.self.options.config
[10:07:15] TRACE: bashio::fs.file_exists: /tmp/.bashio/addons.self.options.config.cache
[10:07:15] TRACE: bashio::jq: {"auth":true,"reporting":true,"ssl":true,"certfile":"fullchain.pem","keyfile":"privkey.pem","envvars":[],"log_level":"trace"} if (.envvars|keys == null) then
            null
        elif (.envvars|keys | type == "string") then
            .envvars|keys // empty
        elif (.envvars|keys | type == "boolean") then
            .envvars|keys // false
        elif (.envvars|keys | type == "array") then
            if (.envvars|keys == []) then
                empty
            else
                .envvars|keys[]
            end
        elif (.envvars|keys | type == "object") then
            if (.envvars|keys == {}) then
                empty
            else
                .envvars|keys
            end
        else
            .envvars|keys
        end
[10:07:15] INFO: Starting the InfluxDB...

Hi, I've got a very similar issue… did you ever find a solution?