Hi all,
I have just mounted my Synology NAS to my RPi where HA is running.
How could I change the InfluxDB configuration so that the data is saved on the NAS disk?
Many thanks for any support you could provide.
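As far as I know there is no documented option in the InfluxDB add-on to move its data directory onto a network share, so take the following only as a sketch of a common workaround: run InfluxDB on the Synology itself (for example via Container Manager) and point Home Assistant’s influxdb integration at it in configuration.yaml. The host address, database name, and secret name below are placeholders for your own values:

influxdb:
  api_version: 1
  host: 192.168.1.50        # IP of the Synology NAS (placeholder)
  port: 8086
  database: homeassistant   # database created on the NAS instance (placeholder)
  username: homeassistant
  password: !secret influxdb_password
  default_measurement: state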
Hi, I can’t find a way to export the InfluxDB database. Running the influxd command from the SSH terminal does not work (it gives a “command not found” error). Can someone help? Thank you
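The influxd binary only exists inside the add-on’s own container, not in the SSH/Terminal add-on, which is why you get “command not found”. A rough sketch of exporting with the InfluxDB 1.x portable backup format, run from a shell with Docker access (e.g. the HAOS host shell, or the SSH add-on with protection mode disabled); the container name addon_a0d7b954_influxdb and the database name homeassistant are assumptions, check docker ps for the real name:

docker exec addon_a0d7b954_influxdb influxd backup -portable -db homeassistant /tmp/influxdb-export
docker cp addon_a0d7b954_influxdb:/tmp/influxdb-export ./influxdb-export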
I managed it 2 years ago, but things may have changed. I now have my database on a separate PVE setup!
Hello Frenck,
I have installed and uninstalled InfluxDB several times without success.
Unfortunately, I’m now at the end of my knowledge; I only have a rudimentary grasp of programming.
But you have already managed to build a lot into your “Wonderful World of Home Assistant”.
A really great project in my opinion, thank you for your time, work and effort!
InfluxDB doesn’t start properly; can you or someone from the Home Assistant group help me? I would be happy to find a solution, thank you!
I’ve made a few attempts with changes in the configuration YAML, without success.
I’ve been stuck with this problem for a few days now.
Here is the log from the start:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting
Add-on: InfluxDB
Scalable datastore for metrics, events, and real-time analytics
Add-on version: 4.8.0
You are running the latest version of this add-on.
System: Home Assistant OS 11.1 (amd64 / generic-x86-64)
Home Assistant Core: 2023.10.5
Home Assistant Supervisor: 2023.10.1
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[19:00:14] INFO: Configuring timezone (Europe/Berlin)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/create-users.sh
[19:00:18] INFO: InfluxDB init process in progress...
[tcp] 2023/11/01 19:00:23 tcp.Mux: Listener at 127.0.0.1:8088 failed failed to accept a connection, closing all listeners - accept tcp 127.0.0.1:8088: use of closed network connection
cont-init: info: /etc/cont-init.d/create-users.sh exited 0
cont-init: info: running /etc/cont-init.d/influxdb.sh
cont-init: info: /etc/cont-init.d/influxdb.sh exited 0
cont-init: info: running /etc/cont-init.d/kapacitor.sh
cont-init: info: /etc/cont-init.d/kapacitor.sh exited 0
cont-init: info: running /etc/cont-init.d/nginx.sh
cont-init: info: /etc/cont-init.d/nginx.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun chronograf (no readiness notification)
services-up: info: copying legacy longrun influxdb (no readiness notification)
services-up: info: copying legacy longrun kapacitor (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
[19:00:24] INFO: Chronograf is waiting until InfluxDB is available...
[19:00:24] INFO: Kapacitor is waiting until InfluxDB is available...
s6-rc: info: service legacy-services successfully started
[19:00:24] INFO: Starting the InfluxDB...
[19:00:29] INFO: Starting Chronograf...
[19:00:29] INFO: Starting the Kapacitor
[Kapacitor ASCII-art startup banner]
2023/11/01 19:00:29 Using configuration at: /etc/kapacitor/kapacitor.conf
time="2023-11-01T19:00:32+01:00" level=info msg="Serving chronograf at http://127.0.0.1:8889" component=server
time="2023-11-01T19:00:32+01:00" level=info msg="Reporting usage stats" component=usage freq=24h reporting_addr="https://usage.influxdata.com" stats="os,arch,version,cluster_id,uptime"
[19:00:33] INFO: Starting NGINX...
I’m getting the exact same logs, stuck at “Starting NGINX…”.
I had to uninstall the add-on completely since it would prevent HA from starting correctly on reboot (timeout errors everywhere in my logs, CPU running very high).
I’m sure it’s just a temporary snag. I’ll be waiting for a fix since this looks like the perfect solution for the custom component I’m working on.
If there is any info I can provide I’ll be happy to help.
UPDATE: I managed to get it started after updating to the latest version of the Supervisor (2023.11.2, from 2023.10.1). However, CPU usage remains high and my host (RPi 3) keeps crashing.
I’m also stuck at “Starting NGINX…”. Running the latest HAOS and Core.
Any update? I have the same problem too.
Do you see the “Open Web UI” button on the add-on’s main page?
Hello, for the past two weeks my InfluxDB hasn’t been working anymore; InfluxDB crashes all the time.
I uninstalled InfluxDB and installed it again. Now everything works fine again. But now I want to restore my data back into InfluxDB, because I have 3 years of data.
I have a full backup as an influxdb.tar.gz file (800 MB of data), but I don’t know how to restore the data into the new InfluxDB. I can only import a *.CSV file.
Please, can someone help me restore my InfluxDB?
Regards,
Dolby
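How the restore works depends on how the backup was made. If the .tar.gz is a Home Assistant backup of the add-on, restoring it through Settings > System > Backups should bring the data directory back. If it is a portable dump made with influxd backup -portable, InfluxDB 1.x can restore it with influxd restore; a rough sketch, run against the add-on container (the container name, database name, and paths are assumptions):

docker cp ./influxdb-backup addon_a0d7b954_influxdb:/tmp/influxdb-backup
docker exec addon_a0d7b954_influxdb influxd restore -portable -db homeassistant -newdb homeassistant_old /tmp/influxdb-backup

A portable restore refuses to write into a database that already exists, hence the -newdb; the data can then be copied across with an InfluxQL SELECT ... INTO if needed.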
Problem solved with HA update
Franck
I appreciate all you do here, thanks.
I wonder if you could help me with a problem regarding my InfluxDB size?
After 4 years it’s over 15 GB because I never set a retention policy, nor did I exclude anything. My backups are 18 GB and take 2 days to upload to Google Drive.
I’ve now set a retention policy, added includes for just the entities I want, and have also set about deleting from the database everything I don’t want.
But how can I force the database files to shrink? I’ve done lots of searching and come up with nothing apart from others asking the same question (no answers), and something on an InfluxData site about ‘compacting’ that uses Unix-like commands, but I don’t know how that translates to InfluxDB running as an add-on in HA.
I’d be grateful if you could help.
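As far as I can tell, InfluxDB 1.x (OSS) has no user-facing “compact now” command; disk space mostly comes back when whole shards are dropped, either because the retention policy expires them or because you drop measurements yourself and the engine compacts the remaining files in the background. A rough sketch of the kind of InfluxQL involved, run through the influx client inside the add-on container (container name, database name, and the example measurement are assumptions; add -username/-password if authentication is enabled):

docker exec addon_a0d7b954_influxdb influx -database homeassistant -execute 'ALTER RETENTION POLICY "autogen" ON "homeassistant" DURATION 365d SHARD DURATION 7d DEFAULT'
docker exec addon_a0d7b954_influxdb influx -database homeassistant -execute 'DROP MEASUREMENT "kWh"'
docker exec addon_a0d7b954_influxdb influx -database homeassistant -execute 'SHOW SHARDS'

Expect the on-disk size to shrink gradually rather than immediately; a backup taken right after the deletes may still be large until compaction and shard expiry have run.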
Is it possible to access InfluxDB and/or the data somehow?
I have the Terminal add-on, but it runs in its own Docker container. Usually add-ons expose everything to that container, but for InfluxDB I did not find anything…
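One route that does not require getting into the add-on container is the InfluxDB HTTP API on port 8086, assuming that port is exposed on the Network page of the add-on configuration. A rough sketch with curl (hostname, database, and credentials are placeholders for your own values):

curl -G "http://homeassistant.local:8086/query" \
  -u myuser:mypassword \
  --data-urlencode "db=homeassistant" \
  --data-urlencode "q=SHOW MEASUREMENTS"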
Hi folks, I realized that more people have randomly had the same problem in previous years, but I cannot find a solution. My InfluxDB container on my Synology failed, so I decided to run it again as an add-on, but it refuses to start, with this found in the log:
tcp.Mux: Listener at 127.0.0.1:8088 failed failed to accept a connection, closing all listeners - accept tcp 127.0.0.1:8088: use of closed network connection
Reinstalling the add-on does not solve it. Are there any remnants of the container that I can delete manually to get a fresh install, or do you know how to solve this problem in any other way?
And how can a user change the ID? I have been stuck on this bug for months; updates have not solved it.
@frenck it looks like a lot of people have the same problem and we are not getting any answers, neither here nor on GitHub. Please have a look. Thank you.
Same problem here. The log says:
tcp.Mux: Listener at 127.0.0.1:8088 failed failed to accept a connection, closing all listeners - accept tcp 127.0.0.1:8088: use of closed network connection
And it seems to reboot the container in a loop, which causes high CPU usage. The load average is about 12 on an RPi 4 (4 GB).
Hi !
Same problem, same message.
Some Grafana dashboards are fine, others are not…
I’ve checked the config of the add-on and… wow, the SSL option was checked…
I unchecked it.
Restarted and… all fine.
I don’t know why the SSL option suddenly changed to ON…
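For anyone checking the same thing: in the add-on configuration the relevant options should look roughly like this when SSL is off (option names as I understand them for this add-on, so treat them as an assumption; the certificate files only matter when ssl is true):

ssl: false
certfile: fullchain.pem
keyfile: privkey.pem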
I’m not sure since when exactly, but I assume since the last update I’ve been getting an error at startup:
[18:22:51] INFO: Starting the InfluxDB...
fatal error: index out of range
runtime: panic before malloc heap initialized
runtime stack:
runtime.throw(0x10efa0e, 0x12)
/usr/local/go/src/runtime/panic.go:774 +0x54 fp=0x7fcc457e20 sp=0x7fcc457df0 pc=0x3e244
runtime.panicCheck1(0x5aa00, 0x10efa0e, 0x12)
/usr/local/go/src/runtime/panic.go:21 +0xc4 fp=0x7fcc457e50 sp=0x7fcc457e20 pc=0x3c6c4
runtime.goPanicIndexU(0x2920782028202852, 0xb56604)
/usr/local/go/src/runtime/panic.go:78 +0x3c fp=0x7fcc457ea0 sp=0x7fcc457e50 pc=0x3c82c
runtime.moduledataverify1(0x2bd8760)
/usr/local/go/src/runtime/symtab.go:454 +0x510 fp=0x7fcc457fb0 sp=0x7fcc457ea0 pc=0x5aa00
runtime.moduledataverify()
/usr/local/go/src/runtime/symtab.go:432 +0x34 fp=0x7fcc457fd0 sp=0x7fcc457fb0 pc=0x5a4c4
runtime.schedinit()
/usr/local/go/src/runtime/proc.go:540 +0x64 fp=0x7fcc458040 sp=0x7fcc457fd0 pc=0x40bf4
runtime.rt0_go(0x7fcc458c75, 0x0, 0x7fcc458c7d, 0x7fcc458ca6, 0x7fcc458cc7, 0x7fcc458cda, 0x7fcc458cf9, 0x7fcc458d14, 0x7fcc458d37, 0x7fcc458d5c, ...)
/usr/local/go/src/runtime/asm_arm64.s:70 +0xb8 fp=0x7fcc458070 sp=0x7fcc458040 pc=0x6a6e8
I’ve tried setting some environment variables related to heap usage and memory:
- name: INFLUXDB_DATA_MAX_INDEX_LOG_FILE_SIZE
  value: "1m"
- name: INFLUXDB_DATA_SERIES_ID_SET_CACHE_SIZE
  value: "0"
but without success. Any suggestions would be appreciated.
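In case it helps anyone trying the same thing: in this add-on such variables are passed through the envvars option on the Configuration tab (option name as I understand it for recent versions of the add-on, so treat it as an assumption), e.g.:

envvars:
  - name: INFLUXDB_DATA_MAX_INDEX_LOG_FILE_SIZE
    value: "1m"
  - name: INFLUXDB_DATA_SERIES_ID_SET_CACHE_SIZE
    value: "0"

That said, this particular panic happens before the Go heap is even initialized, so I’m not convinced it is a memory issue at all; it may rather point to a corrupted binary or an architecture mismatch.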
Hello!
I seem to be having the same issue here… Is there a way to tweak settings so that the container can start?
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting
-----------------------------------------------------------
Add-on: InfluxDB
Scalable datastore for metrics, events, and real-time analytics
-----------------------------------------------------------
Add-on version: 5.0.1
You are running the latest version of this add-on.
System: Home Assistant OS 13.1 (armv7 / raspberrypi4)
Home Assistant Core: 2024.9.3
Home Assistant Supervisor: 2024.10.2
-----------------------------------------------------------
Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[21:57:18] INFO: Configuring timezone (America/Recife)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/create-users.sh
cont-init: info: /etc/cont-init.d/create-users.sh exited 0
cont-init: info: running /etc/cont-init.d/influxdb.sh
[21:57:20] INFO: Reporting of usage stats to InfluxData is disabled.
cont-init: info: /etc/cont-init.d/influxdb.sh exited 0
cont-init: info: running /etc/cont-init.d/kapacitor.sh
cont-init: info: /etc/cont-init.d/kapacitor.sh exited 0
cont-init: info: running /etc/cont-init.d/nginx.sh
cont-init: info: /etc/cont-init.d/nginx.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun chronograf (no readiness notification)
services-up: info: copying legacy longrun influxdb (no readiness notification)
services-up: info: copying legacy longrun kapacitor (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
s6-rc: info: service legacy-services successfully started
[21:57:22] INFO: Kapacitor is waiting until InfluxDB is available...
[21:57:22] INFO: Chronograf is waiting until InfluxDB is available...
[21:57:22] INFO: Starting the InfluxDB...
runtime: out of memory: cannot allocate 8192-byte block (560922624 in use)
fatal error: out of memory
goroutine 70 [running]:
runtime.throw(0xfebdde, 0xd)
/usr/local/go/src/runtime/panic.go:774 +0x5c fp=0x4963114 sp=0x4963100 pc=0x41644
runtime.(*mcache).refill(0xb6ed9008, 0x23)
/usr/local/go/src/runtime/mcache.go:140 +0xfc fp=0x4963128 sp=0x4963114 pc=0x262ec
runtime.(*mcache).nextFree(0xb6ed9008, 0xaaf7323, 0x1, 0x191, 0xd01ac0)
/usr/local/go/src/runtime/malloc.go:854 +0x7c fp=0x4963148 sp=0x4963128 pc=0x1b0f4
runtime.mallocgc(0x100, 0x0, 0x0, 0xc9e74c)
/usr/local/go/src/runtime/malloc.go:1022 +0x7a0 fp=0x49631b0 sp=0x4963148 pc=0x1ba40
runtime.rawbyteslice(0xfa, 0x0, 0x0, 0x0)
/usr/local/go/src/runtime/string.go:272 +0x84 fp=0x49631cc sp=0x49631b0 pc=0x5f908
runtime.stringtoslicebyte(0x0, 0xbe981800, 0xfa, 0xf0, 0xbeba2510, 0x1)
/usr/local/go/src/runtime/string.go:161 +0xa4 fp=0x49631ec sp=0x49631cc pc=0x5f354
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Cache).WriteMulti(0x4f08800, 0x491be40, 0x4963510, 0x1eb5b08)
/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:343 +0x268 fp=0x49632d8 sp=0x49631ec pc=0xc9e708
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*CacheLoader).Load.func1(0x4963594, 0x688c390, 0x496358c, 0x4f08800, 0x0, 0x0)
/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:747 +0x4d4 fp=0x4963568 sp=0x49632d8 pc=0xd13240
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*CacheLoader).Load(0x688c390, 0x4f08800, 0x1, 0x1)
/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/cache.go:758 +0x88 fp=0x496359c sp=0x4963568 pc=0xca00bc
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Engine).reloadCache(0x65bfe00, 0x0, 0x0)
/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/engine.go:2408 +0x1d4 fp=0x49636ec sp=0x496359c pc=0xccfbe4
github.com/influxdata/influxdb/tsdb/engine/tsm1.(*Engine).Open(0x65bfe00, 0x448dc00, 0x1ed86d8)
/go/src/github.com/influxdata/influxdb/tsdb/engine/tsm1/engine.go:754 +0x28c fp=0x4963734 sp=0x49636ec pc=0xcc69b4
github.com/influxdata/influxdb/tsdb.(*Shard).Open.func1(0x45ac580, 0x0, 0x0)
/go/src/github.com/influxdata/influxdb/tsdb/shard.go:344 +0x298 fp=0x4963a58 sp=0x4963734 pc=0x5d12c4
github.com/influxdata/influxdb/tsdb.(*Shard).Open(0x45ac580, 0x448dc80, 0x63a9b00)
/go/src/github.com/influxdata/influxdb/tsdb/shard.go:355 +0x1c fp=0x4963a84 sp=0x4963a58 pc=0x5b9dac
github.com/influxdata/influxdb/tsdb.(*Store).loadShards.func1(0x45cc2c0, 0x47ac000, 0x4946e10, 0x45cc300, 0xfd51f0, 0x44cad80, 0x4756200, 0x4522d54, 0x9, 0x45a061e, ...)
/go/src/github.com/influxdata/influxdb/tsdb/store.go:404 +0x4c4 fp=0x4963fb4 sp=0x4963a84 pc=0x5d23e4
runtime.goexit()
/usr/local/go/src/runtime/asm_arm.s:868 +0x4 fp=0x4963fb4 sp=0x4963fb4 pc=0x73610
created by github.com/influxdata/influxdb/tsdb.(*Store).loadShards
/go/src/github.com/influxdata/influxdb/tsdb/store.go:362 +0xb64
goroutine 1 [chan receive, 1 minutes]:
github.com/influxdata/influxdb/tsdb.(*Store).loadShards(0x47ac000, 0x0, 0x0)
/go/src/github.com/influxdata/influxdb/tsdb/store.go:421 +0x115c
github.com/influxdata/influxdb/tsdb.(*Store).Open(0x47ac000, 0x0, 0x0)
/go/src/github.com/influxdata/influxdb/tsdb/store.go:221 +0x1a4
github.com/influxdata/influxdb/cmd/influxd/run.(*Server).Open(0x477a0a0, 0x46a9cac, 0x477a0a0)
/go/src/github.com/influxdata/influxdb/cmd/influxd/run/server.go:444 +0x894
github.com/influxdata/influxdb/cmd/influxd/run.(*Command).Run(0x45316e0, 0x44900f0, 0x0, 0x0, 0x0, 0x44900f0)
/go/src/github.com/influxdata/influxdb/cmd/influxd/run/command.go:149 +0x7e4
main.(*Main).Run(0x46a9f8c, 0x44900f0, 0x0, 0x0, 0x2b7a8f8, 0x448e030)
/go/src/github.com/influxdata/influxdb/cmd/influxd/main.go:81 +0x104
main.main()
/go/src/github.com/influxdata/influxdb/cmd/influxd/main.go:45 +0x140
goroutine 18 [syscall, 1 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:147 +0x130
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x14
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:29 +0x30
goroutine 8 [select]:
github.com/influxdata/influxdb/vendor/go.opencensus.io/stats/view.(*worker).start(0x46d24c0)
/go/src/github.com/influxdata/influxdb/vendor/go.opencensus.io/stats/view/worker.go:154 +0xb0
created by github.com/influxdata/influxdb/vendor/go.opencensus.io/stats/view.init.0
/go/src/github.com/influxdata/influxdb/vendor/go.opencensus.io/stats/view/worker.go:32 +0x48
goroutine 9 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xa6c19f98, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:184 +0x44
internal/poll.(*pollDesc).wait(0x452c5b4, 0x72, 0x0, 0x0, 0xfe25d8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x30
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0x452c5a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1a8
net.(*netFD).accept(0x452c5a0, 0x0, 0xa8001, 0x0)
/usr/local/go/src/net/fd_unix.go:238 +0x20
net.(*TCPListener).accept(0x4746fb0, 0x4756780, 0x40000000, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:139 +0x20
net.(*TCPListener).Accept(0x4746fb0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock.go:261 +0x3c
github.com/influxdata/influxdb/tcp.(*Mux).Serve(0x4756780, 0x1ec3ff0, 0x4746fb0, 0x4746fb0, 0x0)
/go/src/github.com/influxdata/influxdb/tcp/mux.go:75 +0x64
created by github.com/influxdata/influxdb/cmd/influxd/run.(*Server).Open
/go/src/github.com/influxdata/influxdb/cmd/influxd/run/server.go:395 +0x1f0
[21:58:41] WARNING: InfluxDB crashed, halting add-on
s6-rc: info: service legacy-services: stopping
[21:58:41] INFO: InfluxDB stopped, restarting...
[21:58:41] INFO: Kapacitor stopped, restarting...
[21:58:41] INFO: Chronograf stopped, restarting...
[21:58:41] INFO: NGINX stopped, restarting...
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
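For what it’s worth, the trace above shows InfluxDB running out of memory while reloading the TSM cache from the WAL (CacheLoader.Load) on a 32-bit ARM build. A hedged sketch of what is often suggested for small boards: lower the cache limits via the add-on’s envvars option (option name and values are assumptions; I believe the default cache-max-memory-size is around 1 GB, which is a lot for a Pi):

envvars:
  - name: INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE
    value: "256m"
  - name: INFLUXDB_DATA_CACHE_SNAPSHOT_MEMORY_SIZE
    value: "25m"

If it still cannot finish replaying the WAL with lower limits, the remaining options get more drastic (for example moving the oversized WAL files aside, at the cost of the points that were only in the WAL), so I would try the limits first.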
Would anyone know how to move away from InfluxDB and use MySQL in this state?
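In case the goal is to move the Home Assistant recorder (history) database rather than the long-term data in InfluxDB (the two are separate, so this does not migrate the Influx data), the recorder can be pointed at MariaDB/MySQL with db_url in configuration.yaml. A sketch assuming the official MariaDB add-on; the host and credentials are placeholders otherwise:

recorder:
  db_url: mysql://homeassistant:password@core-mariadb/homeassistant?charset=utf8mb4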