Home Assistant Community Add-on: InfluxDB

I ended up using netdata. It was easy to install on my router and it worked immediately. There is also a built-in netdata sensor integration in Home Assistant: https://www.home-assistant.io/integrations/netdata/
As I remember, I only had to add one rule to the router firewall to enable inter-VLAN communication, so that Home Assistant can get the data from the router even though they are on different VLANs.
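For reference, a single firewall rule like this would do it on OpenWrt. This is only a sketch under assumptions: the zone name "iot" and the Home Assistant IP are made up for illustration, and netdata's default port 19999 is assumed; adjust everything to your own network.

```shell
# Allow the Home Assistant host (on the "iot" zone/VLAN, hypothetical name)
# to reach netdata running on the router itself (no dest zone = input rule).
uci add firewall rule
uci set firewall.@rule[-1].name='Allow-HA-to-netdata'
uci set firewall.@rule[-1].src='iot'
uci set firewall.@rule[-1].src_ip='192.168.2.10'   # Home Assistant host (example)
uci set firewall.@rule[-1].dest_port='19999'       # netdata's default port
uci set firewall.@rule[-1].proto='tcp'
uci set firewall.@rule[-1].target='ACCEPT'
uci commit firewall
/etc/init.d/firewall restart
```

Restricting the rule to the Home Assistant host's IP keeps the rest of the VLAN from reaching the netdata UI.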
The other thing is setting up the netdata sensor, which can seem tricky at first but is well documented in the HA docs.
Once the netdata sensors exist in Home Assistant, their data automatically ends up in InfluxDB and can be visualized in Grafana.
Netdata is said to be more resource-hungry, but I noticed that only while using its UI. When the data is read from HA, the stats seem normal :slight_smile:
Hope this helps.

Hello,
I installed Grafana and InfluxDB (v 3.7.5) via the Supervisor and followed the instructions given in the documentation. But I don't see my sensors and I have the following message in the log: "Cannot connect to InfluxDB due to 'HTTPConnectionPool(host='a0d7b954-influxdb', port=8086): Max retries exceeded with url: /write?db=homeassistant (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x99f749a0>: Failed to establish a new connection: [Errno -3] Try again'))'. Please check that the provided connection details (host, port, etc.) are correct and that your InfluxDB server is running and accessible. Retrying in 60 seconds."
What can I do?
Thanks for the help

Thanks for this! It looks like an interesting alternative. Would you mind sharing your hassio config file for its integration? One of the nice things I saw with collectd is that you could just reuse existing configs and Grafana templates suited for OpenWrt routers, but maybe this also exists for netdata.

Yeah, sure. Here is my netdata config in sensors.yaml:

# Netdata
- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_CPU
  resources:
    system:
      data_group: system.cpu
      element: system
    steal:
      data_group: system.cpu
      element: steal
    softirq:
      data_group: system.cpu
      element: softirq
    irq:
      data_group: system.cpu
      element: irq
    user:
      data_group: system.cpu
      element: user
    nice:
      data_group: system.cpu
      element: nice
    iowait:
      data_group: system.cpu
      element: iowait
      
- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Load
  resources:
    load1:
      data_group: system.load
      element: load1
    load5:
      data_group: system.load
      element: load5
    load15:
      data_group: system.load
      element: load15

- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Uptime
  resources:
    uptime:
      data_group: system.uptime
      element: uptime
      
- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Ram
  resources:
    free:
      data_group: system.ram
      element: free
    used:
      data_group: system.ram
      element: used
    cached:
      data_group: system.ram
      element: cached
    buffers:
      data_group: system.ram
      element: buffers

- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Net
  resources:
    Received:
      data_group: system.net
      element: InOctets
    Sent:
      data_group: system.net
      element: OutOctets
      invert: true
      
- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Memory
  resources:
    Available_Memory:
      data_group: mem.available
      element: MemAvailable

- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Disk
  resources:
    avail:
      data_group: disk_space._
      element: avail
    used:
      data_group: disk_space._
      element: used
    reserved_for_root:
      data_group: disk_space._
      element: reserved_for_root

- platform: netdata
  host: '192.168.1.1'
  port: '19999'
  name: OpenWrt_Disk_Temp
  resources:
    avail:
      data_group: disk_space._tmp
      element: avail
    used:
      data_group: disk_space._tmp
      element: used
    reserved_for_root:
      data_group: disk_space._tmp
      element: reserved_for_root
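If you want to adapt this config to a different router, the valid data_group and element names can be discovered from netdata's REST API directly (this sketch assumes netdata is reachable at 192.168.1.1:19999, as in the config above, so it needs a live instance to run against):

```shell
# List all available charts -- each chart id (e.g. "system.cpu") is a
# valid data_group for the Home Assistant netdata sensor.
curl -s "http://192.168.1.1:19999/api/v1/charts" | grep -o '"id": *"[^"]*"'

# Fetch the latest point for one chart; the "labels" field in the JSON
# response lists the element names (user, nice, iowait, ...).
curl -s "http://192.168.1.1:19999/api/v1/data?chart=system.cpu&points=1&format=json"
```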

Hi there!
I’m trying to migrate my current home_assistant DB from InfluxDB running on a different server to the InfluxDB running now on HA OS.

According to the InfluxDB documentation, one needs to use the influxd backup and influxd restore commands, which both require the two servers to allow remote connections (see here).
I have successfully setup remote connection on my original server and I have now a backup of my database.

The problem is: how can I enable remote access in this add-on, which relies on port 8088 once the bind-address is correctly set in the InfluxDB configuration? This does not seem possible in the add-on configuration, which makes any restore impossible.

EDIT
Of course, I have tried to add

envvars:
  - name: INFLUXDB_BIND_ADDRESS
    value: '0.0.0.0:8088'

in the add-on configuration, but I cannot forward port 8088 from the container to the host, as only ports 80 and 8086 can be configured through the UI.

Thanks for your help!

Did you ever get this working? I looked at your GitHub issue and it referred to an older issue that was also closed. The other issue's owner did post that he got it working though… but not how.

Hello

I am trying to start InfluxDB with no success. The error has something to do with Kapacitor, as you can see in the logs I include below. Of course I tried to reinstall the add-on as well as hassio. I am using it on a Raspberry Pi 3 B. The config is very basic, but I had the same results no matter what I tried:

auth: false
reporting: false
ssl: false
certfile: ''
keyfile: ''
envvars: []
log_level: error


Add-on version: 3.7.9
You are running the latest version of this add-on.
System: Home Assistant OS 5.9 (armv7 / raspberrypi3)
Home Assistant Core: 2020.12.1
Home Assistant Supervisor: 2020.12.7

[cont-init.d] 00-banner.sh: exited 0.
[cont-init.d] 01-log-level.sh: executing…
Log level is set to ERROR
[cont-init.d] 01-log-level.sh: exited 0.
[cont-init.d] create-users.sh: executing…
[cont-init.d] create-users.sh: exited 0.
[cont-init.d] influxdb.sh: executing…
[cont-init.d] influxdb.sh: exited 0.
[cont-init.d] kapacitor.sh: executing…
[cont-init.d] kapacitor.sh: exited 0.
[cont-init.d] nginx.sh: executing…
[cont-init.d] nginx.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
2020/12/22 16:53:15 Using configuration at: /etc/kapacitor/kapacitor.conf
ts=2020-12-22T16:53:15.454+02:00 lvl=error msg="encountered error" service=run err="create server: invalid UUID length: 0"
run: create server: invalid UUID length: 0
[cont-finish.d] executing container finish scripts…
[cont-finish.d] 99-message.sh: executing…
[cont-finish.d] 99-message.sh: exited 0.
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

Thanks
George

Hey @sylar, did you sort this out?
I am looking to do the same, but can't seem to get it working. I have a backup from my previous install but can't find a way to get it imported/restored into the new InfluxDB add-on.

Also running into issues:

Failed to install addon

404 Client Error for http+docker://localhost/v1.40/images/hassioaddons/influxdb-amd64:3.7.9/json: Not Found ("no such image: hassioaddons/influxdb-amd64:3.7.9: No such image: hassioaddons/influxdb-amd64:3.7.9")

Hi,
are there any plans on upgrading to InfluxDB 2.0 soon?
BR


Hi @aetjansen.
Yep, I completed the migration quite a while ago now. The approach I mentioned in my post is the correct way to go, at least for exporting the original database; but since one cannot open the required port in the InfluxDB add-on to perform the migration over the network, I had to perform the import directly from inside the add-on's Docker container.
The main difficulty is finding a way to upload the original backup folder to the container. As far as I remember, I uploaded it to the share folder, which is accessible from all add-ons.

Basically, from a client of the original InfluxDB server, I ran:

influxd backup -portable -database home_assistant -host INFLUXDB_SERVER_IP:8088 /path/to/folder_you_want/backup_home_assistant

And then, from inside the add-on container:

influxd restore -portable -db home_assistant -newdb home_assistant_old -host localhost:8088 ./backup_home_assistant

You just have to find, from inside the container, the path to the folder you uploaded your backup to, which is a bit tricky (sorry, I don't remember it anymore).
With this last command, your backup is restored on the add-on's InfluxDB server into a database named home_assistant_old. Of course, adapt the names to whatever you want.
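Putting the pieces together, the whole flow might look like the sketch below. Several details are assumptions, not verified facts: the add-on container name, the /share mount point inside the container, and all paths; check them on your own system before running anything.

```shell
# 1. On a machine that can reach the ORIGINAL InfluxDB server
#    (its bind-address must be exposed on port 8088 for remote backups):
influxd backup -portable -database home_assistant \
    -host INFLUXDB_SERVER_IP:8088 ./backup_home_assistant

# 2. Copy the backup folder into Home Assistant's /share directory
#    (e.g. via the Samba add-on or scp), then open a shell inside the
#    InfluxDB add-on container on the HA OS host. The container name
#    below is a guess -- verify it with `docker ps | grep influxdb`:
docker exec -it addon_a0d7b954_influxdb /bin/sh

# 3. Inside the container (assuming the host's share folder is mounted
#    at /share), restore into a new database:
influxd restore -portable -db home_assistant -newdb home_assistant_old \
    -host localhost:8088 /share/backup_home_assistant
```

Restoring into a new database name (-newdb) avoids clobbering whatever the add-on has already recorded under home_assistant.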

I hope this helps!


Did someone experience full data loss when upgrading the addon to latest v4.0.0?


Yes, and trying to start fresh, I cannot even create a new database: 401: unable to parse authentication credentials

Oh dear, something really got screwed up here… still no clue what to do. I guess a snapshot restore would be possible, but all data from snapshot time to add-on update time would be lost… DAMN IT! :frowning:

+++++ UPDATE / WARNING +++++

Stay away from v4.0.0, or you might experience the same issue. If you do update, make sure to create at least a partial snapshot of the InfluxDB add-on first. Be smarter than me :wink:

+++++ UPDATE / WARNING +++++


Well, you lived up to your name😎 Sadly, I only found your warning after the same experience. A nice gap in my data will be a reminder going forward!

Me too! Have the 401 error too. Spent the last hour troubleshooting on a fresh installed system. “Glad” I’m not the only one (and I did not do anything wrong).

Problem has been solved in 4.0.1.

Awesome response time! Can confirm that 4.0.1 works. Much obliged!

Wondering how we can avoid such situations in the future. How can we, as normal users, actively participate in testing to help catch faulty releases like this?

Lessons learned:

  1. Don't auto-update (important/critical) add-ons (by @mbuscher)
  2. Don't update instantly unless it's a security fix or an urgently awaited bug fix (admittedly, that "just wait and let others test and maybe fail" strategy is not the nicest one…)
  3. Create a (partial) snapshot before updating, just in case…

Another lesson learned: turn off Auto-Update, at least on critical add-ons!
