Unreliable InfluxDB size sensor

Sorry, I am not sure if I understand… I use a Home Assistant user to connect to Mosquitto.


Do I need to set it up like the above? Why?

And in the addon docs:
Option: logins (optional)
A list of local users that will be created with username and password. You don’t need to do this because you can use Home Assistant users too, without any configuration. If a local user is specifically desired:

logins:
  - username: user
    password: passwd

What am I missing?

Yes you can, but you specifically cannot use the username “homeassistant”.

https://github.com/home-assistant/addons/blob/5d21f08245dd0fac4183169fe43848c05547c401/mosquitto/DOCS.md

Thanks for the retain flag suggestion (adding -r flag)

I did not use that username…

Hi!
I can’t make the Terminal & SSH add-on work. I have tried the file creation approach and the MQTT one, but it seems that nothing happens.
I don’t have an option to turn off Protection mode.

config:

log

Thank you for your help!

Turn on advanced mode in your profile.

That add-on won’t work. In the documentation it says

Known issues and limitations

  • This add-on will not enable you to install packages or do anything as root. This is not working with Home Assistant.

Use the SSH & Web Terminal add-on found here: GitHub - hassio-addons/addon-ssh: SSH & Web Terminal - Home Assistant Community Add-ons

BTW, @tom_l and @erkr thank you for the tips for running the MQTT publish as part of the ssh add-on, works great!

1 Like

Doh. Good catch. I missed that they were not using the Web terminal SSH add-on version.

Hi,
Previously in this thread I proposed publishing the InfluxDB size via MQTT. I just want to add how to publish it via a command line sensor instead, for those that don’t want to run an MQTT server. Please note this procedure is more advanced.

  • The SSH & Web Terminal add-on is still needed (with protection mode off)
  • This time we don’t create the init command in that add-on!
  • You first need to create authorization keys to enable passwordless SSH access. This is explained in: SSH'ing from a command line sensor or shell command
  • The procedure doesn’t mention that you also have to add the public key to the authorized_keys in the SSH & Web Terminal config (see the config sketch further below)!
  • If SSH login works without a password, you can create a command line sensor to read the InfluxDB size like this:
  - platform: command_line
    name: InfluxDB Size
    unique_id: influxdb_size
    command: ssh <user>@192.168.178.47 -i /config/.ssh/id_rsa 'sudo docker exec addon_a0d7b954_influxdb du -s /data/influxdb/data/homeassistant'
    value_template: "{{ (value.split('\t')[0]|int(0)/1000)|round(1) }}" 
    unit_of_measurement: 'MB'
    scan_interval: 300  

Check the log of your SSH & Web Terminal add-on if there are issues.
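
For reference, the relevant part of the SSH & Web Terminal add-on configuration could look roughly like this. This is only a sketch: the exact layout depends on your add-on version, and the username and key string are placeholders for your own values.

ssh:
  username: <user>
  authorized_keys:
    - "ssh-rsa AAAA... key-for-the-command-line-sensor"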

2 Likes

Each add-on has a hostname, shown on the Info tab of the config panel for that add-on, which can be used to talk to it from HA and other add-ons. You don’t have to expose SSH on the host and go out over your LAN to talk to it from HA. Just use this instead and talk to it directly without leaving the Docker network:

ssh <user>@a0d7b954-ssh ...

Plus your YAML can then simply be copied and pasted by other users that want to do the same, since the hostname of an add-on is the same on all systems, whereas everyone’s LAN subnet and chosen IP for HA are different.

3 Likes

@CentralCommand Thanks for the tip, didn’t realize that, but it indeed works fine. Changed it into:

command: ssh <user>@homeassistant -i /config/.ssh/id_rsa 'sudo docker exec addon_a0d7b954_influxdb du -s /data/influxdb/data/homeassistant'

1 Like

I found this thread very helpful in setting up my DB size sensor.

Perhaps just a few notes on what I had to do to get it working as a noob:

In the SSH & Web Terminal add-on, you have to put the init command into the init_commands field. You don’t use the ‘- >-’ part if doing it like this:

    while [ 1 = 1 ]; do docker exec addon_a0d7b954_influxdb du -s /data/influxdb/data/homeassistant | mosquitto_pub -t home-assistant/sensor/dbsize -r -u <user> -P <password> -l && sleep 300; done &

Remember to replace the username and password with your MQTT broker username and password. Also make sure the DB name in the command matches yours; in my case the DB was called home_assistant, not homeassistant.
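
If you’re unsure what your database folder is called, one way to check (assuming the same container name and data path as in the command above, run from the SSH & Web Terminal add-on with protection mode off) is:

    docker exec addon_a0d7b954_influxdb ls /data/influxdb/data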

The whole entry that goes into configuration.yaml looks like this:

mqtt:
  sensor:
    - name: InfluxDB DB Size
      unit_of_measurement: "MB"
      icon: hass:chart-line
      state_topic: "home-assistant/sensor/dbsize"
      value_template: "{{ (value.split('\t')[0]|int(0)/1000)|round(3) }}"

That worked for me.

3 Likes

So are you actually SSH-ing from your HA back into your HA? Why is that necessary if everything is local already?

I had this set up but it never worked; now I’m finally following up and trying to stay on the command line sensor path instead of writing things to a file or MQTT (keep it as simple as possible!):

sensor:
  - platform: command_line
    name: InfluxDB size (homeassistant)
    unique_id: xxx-xxx
    scan_interval: 3600
    command_timeout: 30
    command: "docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/homeassistant | cut -f1"
    unit_of_measurement: MB
    value_template: "{{ value }}"

Sensor output is: (empty/nothing)

Running the command manually in the SSH addon with protection mode off gives:
5621.

Makes sense, as the command line sensor runs in the HA container, which does not see that folder (same for /backup, which is not accessible to HA).

So how to make this work?

  • …without SSH-ing into localhost (giving HA access to the whole system without authentication still does not feel like a good idea from a security perspective - this way it can “break out of its jail”).
  • …without using ✔️🏃Run On Startup.d
  • …without constantly running a cron which stores the output of the du -sh command to a file which is then read by HA.

All of that is way too complicated. Sounds like there is one pill I have to swallow - unless there’s another way. Maybe?

1 Like

As soon as I use the init_commands, after restarting the add-on I constantly get plenty of these:

Error: The connection was refused.
Connection error: Connection Refused: not authorised.
Error: The connection was refused.
Connection error: Connection Refused: not authorised.
Error: The connection was refused.
Connection error: Connection Refused: not authorised.

:arrow_right::question::question::question:

@sender I saw in Unreliable InfluxDB size sensor - #39 by sender that you ran into the same issue. How did you fix this?

@erkr might have an idea too I guess.

UPDATE:
Found the error. I can strongly recommend putting the MQTT username and password in single quotes, so make sure to use ...-u '<username>' -P '<password>'... :white_check_mark:
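
For completeness, the earlier init command with that quoting applied would look something like this (credentials are placeholders):

    while [ 1 = 1 ]; do docker exec addon_a0d7b954_influxdb du -s /data/influxdb/data/homeassistant | mosquitto_pub -t home-assistant/sensor/dbsize -r -u '<username>' -P '<password>' -l && sleep 300; done &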


What would the SSH & Web Terminal add-on entry need to look like if I want it to do other stuff too (always also sending the output to MQTT)?

Like:

  • du for /data/influxdb/data/_internal (also in InfluxDB addon)
  • a simple du -shm /backup (for getting the size of the backup folder - not related to InfluxDB at all)?

Is this supposed to work?

while [ 1 = 1 ]; do du -shm /backup | cut -f1 | mosquitto_pub -t homeassistant/sensor/System/foldersize/backup -r -u <username> -P <password> -l && docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/homeassistant | cut -f1 | mosquitto_pub -t homeassistant/sensor/InfluxDB/dbsize/homeassistant -r -u <username> -P <password> -l && docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/_internal | cut -f1 | mosquitto_pub -t homeassistant/sensor/InfluxDB/dbsize/_internal -r -u <username> -P <password> -l && docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/chronograf | cut -f1 | mosquitto_pub -t homeassistant/sensor/InfluxDB/dbsize/chronograf -r -u <username> -P <password> -l && sleep 900; done &

Because I’m not sure

  • whether multiple commands work
  • what the -l does

Update:
I managed to get this working too. It’s possible to set multiple init_commands (using the UI or the YAML editor). I decided to set four individual ones: three for InfluxDB sizes and one for /backup (another use case). I will need to monitor system utilization, but based on @tom_l 's intensive testing back then I guess it should be fine.
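
For anyone wanting to copy this, a sketch of what those separate init_commands could look like in the add-on’s YAML config (MQTT credentials are placeholders; topics, paths and the 900 s interval are taken from the one-liner above; the ‘>-’ folded style only applies when editing the YAML directly, not the UI field):

init_commands:
  - >-
    while [ 1 = 1 ]; do du -shm /backup | cut -f1 | mosquitto_pub -t homeassistant/sensor/System/foldersize/backup -r -u '<username>' -P '<password>' -l && sleep 900; done &
  - >-
    while [ 1 = 1 ]; do docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/homeassistant | cut -f1 | mosquitto_pub -t homeassistant/sensor/InfluxDB/dbsize/homeassistant -r -u '<username>' -P '<password>' -l && sleep 900; done &
  - >-
    while [ 1 = 1 ]; do docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/_internal | cut -f1 | mosquitto_pub -t homeassistant/sensor/InfluxDB/dbsize/_internal -r -u '<username>' -P '<password>' -l && sleep 900; done &
  - >-
    while [ 1 = 1 ]; do docker exec addon_a0d7b954_influxdb du -shm /data/influxdb/data/chronograf | cut -f1 | mosquitto_pub -t homeassistant/sensor/InfluxDB/dbsize/chronograf -r -u '<username>' -P '<password>' -l && sleep 900; done &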

Hi all, after updating a lot of add-ons last evening (MQTT, SSH, etc.) I found this morning that my entire MQTT entity was missing from configuration.yaml.

This was totally gone:

mqtt:
  sensor:
    - name: InfluxDB DB Size
      unit_of_measurement: "MB"
      icon: hass:chart-line
      state_topic: "home-assistant/sensor/dbsize"
      value_template: "{{ (value.split('\t')[0]|int(0)/1000)|round(3) }}"

Does anyone know how or why this could have happened?!

I had to use the bucket ID instead of the bucket name to get it to work with InfluxDBv2, in case it helps anyone else.

docker exec addon_47c55538_influxdbv2 du -s /data/influxdb/data/f83e928d4e035d3e
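
In case it helps, a sketch of the matching command line sensor for InfluxDB 2.x, reusing the SSH approach and value template from earlier in the thread (container name and bucket ID as in the command above; hostname, user and key path are whatever worked for you before):

  - platform: command_line
    name: InfluxDB v2 Size
    unique_id: influxdb_v2_size
    command: ssh <user>@homeassistant -i /config/.ssh/id_rsa 'sudo docker exec addon_47c55538_influxdbv2 du -s /data/influxdb/data/f83e928d4e035d3e'
    value_template: "{{ (value.split('\t')[0]|int(0)/1000)|round(1) }}"
    unit_of_measurement: 'MB'
    scan_interval: 300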

InfluxDB 2.x has scrapers which could be used for this purpose.

A scraper which pulls data from the internal metrics endpoint (that is: http://${influxdb}/metrics) stores the shard size in the storage_tsm_files_disk_bytes metric. The tricky part is that this metric has the bucket and shard IDs as labels, so in order to get the total size of a bucket, you have to sum up all the respective shard sizes at a given timestamp.

The data seems reliable to me, and here’s how it looks with a “Reduce row” transformation in Grafana (the values correspond to the actual file sizes on disk):

The same approach could probably also be used for a sensor with a query like this: How to monitor the disk space used by influxdb databases? - #3 by fercasjr - influxdb - InfluxData Community Forums

This query might help someone

from(bucket: "scraper_influxdb")
  |> range(start: -30s)
  |> filter(fn: (r) => r["_measurement"] == "storage_tsm_files_disk_bytes")
  |> filter(fn: (r) => r["bucket"] == "bdd22d00441cd90b")
  |> drop(columns: ["_field", "_measurement", "engine", "id", "path", "walPath"])
  |> aggregateWindow(every: v.windowPeriod, fn: sum, createEmpty: false)
  |> group(columns: ["bucket", "_measurement"], mode:"by")
  |> yield(name: "sum")

Being fussy, there is probably a small correction needed to some of the code above related to using du -s xxxxxxxxx to measure the disk usage, i.e.:

"{{ (value.split('\t')[0]|int(0)/1000)|round(1) }}"

Should probably be corrected to :

"{{ (value.split('\t')[0]|int(0)/1024)|round(1) }}"

Unless you have something non-standard in your environment variables, du -s will give the number of “blocks” that are used on the disk. By default, a block will usually be 1024 bytes (but not always), so effectively this is a measure of the disk usage in KB. In addition, when measuring disk usage in MB it is usually a binary measurement, where there are 1024 bytes in a KB and 1024 KB in a MB. So to convert the du -s number to MB we need to divide by 1024, which gives a number in binary MB.

Being fussy, even if you want your MB to mean 1,000,000 bytes (which is less conventional but sometimes referred to as decimal notation), you would still NOT divide by 1000 to get MB. You would need to multiply the number of blocks by 1024 (to get bytes) and then divide by 1,000,000 to get decimal MB, which is a slightly different answer again.
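
As a worked example with made-up numbers, if du -s reports 102400 blocks:

    102400 blocks x 1024 bytes/block = 104,857,600 bytes
    104,857,600 / 1024 / 1024 = 100.0 binary MB (MiB)
    104,857,600 / 1,000,000 = 104.9 decimal MB
    102400 / 1000 = 102.4 (what the original template reports, which matches neither)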

Note: decimal MB is most commonly used by HDD manufacturers because it makes their capacity look as big as possible in the marketing material. But on the computer, KB, MB, GB and TB are mostly binary numbers where 1024 is the multiple. This is one of the reasons why a 1 TB drive comes up short of 1 TB when you put it in your computer and the OS reports its size: the OS measures using the binary definition (multiples of 1024), but the HDD manufacturers like the bigger decimal definition (multiples of 1000).

If your environment is set up a little differently and the default “block size” for du is different, you can check this with:

du -sb

which will give you the number of bytes of disk used. If you divide this number by the number from

du -s

it will confirm the block size, and thus the factor you need to multiply by to convert to bytes. And if it is anything other than 1024, you will need to adjust your maths a bit to better understand the actual disk usage.
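
For example, checked against the InfluxDB folder used earlier in the thread (same container name and path; your numbers will obviously differ):

    docker exec addon_a0d7b954_influxdb du -sb /data/influxdb/data/homeassistant   # apparent size in bytes
    docker exec addon_a0d7b954_influxdb du -s /data/influxdb/data/homeassistant    # disk usage in blocks

Keep in mind that, at least with GNU du, -b implies --apparent-size, so the ratio of the two numbers is only an approximation of the block size; sparse or partially filled blocks can pull it well away from 1024.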

1 Like

I get 12167138/14888 = 817.24 which does not seem right.