VictoriaMetrics Add-on for long-term storage and data source for Grafana

Yes :slight_smile:

What I meant in my comment is that Home Assistant has two integrations: an InfluxDB integration, and a Prometheus integration. I had a better result using the Prometheus integration with VictoriaMetrics, compared to using the InfluxDB integration. Prometheus has a nicer data model than InfluxDB.

I’m a bit puzzled by the instructions. I’ve installed Victoria Metrics and the instructions then say:

Add influxdb integration to your homeassistant config (using the option measurement_attr: entity_id is recommended)

Firstly, there’s no such integration in the UI; if you try to add it, you get the message:

This device cannot be added from the UI
You can add this device by adding it to your ‘configuration.yaml’. See the documentation for more information.

However, there is an InfluxDB add-on? Which should I use, and how? Also, why do I need InfluxDB at all? Isn’t VictoriaMetrics its own database?

Lots of good info here, and I am thinking of switching from InfluxDB to VictoriaMetrics.
I wonder if someone can share their prometheus.yml?
Thanks

@PedroKTFC

The add-ons are the databases themselves; you only need one of them (in this case VictoriaMetrics). But you also need an integration that pushes the data to the add-on, and for both databases that is the InfluxDB integration.
The message you see in the UI means this integration cannot be set up from the UI; you have to add it manually to your configuration.yaml.

Here is an example; you can also find it in the docs of the VictoriaMetrics add-on:

influxdb:
  api_version: 1
  host: <<<ADD-ON HOSTNAME FROM ADD-ON PAGE>>>
  port: 8428
  max_retries: 3
  measurement_attr: entity_id
  tags_attributes:
    - friendly_name
    - unit_of_measurement
  ignore_attributes:
    - icon
    - source
    - options
    - editable
    - min
    - max
    - step
    - mode
    - marker_type
    - preset_modes
    - supported_features
    - supported_color_modes
    - effect_list
    - attribution
    - assumed_state
    - state_open
    - state_closed
    - writable
    - stateExtra
    - event
    - friendly_name
    - device_class
    - state_class
    - ip_address
    - device_file
    - unit_of_measurement
    - unitOfMeasure
  include:
    domains:
      - sensor
      - binary_sensor
      - light
      - switch
      - cover
      - climate
      - input_boolean
      - input_select
      - number
      - lock
      - weather
  exclude:
    entity_globs:
      - sensor.clock*
      - sensor.date*
      - sensor.glances*
      - sensor.time*
      - sensor.uptime*
      - sensor.dwd_weather_warnings_*
      - weather.weatherstation
      - binary_sensor.*_smartphone_*
      - sensor.*_smartphone_*
      - sensor.adguard_home_*
      - binary_sensor.*_internet_access

I highly recommend not including all your sensors; pick only the entities (or domains) you want to keep in long-term storage, so the database doesn’t grow as fast and you don’t store garbage you will never need in the future.


Is it possible to secure the VM with a login and password?
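
For what it’s worth, single-node VictoriaMetrics has built-in basic-auth flags, so one option is to pass them through the add-on’s additional arguments (a sketch; whether the add-on exposes these flags and how your clients authenticate is something to verify yourself):

-httpAuth.username=homeassistant -httpAuth.password=<choose-a-strong-password>

Any client that writes or queries data (the InfluxDB integration, the Grafana data source, etc.) would then need matching credentials.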

Lots of good info here, and I am thinking of switching from InfluxDB to VictoriaMetrics.
I wonder if someone can share their prometheus.yml?
Thanks

+1, would love to see an example of a prometheus.yaml file. Thanks in advance! :slight_smile:

I’d love to discover VictoriaMetrics as well. So far I have configured everything as advised in the docs (the VictoriaMetrics add-on and the InfluxDB integration), but I can’t see any data showing up in the VictoriaMetrics add-on UI after a restart.

I suspect a misconfiguration in the influxdb config in configuration.yaml. Should I simply put my HA IP address in the host field, or is it something else?

Here is my influxdb config:

influxdb:
  api_version: 1
  host: 192.168.XXX.XXX
  port: 8428
  max_retries: 3
  measurement_attr: entity_id
  tags_attributes:
    - friendly_name
    - unit_of_measurement
  ignore_attributes:
    - icon
    - source
    - options
    - editable
    - min
    - max
    - step
    - mode
    - marker_type
    - preset_modes
    - supported_features
    - supported_color_modes
    - effect_list
    - attribution
    - assumed_state
    - state_open
    - state_closed
    - writable
    - stateExtra
    - event
    - friendly_name
    - device_class
    - state_class
    - ip_address
    - device_file
    - unit_of_measurement
    - unitOfMeasure
  include:
    domains:
      - sensor
      - binary_sensor
      - light
      - switch
      - cover
      - climate
      - input_boolean
      - input_select
      - number
  exclude:
    entity_globs:
      - sensor.clock*
      - sensor.date*
      - sensor.glances*
      - sensor.time*
      - sensor.uptime*
      - sensor.dwd_weather_warnings_*
      - weather.weatherstation
      - binary_sensor.*_smartphone_*
      - sensor.*_smartphone_*
      - sensor.adguard_home_*
      - binary_sensor.*_internet_access

Thanks in advance


Everything looks okay except the host; it should be copied from your add-on page.


Thanks @chintito4ever, I was missing that piece of info. I updated my influxdb config and restarted my Pi, but now I get the following error in the HA logs:

Logger: homeassistant.components.influxdb
Source: components/influxdb/__init__.py:487
Integration: InfluxDB (documentation, issues)
First occurred: 08:46:11 (1 occurrences)
Last logged: 08:46:11

Cannot connect to InfluxDB due to 'HTTPConnectionPool(host='8f49de54-victoria-metrics', port=8428): Max retries exceeded with url: /write?db=home_assistant (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f653eb410>: Failed to establish a new connection: [Errno -5] Name has no usable address'))'. Please check that the provided connection details (host, port, etc.) are correct and that your InfluxDB server is running and accessible. Retrying in 60 seconds.

Here is the top part of my influxDB config:

influxdb:
  api_version: 1
  host: 8f49de54-victoria-metrics
  port: 8428
  max_retries: 3

VictoriaMetrics is started and listening on port 8428 according to the logs:

2023-08-29T06:46:37.309Z	info	VictoriaMetrics/lib/httpserver/httpserver.go:96	starting http server at http://127.0.0.1:8428/

Did I put the wrong hostname again?

@vbnin As you have VictoriaMetrics running on the same computer as Home Assistant, try this:

host: localhost

So I tried to migrate my InfluxDB data to VictoriaMetrics using the vmctl tool (thanks @madface!), and even though the import looks successful, I cannot find any of the imported data in VictoriaMetrics, only the data recorded by HA after switching from InfluxDB to VictoriaMetrics.

Here’s the vmctl log:

C:\>vmctl influx --influx-addr "http://x.x.x.x:8086" --influx-user "xxx" --influx-password "xxx" --influx-database "xxx" --vm-addr "http://x.x.x.x:8428"
InfluxDB import mode
2023/09/06 22:01:08 Exploring scheme for database "xxx"
2023/09/06 22:01:08 fetching fields: command: "show field keys"; database: "xxx"; retention: "autogen"
2023/09/06 22:01:08 found 144 fields; skipped 206 non-numeric fields
2023/09/06 22:01:08 fetching series: command: "show series"; database: "xxx"; retention: "autogen"
2023/09/06 22:01:08 found 1114 series
Found 14262 timeseries to import. Continue? [Y/n] y
2023/09/06 22:26:55 Import finished!
2023/09/06 22:26:55 VictoriaMetrics importer stats:
  idle duration: 46m53.708345s;
  time spent while importing: 25m7.1906588s;
  total samples: 163153265;
  samples/s: 108249.92;
  total bytes: 3.6 GB;
VM worker 0:↑ 51597 samples/s
VM worker 1:↑ 50333 samples/s
Processing series: 14262 / 14262 [████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████] 100.00%
2023/09/06 22:26:55 Total time: 25m46.6366962s

But then in Grafana I cannot see the imported data, only the most recent data recorded directly into VM by HA.

I switched from InfluxDB to VM by updating the influxdb yaml information and restarting HA.
Any idea what I missed?

Thanks!!

No, you missed nothing; all the data should have been imported. But I had the same problem finding my data.
For your old data, the metric name is not the name of the entity but the measurement unit.
To find your old data, don’t enter anything in the metrics field; leave it empty.
Under label filters, select entity_id in the select label field and then enter the name of your entity in the select value field.
When done, open the metrics field and you should see something like kwh_value. This is the imported data, and it should end at the date of the import. All newly recorded data has the name of the sensor in the metric name; that is a small disadvantage of vmctl.
So just add both series, the xxx_value one and the sensor.xxx one, to each Grafana dashboard and then you have one continuous graph.
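
For example, a single Grafana panel can stitch the two together with two queries like these (just a sketch; kwh_value, energy_meter and sensor.energy_meter_value are hypothetical names, so use whatever the metrics explorer actually shows for your entity):

# Query A: data imported by vmctl (metric named after the unit, entity_id as a label)
kwh_value{entity_id="energy_meter"}

# Query B: data written by HA after the switch (metric named after the entity)
{__name__="sensor.energy_meter_value"}

The {__name__="..."} selector form is needed for the new series because their metric names contain a dot.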


Here are the bits of mine that are relevant to Home Assistant:

scrape_configs:
  - job_name: "hass"
    scrape_interval: 10s
    metrics_path: /api/prometheus

    authorization:
      # Home Assistant long-lived access token (create one in your HA user profile)
      credentials: "<removed>"

    scheme: http
    static_configs:
      - targets: ['hass.int.d.sb:8123']

I’ve got other scrape configs in there too, for Netdata and a few other things.
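
For completeness: if you don’t want to run a separate Prometheus or vmagent, single-node VictoriaMetrics can read a scrape config like this itself via its -promscrape.config flag. A sketch, assuming you put the file somewhere the add-on can reach (the /share path is my assumption) and pass the flag as an additional argument:

-promscrape.config=/share/prometheus.yml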

In Home Assistant, you need to enable Prometheus support by adding prometheus: to configuration.yaml.
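
A minimal example (the namespace and the filter block are optional; the filter just mirrors the earlier advice to only export what you actually need):

prometheus:
  namespace: hass
  filter:
    include_domains:
      - sensor
      - binary_sensor
      - climate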


I’m using a similar config as well, but binary_sensor values were not published to the VictoriaMetrics DB. From my point of view these are just 0/1 values (i.e. numeric), so they should show up, right?

Just wanted to say thanks!
Replaced my Prometheus add-on in minutes and all my Grafana dashboards are still working (after changing the data source ofc!) :heart_hands:


I use SSL for my Home Assistant, even within my local network, but I can’t get the VictoriaMetrics add-on to serve over SSL. Other add-ons like Mosquitto MQTT have configs for using the certs from the /ssl directory in Home Assistant. I have tried to pass the command line args for VictoriaMetrics via the additional arguments like this:
-tls -tlsCertFile=/ssl/fullchain.pem -tlsKeyFile=/ssl/privkey.pem
but the add-on fails to start, saying it can’t find the cert or key file:
2024-01-27T16:31:01.009Z fatal VictoriaMetrics/lib/httpserver/httpserver.go:102 cannot load TLS cert from -tlsCertFile="/ssl/fullchain.pem", -tlsKeyFile="/ssl/privkey.pem", -tlsMinVersion="": cannot load TLS cert from certFile="/ssl/fullchain.pem", keyFile="/ssl/privkey.pem": open /ssl/fullchain.pem: no such file or directory
Does the VictoriaMetrics add-on not have access to the /ssl directory like other add-ons do?

Edit: I was able to make this work by copying the ssl directory into the Home Assistant share directory, since this add-on container has access to /share, and updating the additional arguments like this:

-tls -tlsCertFile=/share/ssl/fullchain.pem -tlsKeyFile=/share/ssl/privkey.pem

That said, it would be nice if the Home Assistant /ssl directory were available to the container like it is for other add-ons.


Why go through the hassle of the cronjob? You can just make the command be /usr/bin/du -s /usr/share/hassio/share/victoria-metrics-data and the rest of it works perfectly.
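
A minimal sketch of such a command_line sensor (assuming the current list-style command_line configuration; the name, unit and scan interval are made up, and the path is the one from the post, so adjust it if your installation exposes the share elsewhere, e.g. /share/victoria-metrics-data):

command_line:
  - sensor:
      name: VictoriaMetrics DB size
      # du -s prints "<size-in-kB><TAB><path>"; cut keeps only the number
      command: "/usr/bin/du -s /usr/share/hassio/share/victoria-metrics-data | cut -f1"
      unit_of_measurement: kB
      scan_interval: 3600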


Yeah, so easy. It never occurred to me to put the du call in the command_line sensor (as I adapted the approach from the InfluxDB sensor I had before).
But it works perfectly and is really neat. The post above is edited with an example of the YAML of my sensor.
Thank you very much!

Access to the SSL folder is available now.
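
With that, the originally attempted additional arguments (pointing VictoriaMetrics at the certs in /ssl, same flags as earlier in the thread) should work without the /share workaround; untested on my side:

-tls -tlsCertFile=/ssl/fullchain.pem -tlsKeyFile=/ssl/privkey.pem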

As I see it, this add-on is for people using Home Assistant OS. I’m using HA in a VM (VMware Workstation on x86_64 Windows 10), installed via the OVA from the official site (Windows - Home Assistant).

If that is the right setup for using this add-on, do I need to add a new disk to the VM? I’m using the default one, which is 32 GB in total.
And after I’ve installed the add-on, will all the existing data be migrated to VictoriaMetrics, or will I only have statistics from today onwards? Is the past lost?

I’m sorry for this noob question but I didnt find an answer