Ha! …I’m using the config for influx v2 - still works!
Hi!
+1 to your question.
Looks like it is not needed, see GitHub - fuslwusl/homeassistant-addon-victoriametrics: VictoriaMetrics Add-on for Home Assistant OS is the perfect solution for long term data storage of your smart home sensor data and visualization with Grafana.
Do I understand the configuration correctly: can I use InfluxDB (I use InfluxDB2) and VictoriaMetrics in parallel?
So I could test whether VM suits my needs while still writing all data to my old InfluxDB instance, in case I stay with InfluxDB after my tests?
Yes, you can!
Sorry. I have tried to get both (InfluxDB2 and VM) running at the same time. For me, with HA OS 2023.02, it was not possible: Home Assistant could only hold a connection to one InfluxDB instance. When I tried to configure a second instance, I got HTTP connection errors from the InfluxDB integration.
I’m using HA Container (Docker), so maybe that’s the difference, but I only have one InfluxDB instance. I’m not sure why you are trying to configure a second one.
One InfluxDB and one VictoriaMetrics, so I could test VM while using InfluxDB as the main storage until testing and validation of VM is finished.
OK, it must be because I am using Docker then.
Hi there, I ran across this thread while looking for the best way to store long term data from my HA installation and I’m considering victoriametrics after some reading.
For the time being, I increased the retention period for HA’s recorder, but I want to migrate asap as my DB keeps on growing. Preferably, without losing historical data.
Does anyone have a recommendation for getting the HA data into victoriametrics? Is there a direct export possibility from HA to victoriametrics or do I need to migrate to InfluxDB first and then use this script: GitHub - Maaxion/homeassistant2influxdb: Migration of Home Assistant's log database to InfluxDB ?
I am not an expert; I researched this enough to get everything running and have been happy since. I have been running VictoriaMetrics for about a year now, because InfluxDB does not have an ARM port for my QNAP.
The only configuration I have in Home Assistant is this:

```yaml
influxdb:
  api_version: 1
  host: localhost
  port: 8428
  measurement_attr: entity_id
```
So no need for InfluxDB in between.
AFAIK this works because VictoriaMetrics supports the InfluxDB line protocol.
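To make that concrete, here is a rough sketch of the InfluxDB v1 line-protocol records that Home Assistant builds and that VictoriaMetrics accepts on its `/write` endpoint. The measurement, tag, and value below are made up for illustration and are not HA's exact output:

```python
# Sketch of the InfluxDB v1 line protocol that HA's influxdb integration
# emits and VictoriaMetrics ingests on http://<vm-host>:8428/write.
# Measurement/tag/field names are illustrative only.

def to_line_protocol(measurement: str, tags: dict, fields: dict) -> str:
    """Build one line-protocol record: measurement,tag=v field=v"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str}"

# With measurement_attr: entity_id, the measurement becomes the entity id
line = to_line_protocol("living_room_temperature",
                        {"domain": "sensor"},
                        {"value": 21.5})
print(line)  # living_room_temperature,domain=sensor value=21.5
```

Since VM parses this same format on port 8428, HA never notices it is not talking to a real InfluxDB.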
P.S. I am also using VictoriaMetrics to scrape data directly from servers, routers, and containers with Prometheus configs, and I am currently considering scraping metrics directly from ESPHome devices without routing them via Home Assistant (if I can figure out how to prevent sending them over the Native API … marking them internal removes them from the metrics endpoint too).
I am using the default configuration for VM in configuration.yaml and the VM add-on, and I have configured VM as a Prometheus data source.
It seems to be working, because for some sensors I can plot data in Grafana.
However, for other sensors and numbers that I can graph in the HA dashboard, no data shows up in Grafana.
Is anyone else experiencing this issue? Is there a way to check whether all sensors are stored in VM?
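One thing worth checking (this is a guess, since your config isn't shown): the influxdb integration supports include/exclude filters, and anything filtered out never reaches VM. For example, an include filter like this would silently drop every entity outside the listed domains:

```yaml
influxdb:
  api_version: 1
  host: localhost
  port: 8428
  measurement_attr: entity_id
  include:            # if present, ONLY matching entities are exported
    domains:
      - sensor
```

You can also list every measurement VM has actually stored by opening http://<vm-host>:8428/api/v1/label/__name__/values in a browser and searching for the missing sensors.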
Can anyone help someone who knows nothing about databases? My HA install is containers. I can start a Docker container running VM fine, without errors in its log, using something simple like:
```yaml
victoriametrics:  # http://192.168.1.4:8428
  container_name: victoriametrics
  image: victoriametrics/victoria-metrics
  ports:
    - 8428:8428
  environment:
    - TZ=America/Chicago
  volumes:
    - /media/virtual/victoriametrics:/victoria-metrics-data
  restart: unless-stopped
```
The last line of the log is

```
2023-06-06T19:26:49.711Z  info  VictoriaMetrics/lib/httpserver/httpserver.go:97  pprof handlers are exposed at http://127.0.0.1:8428/debug/pprof/
```

which looks OK to me.
But when I add the influxdb integration to configuration.yaml, the Docker container for VM crashes. I have tried the same integration configuration that is in the add-on, but it gives the same crash as the simpler one:
```yaml
influxdb:
  api_version: 1
  host: 192.168.1.4
  port: 8428
  measurement_attr: entity_id
```
Then the logs add the following:
```
2023-06-06T19:31:50.736Z  info  VictoriaMetrics/lib/storage/partition.go:221  creating a partition "2023_06" with smallPartsPath="/victoria-metrics-data/data/small/2023_06", bigPartsPath="/victoria-metrics-data/data/big/2023_06"
2023-06-06T19:31:50.737Z  info  VictoriaMetrics/lib/storage/partition.go:230  partition "2023_06" has been created
2023-06-06T19:31:59.757Z  panic  VictoriaMetrics/lib/fs/reader_at.go:136  FATAL: cannot mmap "/victoria-metrics-data/indexdb/17662878C5238BB6/17662878C75D2F2B/index.bin": cannot mmap file with size 4096: no such device
panic: FATAL: cannot mmap "/victoria-metrics-data/indexdb/17662878C5238BB6/17662878C75D2F2B/index.bin": cannot mmap file with size 4096: no such device

goroutine 94 [running]:
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logMessage({0xbb6863, 0x5}, {0xc0009fbef0, 0x90}, 0x2?)
    github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:281 +0xa96
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logLevelSkipframes(0x1, {0xbb6863, 0x5}, {0xbc40ca?, 0x4d3e74?}, {0xc000c6f980?, 0x4a?, 0x4a?})
    github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:138 +0x1a5
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logLevel(...)
    github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:130
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.Panicf(...)
    github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:126
github.com/VictoriaMetrics/VictoriaMetrics/lib/fs.MustOpenReaderAt({0xc000ac0370, 0x4a})
    github.com/VictoriaMetrics/VictoriaMetrics/lib/fs/reader_at.go:136 +0x28b
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.mustOpenFilePart({0xc005c241c0, 0x40})
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/part.go:79 +0x185
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).openCreatedPart(0xc005c241c0?, {0xc0000a0060?, 0x40?, 0x1?}, 0xc0002222b8?, {0xc005c241c0?, 0x3?})
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:1223 +0xfa
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).mergeParts(0xc000146500, {0xc0000a0060, 0x3, 0x4}, 0xc0000a0060?, 0x1)
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:1118 +0x5d5
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).mergePartsOptimal(0xc118004c29fd47bb?, {0xc0000a0060?, 0x3, 0x4})
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:571 +0xd9
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).flushInmemoryParts(0xc000146500, 0x0)
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:664 +0x2d8
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).inmemoryPartsFlusher(0xc000146500)
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:620 +0x8a
github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).startInmemoryPartsFlusher.func1()
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:599 +0x25
created by github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset.(*Table).startInmemoryPartsFlusher
    github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset/table.go:598 +0x6c
```
Does anyone know if there’s a way to migrate old data from InfluxDB to VictoriaMetrics? I was using InfluxDB on HA OS with Frenck’s add-on.
Yes, but it crashes (in the same way) even before I use the influxdb integration inside HA.
Also, I cannot find a way to get the token, etc. that is in his influxdb integration.
Just from a quick search, I found this that may help… github - Git for windows version 2.5.3 not able to push changes - Stack Overflow
…but you may need to reach out to the VM community.
Not sure on the Influx part either. I took a different approach using the Prometheus integration + vmagent.
That doesn’t seem to be relevant for my case (my disk is far from full, and I am using Linux). Could you point to a way to set it up the way that you did?
This entry above pretty much sums up how I’m running it. I’ve made some tweaks based on discussion within this thread, so for clarity my current configs are below.
BUT it seems like something is up with your disk and VM can’t write to it for some reason… what is at /media/virtual/victoriametrics? A USB drive?
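For what it’s worth, mmap failing with “no such device” (ENODEV) usually means the filesystem backing the data directory does not support memory mapping at all (some network, FUSE, or overlay mounts don’t), rather than a space problem. A quick way to test that outside of VM, sketched in Python (point it at your actual data path):

```python
# Check whether files in a directory can be memory-mapped.
# VictoriaMetrics mmaps its index files; mmap failing with ENODEV
# ("no such device") means the underlying filesystem does not support
# memory mapping (some network, FUSE, or overlay mounts don't).
import mmap
import os
import tempfile

def mmap_supported(directory: str) -> bool:
    """Create a small file in `directory` and try to mmap it."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
    except OSError:
        return False  # directory missing or not writable
    try:
        os.write(fd, b"\0" * 4096)
        with mmap.mmap(fd, 4096, access=mmap.ACCESS_READ):
            return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.remove(path)

# Replace the path with your VM data dir, e.g. /media/virtual/victoriametrics
print(mmap_supported("/tmp"))
```

If this prints False for the RAID path but True for a local disk, the mount itself is the problem, not VictoriaMetrics.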
My docker-compose.yml (just the relevant entries):

```yaml
version: "3"
services:

  vmagent:
    container_name: vmagent
    image: victoriametrics/vmagent:v1.80.0
    depends_on:
      - "victoriametrics"
    volumes:
      - $HOME/docker/vmagentdata:/vmagentdata
      - $HOME/docker/vmprometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - "--promscrape.config=/etc/prometheus/prometheus.yml"
      - "--remoteWrite.url=http://127.0.0.1:8428/api/v1/write"
    network_mode: host
    restart: always

  victoriametrics:
    container_name: victoriametrics
    image: victoriametrics/victoria-metrics:v1.80.0
    volumes:
      - $HOME/docker/vmdata:/storage
      - /etc/localtime:/etc/localtime:ro
    command:
      - "--storageDataPath=/storage"
      - "--httpListenAddr=:8428"
      - "--retentionPeriod=3y"
      - "--selfScrapeInterval=60s"
      #- "--downsampling.period=30d:5m,360d:1h"
    network_mode: host
    restart: always
```
Then in ~/docker/vmprometheus/ I have prometheus.yml:
```yaml
global:
  scrape_interval: 60s
  scrape_timeout: 20s

scrape_configs:
  - job_name: "hass"
    scrape_interval: 60s
    metrics_path: /api/prometheus
    bearer_token: !!!see HA prometheus integration on how to generate!!!
    scheme: https
    static_configs:
      - targets: ['localhost:8123']
```
…the instructions for generating the bearer_token are on Home Assistant’s Prometheus integration page (see link below).
Finally, in Home Assistant’s configuration.yaml I’ve set up the Prometheus integration like this…
```yaml
prometheus:
  namespace: hass
  filter:
    include_domains:
      - sensor
      - climate
      - binary_sensor
```
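Once that is scraped into VM, every exported metric carries the hass_ prefix from the namespace setting. As a rough example (the exact metric name depends on the sensor’s device class and unit, and the entity name here is made up; check your HA instance’s /api/prometheus output for the real names), a Grafana query against the VM data source might look like:

```
hass_sensor_temperature_celsius{entity="sensor.living_room_temperature"}
```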
Thanks for your configuration. No, it is not USB; it is a RAID with several terabytes free. When starting the container, it writes its files to the directory just fine. It is only when connecting HA to the container that it fails.