Wow, I appreciate the write-up and referral from the other post. This is exactly what I need for my 20 GB database…
It is mainly filled with 3-second-interval power/solar information, which fits perfectly into the retention schedule you have provided.
But, and it is a big but… I am no DBA at all and have no experience with modifying my InfluxDB at this level. Yes, I can query it, delete records with InfluxDB Studio, and do some basic cleanup. But making changes in the Docker container with cron jobs is beyond me.
Is there any way to achieve the same by using a GUI?
Since I use a Debian install and then installed the HA container version, I don't know exactly. You would need to SSH into the Debian host HA is running on. If you use the SSH add-on this does not work, since you end up inside a container, not on the host.
There are ways to get to the host but this is untested for me:
If you get this to work you can create the cronjob which does the database actions according to my tutorial.
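Once you are on the host, a crontab entry for the script could look like the sketch below (the path and schedule here are illustrative, not from the tutorial; edit with `crontab -e`):

```cron
# Run the downsampling script once per hour, at minute 5 (example path)
5 * * * * /home/youruser/influx_query.sh >> /var/log/influx_query.log 2>&1
```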
I’m glad it works for you; it’s more a workaround for the missing time clause in continuous queries. But it runs really stably if you can set up the cron job, and it saves 80% of the space.
So until anybody finds a better method …
Thanks for this great tutorial on how to manage the InfluxDB size.
I’m running on Hass OS and it works there as well. Really nice: ca. 400 MB of data became 15 MB. I’m not really convinced yet that it is complete, so I haven’t changed the retention of the raw data yet…
I mainly used a shell command that can be called from automations instead of a cron job:
I installed the community SSH & Web add-on, running with protection mode off.
Looks really good, and seems to be a reliable way to use it with Hass OS.
When using Docker you normally give your user rights to Docker via usermod, with something like this:
sudo usermod -aG docker youruser
Without it you have to prefix the commands with sudo; I think this is the reason. But then it works the same way.
Much of the uncompressed data volume comes from attributes: if your sensor has attributes, they are stored as well. Maybe they can be used by Grafana, but I did not find a way and never used the attributes. If you store a weather sensor like DWD, a lot of attributes are stored every time the sensor changes.
I read in another thread that you can exclude attributes. I now use VictoriaMetrics for long-term storage, but since it uses the InfluxDB integration it should work the same way:
Add this to your influxdb config entry:
This should drastically reduce the data that is written to the autogen policy. When compressing the data these attributes are deleted anyway, and only the mean value is written.
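For reference, a minimal sketch of such a configuration entry; the `ignore_attributes` option exists in the Home Assistant InfluxDB integration, but the host value and the attribute names listed here are only examples:

```yaml
influxdb:
  host: a0d7b954-influxdb
  database: homeassistant
  # attributes listed here are dropped before the state is written
  ignore_attributes:
    - icon
    - friendly_name
    - device_class
```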
Maybe it helps…
@madface thanks for the extra insights. The attributes not being copied probably explains the factor-30 compression. I also never used them when analysing long-term trends, so that is perfectly fine.
Hi @ralong
I’m running on Hass OS and all info in this thread is applicable for setting up the retention policies. I only approached the automation slightly differently (so an additional tutorial for that part):
I added a sudo before docker in influx_query.sh (‘sudo docker …’ instead of ‘docker …’):
#!/bin/sh
sudo docker exec -t addon_a0d7b954_influxdb influx -execute 'SELECT mean(value) AS value,mean(crit) AS crit,mean(warn) AS warn INTO "homeassistant"."y2_5m".:MEASUREMENT FROM "homeassistant"."autogen"./.*/ WHERE time < now() -26w and time > now() -26w6h GROUP BY time(5m),*' -precision rfc3339 -username '<username>' -password '<password>' -database 'homeassistant'
sudo docker exec -t addon_a0d7b954_influxdb influx -execute 'SELECT mean(value) AS value,mean(crit) AS crit,mean(warn) AS warn INTO "homeassistant"."inf_15m".:MEASUREMENT FROM "homeassistant"."y2_5m"./.*/ WHERE time < now() -104w and time > now() - 104w6h GROUP BY time(15m),*' -precision rfc3339 -username '<username>' -password '<password>' -database 'homeassistant'
I created a shell_command so that influx_query.sh can be called from any automation (instead of using crontab for that):
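The shell_command entry in configuration.yaml could look roughly like this (the path to the script is an example):

```yaml
shell_command:
  influx_downsample: /config/influx_query.sh
```

An automation can then call the `shell_command.influx_downsample` service on whatever schedule you like.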
For this example it should be set to 6 months and one day. So you keep all raw data for the first 6 months; after that it is aggregated to 5-minute intervals, and data older than 2 years is aggregated to 15-minute intervals. Before setting up this retention policy, double-check everything is working, or you will lose all data older than 6 months!
If I understood the documentation correctly, your CQs didn’t work because they only run against new data.
The WHERE clause is ignored in a CQ, so if you want disjoint RPs, you need to do it manually.
In fact the best practice is to have your RPs cover the period from now back to “what you want”. The downside is that parts of the period are covered multiple times, but it’s not much data. On the plus side, you can simplify your queries a lot.
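For reference, retention policies like the ones used in this thread can be created with statements along these lines (a sketch; the durations match the 6-month/2-year scheme discussed above, adjust to taste):

```shell
influx -database homeassistant -execute 'CREATE RETENTION POLICY "y2_5m" ON "homeassistant" DURATION 104w REPLICATION 1'
influx -database homeassistant -execute 'CREATE RETENTION POLICY "inf_15m" ON "homeassistant" DURATION INF REPLICATION 1'
```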
Before limiting my raw data retention policy, I intend to automate a backup of my influx database to my PC first.
So in Home Assistant I export the database to the share.
On the PC I import the database from the share into a temporary database tmpdb.
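With InfluxDB 1.x both steps can be done with the portable backup format; a sketch, assuming the share is reachable from both machines (all paths are examples):

```shell
# On the Home Assistant side: export the database to the share
influxd backup -portable -database homeassistant /share/influx_backup

# On the PC: import it into a temporary database named tmpdb
influxd restore -portable -db homeassistant -newdb tmpdb /mnt/share/influx_backup
```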
So far so good, but I fail at the last step:
only adding the new data from tmpdb to the homeassistant DB.
I noticed that overlapping data simply gets duplicated.
This is the query: SELECT * INTO "homeassistant"."autogen".:MEASUREMENT FROM "tmpdb"."autogen"./.*/
Same result with this variation (additional GROUP BY): SELECT * INTO "homeassistant".autogen.:MEASUREMENT FROM "tmpdb".autogen./.*/ GROUP BY *
If I run that query twice, the destination database homeassistant more than doubles in size.
I thought duplicated data points would be overwritten. Maybe you have a suggestion (I’m quite unfamiliar with database queries): a different way to perform the query, or some merging/cleanup of duplicate data.
Every time it runs, it copies all data between 6 months and 6 months plus 6 hours old to the retention policy “y2_5m”, grouping it by 5 minutes, and data between 24 months and 24 months plus 6 hours old to the policy “inf_15m”, grouping it by 15 minutes.
@dmartens
I did this for safety: if the script does not run due to maintenance or something like that, you will not lose the data. I looked in the database and did not see duplicate data points for the overlapping ranges, so I thought it would be a good idea.
@erkr
Good question; I think this is even beyond my database knowledge. As I created this a year ago, I looked in the DB after executing it a few times and did not see any duplicate data points.
Since the script ran every hour, there would have to be 5 overlapping points. What I don’t know is whether there aren’t any, or whether they just aren’t shown.
Can you test with a SELECT statement like mine in the picture, just to see whether you find duplicate data points? Maybe you have to replace the measurement with something you have.
EDIT:
I dug a little deeper into it, but I don’t understand it. As per the docs, SELECT INTO with the same measurement, timestamp, etc. should replace the values, so it should not make the database grow. I made a test setup, but I see the same problem as you: the database grows by exactly the same size with every new SELECT INTO.
But if you count the data points, they stay the same. Just try:
select count(value) from "W"
And the amount of data points stays the same, independent of how often you SELECT INTO. So I really don’t know where the problem is at the moment. I know InfluxDB compacts the database every few days; maybe it shrinks when compaction runs?
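To check that the logical data really stays the same, the per-measurement counts can be compared before and after a SELECT INTO with a query like this (a sketch; run it against your own database):

```shell
influx -database homeassistant -execute 'SELECT count(value) FROM /.*/'
```

If the counts stay constant while the on-disk size grows, the growth is in the storage files rather than in the logical data, which would be consistent with the compaction behaviour mentioned above.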