No I haven’t done that. I wasn’t sure if it was needed or not. I’ll give it a shot this evening. Thanks for your reply!
When I moved from the embedded SQLite to PostgreSQL, I renamed the db file before I restarted, so I could revert in case I was wrong. Since I am running on a Pi with Raspbian, I just used the mv command (mv ./home-assistant_v2.db home-assistant_v2.old-db)
I am also seeing that the DB does not purge over time… I’m using MySQL, and when it’s new it’s fast and awesome, but after a week or so…
recorder:
purge_days: 2
db_url: mysql://hassuser:*redacted*@127.0.0.1/hass_db?charset=utf8
The DB is huge in MySQL:
+--------------------+---------------+
| hass_db | size in MB |
+--------------------+---------------+
| hass_db | 4660.00000000 |
| information_schema | 0.15625000 |
| mysql | 2.42501831 |
| performance_schema | 0.00000000 |
| sys | 0.01562500 |
+--------------------+---------------+
5 rows in set (0.04 sec)
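For anyone who wants to reproduce a size summary like the one above, it can be generated from information_schema (query sketch; your user needs read access to information_schema):

```shell
# List each database's size in MB (data + indexes)
mysql -u hassuser -p -e "
  SELECT table_schema AS db,
         SUM(data_length + index_length) / 1024 / 1024 AS size_in_mb
  FROM information_schema.tables
  GROUP BY table_schema;"
```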
Anyone know why the purge function is not working?
I believe it was working in PostgreSQL, and it appears to be working on SQLite. I am set to a 3 day purge and SQLite is only 13 MB. I’ll check back and see where it is tomorrow.
Purging is not immediate, but after a few days it works… I don’t know why or how it works, but it would be a nice feature if you could run a purge command from the configuration side!
Purge is fixed in 0.40. Please note that purge has a timedelta of 2 days, meaning it will delete every two days (and the clock resets every time you restart HA).
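One way to sanity-check whether a purge has actually run is to look at the oldest row still in the database (assuming the standard HA schema with a created column on states):

```shell
# Show the oldest remaining state row; if purge ran,
# this should be no older than purge_days
mysql -u hass -p homeassistant -e "SELECT MIN(created) FROM states;"
```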
Do you have instructions for Synology NAS (in a Docker container) by any chance? I am hesitating to run influxDB/Grafana on Pi to minimize the overhead.
So my db will never get purged.
Well today I learned purge is useless to me lol
docker run --net=host --restart=always --name mysql -d \
-e 'DB_USER=hass' -e 'DB_PASS=pass' -e 'DB_NAME=homeassistant' \
sameersbn/mysql:latest
Thanks for the commands.
I ran these commands.
How do I run MySQL queries inside docker container? Do I have to stop HomeAssistant?
pi@rpi3:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e30442c73e3 sameersbn/mysql:latest "/sbin/entrypoint...." 5 hours ago Restarting (1) 2 hours ago mysql
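As a side note, that STATUS column reading “Restarting (1)” suggests the container is crash-looping rather than running, which would also explain connection trouble. The container’s own logs usually say why (standard Docker commands, using the container name mysql from above):

```shell
# Show the most recent log output from the mysql container
docker logs --tail 50 mysql

# Show the exit code of the last run (non-zero means it crashed)
docker inspect --format '{{.State.ExitCode}}' mysql
```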
Just curious about Docker container.
Depends if you have a MySQL client installed on your host machine.
Assuming you don’t, you can install something like PhpMyAdmin if you want a user interface for it. You can probably use
docker run --name myadmin -d -p 8080:80 phpmyadmin/phpmyadmin
I haven’t used that myself, so you might need to tinker with the command to point phpmyadmin to your MySQL server.
Otherwise, you can execute commands from the command line. First you’ll need to get access to the Docker container’s command line
docker exec -it mysql /bin/bash
Now that you’re inside the Docker container, access the MySQL client from the command line
mysql -u hass -p
Enter your password, then switch to your Home Assistant database
use homeassistant
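If you’d rather skip the interactive session, the same kind of check can be run in one shot from the host shell. Row counts of the two fast-growing tables are a good first look (table names per the standard HA schema; adjust credentials to yours):

```shell
# Count rows in the two tables that grow fastest under the recorder
mysql -u hass -p homeassistant -e "
  SELECT 'states' AS tbl, COUNT(*) AS row_count FROM states
  UNION ALL
  SELECT 'events', COUNT(*) FROM events;"
```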
Ok. I have done a lot of reading and set up MySQL on my system. I have set the purge of the database to two days but it doesn’t seem to be working. I noticed that my disk usage keeps climbing every day.
I have tried to find the database on my system but can’t seem to locate where it is. Is there a way to manually purge it to see if that is what’s eating up my disk?
Hi there, I made a little script to manually purge my HA MySQL DB.
#!/bin/bash
HOST='mysql_server'
DB='HA_DB'
USER='username'
PW='password'
# Delete older than 7 days
BEFORE=$(date --date="7 days ago" +%Y-%m-%d)" 00:00:00"
query="DELETE FROM states WHERE created < '$BEFORE'"
echo "$query" | mysql -h$HOST -D$DB -u$USER -p$PW
query="DELETE FROM events WHERE created < '$BEFORE'"
echo "$query" | mysql -h$HOST -D$DB -u$USER -p$PW
Use at your own risk.
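If you want the script to run on its own, a cron entry is the simplest option (the path, schedule, and log file below are just examples):

```shell
# Add with `crontab -e`: run the purge script every night at 03:00
0 3 * * * /home/pi/purge_ha_db.sh >> /home/pi/ha_purge.log 2>&1
```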
Purges run - as @arsaboo said - every 48 hours or so. I say “or so” because in testing this I discovered it actually happens somewhere between the end of hour 48 and the start of hour 49. This is going to change at some point soon according to this discussion:
https://github.com/home-assistant/home-assistant/issues/6302
But there’s no PR associated so I can’t tell you when this will happen.
If you’re using MySQL and still running low on space even after purges run, it might be because your ibdata1 file doesn’t shrink after a purge. The file /var/lib/mysql/ibdata1 is the system tablespace for the InnoDB infrastructure. If you’re using InnoDB as your engine, this is default behavior - annoying as it is - to store everything in one file. But it can be fixed.
The only way to shrink this is to dump the tables into a backup, shut down MySQL, delete the ibdata1 file and then restore the backups. Before doing so, I would advise reading this article, as it lays out not only the steps to do this but why this is the default behavior in the first place and how to fix it.
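For reference, the dump / shutdown / delete / restore cycle looks roughly like this. This is a sketch only - paths and service names vary by distro, the full procedure (including dropping the InnoDB databases before removing the file) is in the article - so read that first and take a backup:

```shell
# 1. Dump everything to a single backup file
mysqldump -u root -p --all-databases > all_dbs.sql

# 2. Stop MySQL (the article also has you drop the InnoDB databases first)
sudo service mysql stop

# 3. Remove the system tablespace and its log files
#    (THIS DELETES DATA - the dump above must be good)
sudo rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 4. Start MySQL again and restore from the dump
sudo service mysql start
mysql -u root -p < all_dbs.sql
```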
I’ve been in tuning mode lately and this is something I plan on trying soon although I am not in a hurry as I have a lot of empty space still on my Pi. But it may be another consideration for me moving to influx depending on how it handles this type of thing.
Hope this helps!
Hello @rpitera. What is the benefit of splitting one file into many? It’s not clear whether these files will reduce their size after deleting data. I guess not.
I’m still researching the answer to that myself. The dump and restore has its obvious advantage of reclaiming the space, but I’m still trying to figure out what the advantage is going forward.
Initial research has led me to this in the MySQL docs, which does seem to list some clear advantages to this approach:
https://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
To be clear, I am not doing anything until I read enough to feel comfortable in doing this and would probably do a full image backup before attempting anything. I’d strongly suggest others do the same.
FWIW, MySQL 5.6 enables innodb_file_per_table by default.
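You can check whether that setting is active on your install, and enable it on older versions via my.cnf (location of the config file varies by system):

```shell
# Check whether file-per-table tablespaces are enabled
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"

# On MySQL < 5.6, enable it by adding this under [mysqld] in my.cnf,
# then restarting the server:
#   innodb_file_per_table = 1
```

Note that the setting only affects tables created (or rebuilt) after it is enabled; existing tables stay in ibdata1 until they are rebuilt.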
I have nothing in that dir. Hmmm
At all? You should at least see the db dirs. Maybe you installed it to another directory?
Here’s what mine looks like:
Yes I have nothing in there.
I did install Hass in a virtualenv; maybe I did the same with MySQL.
Is there a way to find it?
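If the server is running, MySQL itself can tell you where its data directory is, regardless of how it was installed; failing that, a filesystem search for ibdata1 will turn it up:

```shell
# Ask the running server for its data directory
mysql -u root -p -e "SELECT @@datadir;"

# Or search the filesystem for the tablespace file (may take a while)
sudo find / -name ibdata1 2>/dev/null
```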