However, I haven’t been able to figure out the query for the “events” table. If anyone knows what we should use for “events”, please let me know and I’ll update my guide. I’m also open to any other suggestions for my guide. Thanks!
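For what it’s worth, here is a hedged guess at an “events” analogue of the states query, assuming a recent schema where the event type lives in a separate event_types lookup table joined via event_type_id. Please verify the table and column names against your own database before relying on it:

```sql
-- Hypothetical analogue of the states count query for the events table.
-- Assumes an event_types lookup table; column names have changed across
-- HA releases, so check your own schema first.
SELECT
  COUNT(*) AS cnt,
  COUNT(*) * 100 / (SELECT COUNT(*) FROM events) AS cnt_pct,
  event_types.event_type
FROM events
INNER JOIN event_types ON events.event_type_id = event_types.event_type_id
GROUP BY event_types.event_type
ORDER BY cnt ASC;
```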
I got notified about this post, so I just re-read the OP.
I was wondering if there might be a way to compare the list of entities produced by the script print_entities_to_file with those in the file notify.entity_log?
That would make it less onerous if you get ‘slack’.
Just a thought that I haven’t had time to think through properly myself yet.
This version uses tmpfs to store MariaDB databases in-memory.
Even though this is an in-memory database, it can automatically export the database’s contents (from memory to SD card) during backup, update, restart, or even periodically, and can automatically import the contents (from SD card to memory) when the add-on starts again. The database dump is gzip-ed before being written to storage to minimize SD-card wear.
It won’t protect you from power failures completely, though. After a power failure, when the add-on is restarted, it will import the last known exported database content. So if, e.g., daily periodic export (from memory to SD card) is enabled, you will lose the latest sensor data from within that day, but your long-term statistics will remain mostly intact:
But first, you really have to minimize the amount of data you store in the in-memory database!
I store only the entities necessary for graphs/charts, I turned on the regular daily export (memory → SD) to protect against power failures, and I enabled deletion of the old statistics that I don’t want to display. It’s best to keep the DB size to a few tens of MB.
Is there a faster way of removing errant records from the database (MySQL)? I’ve got a series of sensors that are incredibly chatty (> 3 million records in a week), and removing them via the purge entities service is glacially slow.
Is there a reason you need to keep those 3 million records per week? If my math is correct, that’s five per second. I don’t think the HA database is optimized for that sort of volume. And if you’re using an SD card (as in the title of this thread) then you’re really asking for trouble.
Best practice would be to exclude them. If you need to keep historical data, create templates which sample them less frequently. Keep those and exclude the source data.
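If you do decide to bulk-delete the existing records directly with SQL rather than waiting on the purge service, a sketch along these lines should work. The entity name is a placeholder, and this assumes the newer recorder schema where entity_id lives in states_meta; back up the database and stop Home Assistant (or the recorder) before running it:

```sql
-- Hypothetical example: bulk-delete stored states for one chatty sensor.
-- 'sensor.chatty_example' is a placeholder; substitute your own entity_id.
-- If the delete fails on the old_state_id foreign key, null out those
-- references for the same entity first.
DELETE FROM states
WHERE metadata_id IN (
  SELECT metadata_id FROM states_meta
  WHERE entity_id = 'sensor.chatty_example'
);
```

A plain DELETE like this runs in one pass instead of the batched deletes the purge service does, which is why it tends to be much faster, at the cost of no safety net.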
Doing so gives me a black page. And an error in the log:
Logger: homeassistant.components.http.security_filter
Source: components/http/security_filter.py:66
Integration: HTTP (documentation, issues)
First occurred: 15:19:15 (5 occurrences)
Last logged: 15:22:58
Filtered a request with unsafe byte query string: /api/hassio_ingress/4na91HttrXMhiO4gszMnRJBZUmR7gruQGT1Dl1UWCws/states/query/?ordering=&export_ordering=&sql=--+Updated+query+Dec+2023%0D%0ASELECT%0D%0A++COUNT%28*%29+AS+cnt%2C%0D%0A++COUNT%28*%29+*+100+%2F+%28SELECT+COUNT%28*%29+FROM+states%29+AS+cnt_pct%2C%0D%0A++states_meta.entity_id%0D%0AFROM+states%0D%0AINNER+JOIN+states_meta+ON+states.metadata_id%3Dstates_meta.metadata_id%0D%0AGROUP+BY+states_meta.entity_id%0D%0AORDER+BY+cnt+ASC
Filtered a request with unsafe byte query string: /api/hassio_ingress/4na91HttrXMhiO4gszMnRJBZUmR7gruQGT1Dl1UWCws/states/query/?ordering=&export_ordering=&sql=SELECT%0D%0A++COUNT%28*%29+AS+cnt%2C%0D%0A++COUNT%28*%29+*+100+%2F+%28SELECT+COUNT%28*%29+FROM+states%29+AS+cnt_pct%2C%0D%0A++states_meta.entity_id%0D%0AFROM+states%0D%0AINNER+JOIN+states_meta+ON+states.metadata_id%3Dstates_meta.metadata_id%0D%0AGROUP+BY+states_meta.entity_id%0D%0AORDER+BY+cnt+ASC
Filtered a request with unsafe byte query string: /api/hassio_ingress/4na91HttrXMhiO4gszMnRJBZUmR7gruQGT1Dl1UWCws/states/query/?ordering=&export_ordering=&sql=SELECT+%0D%0A++++entity_id%2C%0D%0A++++COUNT%28*%29+AS+cnt%0D%0AFROM+states%0D%0AGROUP+BY%0D%0A++++entity_id%0D%0AORDER+BY%0D%0A++++COUNT%28*%29+DESC%3B
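For anyone squinting at those filtered URLs: the URL-encoded sql parameter in the first log entry decodes to the per-entity state-count query from earlier in the thread:

```sql
-- Updated query Dec 2023
SELECT
  COUNT(*) AS cnt,
  COUNT(*) * 100 / (SELECT COUNT(*) FROM states) AS cnt_pct,
  states_meta.entity_id
FROM states
INNER JOIN states_meta ON states.metadata_id = states_meta.metadata_id
GROUP BY states_meta.entity_id
ORDER BY cnt ASC
```

The query itself is fine; it’s the carriage-return bytes (%0D) in the query string that trip Home Assistant’s HTTP security filter.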