In my case, I have a PostgreSQL database running on a different Raspberry Pi that I use as the recorder database.
On that Pi I also installed DBeaver, so I can look at the database directly.
I can run the following queries to verify that the purge works; the first one also shows the entities with the most entries, so I can add those to the purge:
select entity_id, count(*) from states group by entity_id order by count(*) desc;
select count(*) from states;
select pg_database_size('hass_db');
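If you want that size visible inside Home Assistant too (a sudden drop after the nightly purge/repack then shows up nicely on a graph), the SQL integration can expose it as a sensor. A minimal sketch, assuming the same PostgreSQL database hass_db; host, user and password are placeholders you need to adjust:

```yaml
# configuration.yaml - sketch only, adjust db_url and credentials to your setup
sql:
  - name: "Recorder DB size"
    db_url: "postgresql://hass:YOUR_PASSWORD@192.168.1.10/hass_db"
    query: "SELECT pg_database_size('hass_db') / 1024 / 1024 AS db_size;"
    column: "db_size"
    unit_of_measurement: "MiB"
```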
OK folks, after 41 posts the original question still seems to be unanswered.
So how can one detect that a called service recorder.purge has finished?
I propose there should at least be an event sent, e.g. by the supervisor, stating that the job has finished.
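Just to make the proposal concrete: with such an event, detecting the end of a purge would be a one-trigger automation. The event type below is purely hypothetical, it does not exist in Home Assistant today:

```yaml
# Hypothetical sketch - "recorder_purge_finished" is NOT a real event (yet)
trigger:
  - platform: event
    event_type: recorder_purge_finished
```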
I'm stuck without knowing this. In my use case I have a system pressure monitoring automation which shall only trigger in irregular situations (surprising load situations), so I excluded all well-known, expected and therefore uninteresting load situations (like backups, purge and repack). Now I discovered that the condition I have used so far (below) does of course not work, because the automation is flagged as "not running" while the purge is still running.
condition: not
conditions:
  - condition: numeric_state
    entity_id: automation.system_data_purge_recorder_db
    attribute: current
    above: 0
  - condition: numeric_state
    entity_id: script.system_purge_recorder_db
    attribute: current
    above: 0
I gave up on this. There are no events fired for the start and end of purging, so once a purge starts it runs asynchronously with no way to detect when it ends.
Maybe someone can file a feature request?
Sometimes life brings up surprising coincidences. Yesterday I asked for this to silence my CPU overload monitoring; today, when my recorder repack automation ran, HA Core for the very first time seems to have had some kind of significant issue that made it restart automatically (which NEVER happened before, the system has been very stable so far).
So now I have an even more important use case for reliably detecting the end of a recorder task, as I need to rule out that the restart was related to the recorder repack (it usually doesn't run that long, but nothing else was scheduled right before the restart). The HA log (.1) unfortunately does not help here. I ran the recorder repack manually again afterwards and it finished without any issues (now of course with a significantly smaller database).
The very last log entries in `home-assistant.log.1` prior to the restart (no idea what initiated it, as "shutdown" does not appear before that):
2023-10-22 06:25:51.013 WARNING (MainThread) [homeassistant.helpers.entity] Update of sensor.home_assistant_supervisor_cpu_percent is taking over 10 seconds
2023-10-22 06:26:00.610 WARNING (MainThread) [homeassistant.core] Timed out waiting for shutdown stage 1 to complete, the shutdown will continue
2023-10-22 06:26:00.614 WARNING (MainThread) [homeassistant.core] Shutdown stage 1: still running: <Task pending name='Task-2498141' coro=<Recorder._async_shutdown() running at /usr/src/homeassistant/homeassistant/components/recorder/core.py:418> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/local/lib/python3.10/asyncio/futures.py:385, <1 more>, Task.task_wakeup()]> cb=[set.remove()]>
2023-10-22 06:26:00.618 WARNING (MainThread) [homeassistant.core] Shutdown stage 1: still running: <Task pending name='Task-2498869' coro=<KodiEntity._async_connect_websocket_if_disconnected() running at /usr/src/homeassistant/homeassistant/components/kodi/media_player.py:443> wait_for=<Future pending cb=[BaseSelectorEventLoop._sock_write_done(11, handle=<Handle BaseS...0.86', 9090))>)(), Task.task_wakeup()]> cb=[set.remove()]>
2023-10-22 06:26:00.657 WARNING (MainThread) [homeassistant.core] Shutdown stage 1: still running: <Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/local/lib/python3.10/asyncio/futures.py:385, <1 more>, Task.task_wakeup()]>
Today I examined the code of the recorder component in Home Assistant Core on GitHub, and I couldn't identify any signs of a mechanism that would signal the completion of a purge. It appears that such a mechanism might not have been considered at all. Unfortunately, my familiarity with Home Assistant internals is insufficient to guide me on how to implement such a feature. I'm optimistic that a member of the development team might come across this and consider adding this functionality in a future update.
Open /dev-tools/services (in another tab) and run "purge".
Wait for it
Wait for it
wait for it
Have a bear
Wait for it
Wait for it
Take a pee
Actually it should not take that long, depending upon what you "Purge", and whether you "Repack".
This should be the final result
EDIT: As I did another full purge + repack, just to see the "result", I forgot to set back the "logger debug".
Below was my result for the nightly Purge
2023-10-25 04:13:59.646 DEBUG (Recorder) [homeassistant.components.recorder.purge] Purging states and events before target 2023-10-15 02:13:59+00:00
2023-10-25 04:13:59.647 DEBUG (Recorder) [homeassistant.components.recorder.purge] Purge running in new format as there are NO states with event_id remaining
2023-10-25 04:13:59.684 DEBUG (Recorder) [homeassistant.components.recorder.purge] Selected 0 state ids and 0 attributes_ids to remove
2023-10-25 04:13:59.684 DEBUG (Recorder) [homeassistant.components.recorder.purge] After purging states and attributes_ids remaining=False
2023-10-25 04:13:59.699 DEBUG (Recorder) [homeassistant.components.recorder.purge] Selected 0 event ids and 0 data_ids to remove
2023-10-25 04:13:59.700 DEBUG (Recorder) [homeassistant.components.recorder.purge] After purging event and data_ids remaining=False
2023-10-25 04:13:59.707 DEBUG (Recorder) [homeassistant.components.recorder.purge] Selected 0 statistic runs to remove
2023-10-25 04:13:59.720 DEBUG (Recorder) [homeassistant.components.recorder.purge] Selected 0 short term statistics to remove
2023-10-25 04:13:59.733 DEBUG (Recorder) [homeassistant.components.recorder.purge] Deleted <sqlalchemy.engine.cursor.CursorResult object at 0x7f59a70bd010> recorder_runs
I know, the time (less than 100 ms) doesn't tell much, so I guess I have to wait another week to get an idea of how long my "average" nightly purge takes, and how many states/events and IDs are processed.
It should be beer, right?
If you run tail -F and watch the log while hitting "run service" for the purge, you won't have to wonder what to look for. I just pasted the last 3 lines after a bunch of info (it tells everything it's doing, so I didn't find any reason to post my whole purge output). But you need to set debug logging first, and of course use "homeassistant.components.recorder.purge_entities" instead, if that is the processing you need to follow.
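For reference, this is roughly the logger configuration meant above; a sketch, using the logger name visible in the debug output earlier (swap in the purge_entities logger mentioned above if that is what you want to follow):

```yaml
# configuration.yaml - enable debug output for the recorder purge only
logger:
  default: warning
  logs:
    homeassistant.components.recorder.purge: debug
    # or, as mentioned above, for recorder.purge_entities calls:
    # homeassistant.components.recorder.purge_entities: debug
```

The level can also be changed at runtime with the logger.set_level service, which avoids a restart and makes it easy to "set it back" afterwards.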
That's a neat workaround, as it's something which can be achieved with today's abilities. At the same time it
- is hacky
- increases the HA log size
- is not easy to detect (I would now need a command-line sensor parsing the HA log to finally get a "recorder task finished" signal, e.g. as an entity; see the sketch below)
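For completeness, the log-parsing workaround mentioned in the last point could look roughly like this; a sketch only, assuming debug logging for the purge is enabled as shown above and the log lives at /config/home-assistant.log:

```yaml
# configuration.yaml - sketch of a command_line sensor reporting the timestamp
# of the last "Deleted ... recorder_runs" debug line (the end of a purge run)
command_line:
  - sensor:
      name: "Last recorder purge"
      scan_interval: 60
      command: >
        grep 'recorder.purge' /config/home-assistant.log
        | grep 'recorder_runs' | tail -n 1 | cut -d ' ' -f 1-2
```

It works, but it only underlines the point: an event or sensor provided natively by the recorder would make all of this unnecessary.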
I still strongly vote for getting this information natively, out of the box. How to get some traction on this? Is there already a feature request over at GitHub?
True, this was definitely not meant as a "get a notification" message, which I believe should be a fairly easy "feature" to implement, same as e.g. "backup finished"… though it is still just an example of "how to" detect it.
PS: If you have tried the suggested method, you will notice that there are basically 2 entries to pay attention to (start & finished):
Purging states and events before target 2023-10-15 02:13:59+00:00
Deleted <sqlalchemy.engine.cursor.CursorResult object at 0x7f59a70bd010> recorder_runs