Is it possible to find cause(s) of slow shutdown time?

Is there a way to find out why it takes >30 seconds for HAss to stop running?

Startup time is now much shorter compared to a year ago, and the report of how long each integration took to start is handy. So far there's no way to measure the reverse.

i.e., is it busy flushing the Recorder database, is some integration being slow, etc.?

My shutdown times on an RPi4 have been approx. 1 minute. I think this is reasonable to ensure a controlled & safe shutdown.

Is there a specific reason you need it to be faster? I think the hardware being used will also be a factor. For example, an SSD should shut down faster than an SD card, etc.

Have you tried watching the logs in a separate window while you shut down?

I have watched the logs; nothing suspicious is reported.
The machine is a RPi4, SSD via USB3. Python 3.12.3, compiled from source on the machine.
Takes about 10 seconds to start, but at least 3x that much to stop.

The only issue from this is inconvenience if, e.g., I have to reboot the machine (it has other work to do). If I stop HAss manually, it takes the >30 seconds, and then rebooting the OS takes only a few seconds (this tells me it's not some other process slowing down the reboot).

Mainly, it just seemed odd that a process should take so much longer to stop than to startup. I’m expecting it to just have to kill off threads, flush all the buffered writes it might be holding, close the SQLite DB, and end. Perhaps this service has more to do than most other procs.

Shutdown should generally take less time than startup. Usually it should only take a few seconds.

Do you have anything interesting in your home-assistant.log.1 from the shutdown?

Thanks for the idea - I hadn’t (duh) thought of checking the last log for a clue.
Sadly, though, nothing unusual.
It reports (via an automation of my own creation that advises me of any sensors that go missing during normal operation) that a few ESPHome sensors became unavailable during shutdown, which seems benign.

I’ll definitely increase the logging detail, restart, then restart again and take a look at the .1 log.
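
For anyone following along, this is roughly what I mean by increasing the logging detail; a minimal configuration.yaml sketch, and the specific logger entries under `logs:` are just examples, not a prescription:

```yaml
# configuration.yaml - raise the default log level so shutdown steps get recorded
logger:
  default: debug
  logs:
    # examples of areas worth watching more closely while debugging shutdown
    homeassistant.core: debug
    homeassistant.components.recorder: debug
```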

It sounds like typical timeout waits.
A clean shutdown requires subscriptions to be cancelled, which can take time.
Sometimes it is cloud push subscriptions, sometimes it is local ones.
The cloud push subscriptions can be slow because of slow servers.
The local ones can be slow because of overloaded hardware, especially when the other service runs on the same hardware as HA, but it can also be because the service/add-on that should receive the cancellation has been shut down first, so the cancellation has to time out.

With the logger set to 'debug', a shutdown spans about 4 seconds in the log, then it stops logging, but shutdown isn't complete for another 30 seconds or so (based on the return to the shell prompt after issuing 'sudo systemctl stop hass', since it runs Core in a venv).

Python must still be doing some cleanup after the logger gets killed, so the HA log seems not to be useful for this issue.
Perhaps a debug parameter on the Python command line that launches HA would be useful. I'll try that, too.

Please PM a copy of the logs and I'll take a look to see if there is anything obviously wrong.

The log you sent looks pretty clean. I suspect it's something running in a thread or executor that it's waiting on to fully shut down.

You can use the profiler.log_thread_frames service (Profiler - Home Assistant) to log what's going on in other threads right before shutdown. You might be able to see something running in there that it's going to wait for.
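
Something along these lines should do it; a rough sketch, assuming the Profiler integration has already been added via the UI (the automation name is just an example):

```yaml
# Dump every thread's current stack frame to the log just before HA stops
automation:
  - alias: "Log thread frames on shutdown"
    trigger:
      - platform: homeassistant
        event: shutdown
    action:
      - service: profiler.log_thread_frames
```

Alternatively, you can call the service manually from Developer Tools right before issuing the stop, which avoids editing configuration at all.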

I'm not sure it's relevant, but before 2024.5.x we would wait for new discoveries to finish, and for config entries that were retrying setup, before shutting down. In 2024.5.x we now cancel them to avoid waiting for them.

Happy to report that I believe I’ve isolated the cause: Tuya

By selectively disabling various integrations and restarting, I found that Tuya (the official integration, not LocalTuya) is the one responsible for the long shutdown delays. With Tuya disabled, it shuts down in ~5 seconds, a reasonably normal duration for a system like this.

Thanks to all for the ideas and suggestions.
