FWIW, I have about four dozen MQTT entities and all are configured via MQTT Discovery (so I am unaffected by the recent changes to the format of YAML-based MQTT configuration). According to the System Health page, the MQTT integration took 2.65 seconds to load.
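For context, a discovery-configured entity is created from a retained config message on the broker rather than from YAML. A minimal illustrative example (topic and payload are made up) using the default homeassistant discovery prefix:

```yaml
# Normally the device itself publishes this; shown here as an HA service call
# purely for illustration of the discovery config format.
service: mqtt.publish
data:
  topic: homeassistant/sensor/garden_temp/config
  retain: true
  payload: >
    {"name": "Garden temperature",
     "state_topic": "garden/temp",
     "unit_of_measurement": "°C",
     "unique_id": "garden_temp_example"}
```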
That’s pretty neat. I’ve always used MQTT discovery for almost everything, so I’ve never had a performance comparison.
Another cool thing with that route is that you could put that script in a separate folder that the folder_watcher integration watches. Then you could make an automation that triggers when that script is modified and does:
check config
reload scripts
run the script
Then you’ve got automatic reloading of MQTT entities after a config change, without a restart (another issue others have run into above, from what I’ve been reading). And it’s fully automated!
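A rough sketch of what I mean, assuming folder_watcher is pointed at a folder like /config/mqtt_scripts and the discovery script is script.mqtt_discovery (both names are placeholders):

```yaml
# configuration.yaml: watch the folder that holds the discovery script
folder_watcher:
  - folder: /config/mqtt_scripts

# Automation: check config, reload scripts, then re-run the discovery script
# whenever a file in that folder is modified.
automation:
  - alias: "Reload MQTT discovery script on change"
    trigger:
      - platform: event
        event_type: folder_watcher
        event_data:
          event_type: modified
    action:
      # check_config only reports problems via a notification;
      # it does not block the following steps.
      - service: homeassistant.check_config
      - service: script.reload
      - service: script.mqtt_discovery   # placeholder script name
```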
Setting up from YAML is usually slower for any integration. Setting up a lot of YAML platforms can be painfully slow (as observed above).
That’s partially driven by Python overhead (which will be much better in Python 3.11) and by the YAML parser being slow because of the enhanced error reporting (which we have to keep). The new Python version will improve the YAML performance as well, but we are still likely 16-20 months away from being able to upgrade since it’s not even released yet.
Problems here with the MQTT integration. I have a dozen devices that are in the MQTT integration through discovery and work well. There are three that are not working. They are all 3-way switches with Tasmota rules that use an EVENT message instead of power1.
I posted about this in another area of this site; that post shows my configuration.yaml, Tasmota setup, and the rules.
The problem is that HA will not send ON payloads via the UI until HA wakes up (for lack of a better term). I have to use an MQTT tool (like the publish page in HA’s MQTT integration, or MQTT Lens); once I do, HA ‘wakes’. As a fix for now, I have an automation at HA startup that sends ON and OFF payloads to wake up HA.
The setup worked well for a couple of years under the old platform: mqtt style of YAML configuration but is now broken. Controlling the switches works as expected through an MQTT tool.
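For reference, the startup workaround looks roughly like this; the topic and payloads are placeholders and would need to match the Tasmota rules:

```yaml
automation:
  - alias: "Wake MQTT 3-way switches after HA start"
    trigger:
      - platform: homeassistant
        event: start
    action:
      # Placeholder topic/payloads; match them to the Tasmota rule triggers.
      - service: mqtt.publish
        data:
          topic: cmnd/switch3way/EVENT
          payload: "ON"
      - delay: "00:00:02"
      - service: mqtt.publish
        data:
          topic: cmnd/switch3way/EVENT
          payload: "OFF"
```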
Yes, it is a compelling reason; there doesn’t seem to be a good “YAML is best” argument for not using UI-based helpers.
But doesn’t this create a problem with sharing code?
I have a package that I share, which as far as I can tell has quite a few people using it (probably at least ten, maybe a lot more), and it relies on a lot of helpers.
I don’t believe I could sensibly share it if I moved to UI-based helpers.
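To illustrate, the helpers ship in the package as plain YAML, roughly like this (names are illustrative); there is no equivalent file I could distribute for UI-created helpers:

```yaml
# packages/shared_package.yaml (illustrative)
input_boolean:
  shared_package_enabled:
    name: "Shared package enabled"

input_number:
  shared_package_threshold:
    name: "Shared package threshold"
    min: 0
    max: 100
    step: 1
```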
That’s way too long for those. Something is either I/O bound (your disk is too slow), CPU bound (your CPU is too slow), blocking your event loop at startup (usually a bug in an integration), or you just have so many integrations that the overhead of loading the Python code slows the system down.
I’m using an external SSD (only the initial boot is read from the SD card).
It’s an RPi4 with 4 GB, so that could be it…?
That gets logged if it happens, doesn’t it? Not the case here.
that might be it
I do also have a lot of entities, and even though I drastically cut my YAML config for them (e.g. template sensors), HA keeps adding entities back because of all the attributes turning into standalone sensors…
Anyway, other than the global debug level (which isn’t good for system performance here), could I bump a single logger component up to debug to check any of the above?
Or send in a Profiler snapshot maybe?
To give you some more insight, the list is currently topped by template (YAML, obviously) and threshold (14 in all); the latter are all UI-configured now…
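The kind of per-integration logging I have in mind would be something like this (the logger names are my assumption, based on the integrations mentioned above):

```yaml
logger:
  default: warning
  logs:
    homeassistant.components.template: debug
    homeassistant.components.threshold: debug
    homeassistant.components.mqtt: debug
```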
Well, maybe the last update/compress did the trick. It’s been running for 48 hours now with no problems. But to answer your question, yes, the data was making it to the DB, and once it went weird it pretty much stayed weird. Will keep an eye on it. Thanks for being interested.