Watch multiple logbook entries concurrently

Is there a good way to monitor multiple Logbook entries in parallel? When troubleshooting (usually automations), I like to watch log entries for a number of entities concurrently: the motion sensor AND the contact sensor AND the … etc. I can do this using multiple tabs, with Logbook filtered to what I want in each (but I still have to refresh each in turn). Is there an easier/better way? I submitted a feature request for the ability to add multiple selections to the Logbook Entity filter, but so far no one has shown interest. I could do it with SQL queries against the MariaDB that holds the logbook entries, or maybe someone can tell me either a better way, or a way to run those SQL queries in a "logbook"-style, auto-updating web front end to make it easier.

Or, if the recorder could be set up to record to a log file instead of a DB, I could easily squirt that into something like New Relic (which I can use for free) and run live queries against it. Similar to New Relic, I could squirt logbook entries into InfluxDB locally.
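If I went the Influx route, the influxdb integration looks like it only needs a few lines of YAML. A minimal sketch, assuming an InfluxDB 1.x instance with default port; the host, database, and entity IDs below are placeholders:

```yaml
# configuration.yaml: stream state changes to a local InfluxDB 1.x.
influxdb:
  host: 192.168.1.10          # placeholder address
  database: home_assistant
  include:
    entities:
      - binary_sensor.hall_motion
      - binary_sensor.front_door_contact
```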

Set the logger to debug in the YAML, for the domains/components/entities.
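A minimal sketch of what that looks like in configuration.yaml (the logger names below are examples, not a prescription):

```yaml
# configuration.yaml: raise verbosity only for what you are debugging.
logger:
  default: warning                              # keep everything else quiet
  logs:
    homeassistant.components.automation: debug  # automation traces
    homeassistant.components.binary_sensor: debug
```

Note that the keys under logs are Python logger names (integrations/components), not entity IDs. If I remember correctly, there is also a logger.set_level service for raising a level at runtime without a restart.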

Yeah, that would work, but it is a little cumbersome. I was hoping to find something that could be done very quickly, on the fly.

Why? I see no reason to describe it that way; it is rather simple to set up and easy to use.

It just means going into the config YAML, adding the specific entities to the logger config, then restarting HA. Then, as soon as I discover during my troubleshooting that I want to add something else or remove something, I have to repeat those steps again.

I just think it would be easier if the Logbook had the ability to multi-select a few entities for its filter (which no one else seems to want, as my feature request has no votes), or some other way to get what I am trying to do. Currently I am looking at some pre-built SQL queries that I can quickly plug entity names into as a workaround.
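Something like this rough template, assuming the classic recorder schema where the states table still carries an entity_id column (newer schema versions join through states_meta instead); the entity IDs are placeholders:

```sql
-- Recent state changes for a hand-picked set of entities.
SELECT entity_id,
       state,
       last_updated
FROM states
WHERE entity_id IN ('binary_sensor.hall_motion',
                    'binary_sensor.front_door_contact')
ORDER BY last_updated DESC
LIMIT 50;
```

Re-running that in a MariaDB client gets reasonably close to a merged, multi-entity logbook.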


Ah, that is what you mean by dynamic. I agree it would be a great idea, although I have to say I have lived with the current implementation.

The root of the "problem", if indeed that is what most people would call it, is the underlying SQL database, which has to be populated and maintained both in terms of configuration and execution.

With SQL experience it is fairly simple to interrogate the recorder (history) entity state traffic (updates and refreshes; a refresh doesn't necessarily update a value!). Of course the recorder needs to be set up for the domains/components/entities, but therein lies a paradox: it still requires YAML changes, which still involve HA restarts each time you enable the recording and then disable it (assuming you do disable it, otherwise the recording will consume disk space).

I have not gone into the SQL for years, but it may be possible to dynamically change the table row/column value(s) holding the recorder setup; I don't know. If it is, then an UPDATE query executed on the table could achieve this. I am unfamiliar with the HA core execution code (never needed to go into it), which may well read tables at startup and cache them, so you could still be in a situation where, after modifying a table row/column, you would need to restart HA or reload the cache. Aside from that, all of this might be considered bad practice at best, and at worst a source of corruption, i.e. applying updates to a running system from outside the system (your external SQL update/script).
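For reference, scoping the recorder is just a few lines of YAML along these lines (a sketch; the entity IDs are placeholders):

```yaml
# configuration.yaml: record only the entities under investigation
# and keep the table small while debugging.
recorder:
  purge_keep_days: 3
  include:
    entities:
      - binary_sensor.hall_motion
      - binary_sensor.front_door_contact
```

but, as above, every change to it means another restart.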

Another option is using the supervisor APIs; however, changes to core settings made this way still require an HA restart, so you are back to first base, in the sense that one might as well use the YAML, which requires a restart of the core anyway.

In summary: all ifs and buts, a bit of investigation into SQL and the HA core code needed, trial and error, questions of good/bad practice, and crafting your read-only SQL queries. So surely it is just "safer" and quicker to amend the YAML and restart? Takes seconds.

You could always create a Lovelace view with multiple logbook cards on it. That just requires a Lovelace reload, not an HA restart.
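For example, in raw YAML mode, a debug view could look like this (entity IDs are placeholders):

```yaml
# A Lovelace view with one logbook card per entity under investigation.
views:
  - title: Debug
    cards:
      - type: logbook
        hours_to_show: 4      # only show recent activity
        entities:
          - binary_sensor.hall_motion
      - type: logbook
        hours_to_show: 4
        entities:
          - binary_sensor.front_door_contact
```

If memory serves, the logbook card's entities option accepts a list, so a single card can even merge several entities into one stream, which is close to the multi-select filter requested above.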