Better logs and history for Hass.IO setups PLEASE

The current implementation of logs and history is archaic. It works when only a few entities are connected, but quickly becomes unusable as the number of entities grows.
I’ve almost completely given up on this; even with the logbook on a local SD card it is unusable.

  • Off-loading the DB and logbook to a USB disk, which would keep the SD card from dying quickly
  • Better use of databases; it’s clear that DB features are not used optimally when running on an external MySQL/MariaDB. I’ve NEVER seen responses this slow from a DB before.
  • Keeping the logbook in RAM; with the current support for larger-memory systems such as the RPi4 and others, there is sufficient RAM to hold the logbook, and it could be replicated to disk in the background if needed
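The first bullet can at least be approximated today: the recorder’s `db_url` option accepts a SQLAlchemy-style URL, so the database can live on a USB disk or an external server instead of the SD card. A minimal sketch, assuming a hypothetical USB mount point of `/mnt/usb` (paths and credentials are placeholders):

```yaml
# configuration.yaml - point the recorder database away from the SD card
recorder:
  # SQLite file on a USB disk mounted at /mnt/usb (hypothetical path):
  db_url: sqlite:////mnt/usb/home-assistant_v2.db
  # ...or an external MariaDB server (hypothetical host and credentials):
  # db_url: mysql://hass:password@192.168.1.10/homeassistant?charset=utf8mb4
```

This only moves the recorder database; the rest of the installation still lives on the SD card.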

One potential solution would be to restrict what’s recorded to the db by setting a whitelist and/or blacklist in recorder:

It’s a good idea anyway since you probably don’t need/want history for every single entity.
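For reference, a minimal sketch of the blacklist approach in `recorder:` (the listed domains and entities are just illustrative examples):

```yaml
# Blacklist approach: record everything except the listed domains/entities
recorder:
  exclude:
    domains:
      - sun
      - updater
    entities:
      - sensor.date   # hypothetical noisy entities
      - sensor.time
```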

That is not a solution, that’s a workaround. Furthermore, I’ve already restricted it to the point of it being unusable.
I’ve given up on exclusions and am working with inclusions now, and not even our automations are logged.


Yes, whitelisting from recorder: is the way to go. When you include/exclude from logbook: you’re only managing presentation.

Ah gotcha. Figured I’d share that in case you weren’t aware.


These are my restrictions, but I still see useless entries like sun… and time…, which are of no use at all except in specific debugging situations, or if you want to react on ‘golden hour’ or something like that.

recorder:
  db_url: !secret mysqlconnection
  include:
    domains:
      - light
      - input_boolean
      - media_player
      - sensor
      - switch
      - person
      - climate
      - notify
  purge_keep_days: 5
  purge_interval: 5

logbook:
  include:
    domains:
      - light
      - input_boolean
      - media_player
      - sensor
      - switch
      - person
      - climate
      - notify

There should be a default filter setup that is adequate and not slow as molasses, and then you can add more items to be logged if you need to, with the risk of it being slow as s…

You shouldn’t need to duplicate recorder: settings with your logbook: settings. logbook will only show what is being recorded.

You also should not be seeing any sun history with that config, so I wonder if you should purge or delete your current db and start a fresh one?
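If the database has grown past the configured retention, a manual purge can be triggered with the `recorder.purge` service instead of deleting the database; a sketch of the service call (the values are just examples):

```yaml
# Developer Tools -> Services: trigger a manual purge
service: recorder.purge
data:
  keep_days: 5
  repack: true   # reclaim disk space after purging (useful for SQLite)
```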

Hi Cogneato

I’ve started over with DBs twice, as they just kept growing (despite the purge settings).
I have to specifically exclude sun and time stuff (which I’ve done of course).

The logbook is a reflection of my testing, and moving the settings to the recorder 🙂

Ah ok.

I have

recorder:
  purge_keep_days: 4
  purge_interval: 3
  include:
    domains:
      - light
      - switch
      - person
      - sensor
    entities:
      - binary_sensor.remote_ui
      - cover.garage_door

# Enables support for tracking state changes over time.
history:


# View all events in a logbook
logbook:

and I never see history data for sun.sun (or anything not in that list)

This is not relevant to the feature request of course, but is it possible to exclude with patterns? All my ESPHome entities report SSID, uptime, voltage, WiFi signal, ESPHome version, IP, etc. It would be great to exclude by something like sensor.*_uptime
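Glob-style filters did eventually land in the recorder integration; assuming a Home Assistant version that supports the `entity_globs` option, the ESPHome diagnostics could be excluded like this (patterns are examples):

```yaml
recorder:
  exclude:
    entity_globs:
      - sensor.*_uptime
      - sensor.*_wifi_signal
      - sensor.*_ssid
```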

What a coincidence, I have been working the last couple of evenings on a small custom component to cache the logbook in RAM.

I didn’t test it much yet but I thought I would put it up now, in case anybody wants to play with it: https://github.com/amelchio/logbook_cache

This is just for the logbook, not the history views.


Hi Amelchio, I looked for it in HACS, but couldn’t find it, is it a plugin?

For now, you first need to add the above URL as a custom repository (in HACS settings). If it works out I will submit it to HACS proper.
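If it installs like most single-key custom components, enabling it should just be a matter of adding the component’s key to configuration.yaml and restarting. This assumes logbook_cache follows the usual bare-key convention; check the repository README for the actual setup:

```yaml
# configuration.yaml - enable the custom logbook cache (assumed bare key)
logbook_cache:
```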

Ahh, OK, I see. And how will a success / failure manifest itself?

I have reduced my logbook loading time to low single digit seconds (however, it does need some time to warm up in the background after each restart). Other people seeing similar results would be a success.

Let me make it more user friendly and get back, hopefully tonight or tomorrow.

Hi Amelchio, wow that made the logbook usable, it seems to work perfectly so far.
You are right about the first time lookup, that takes some extra time, but subsequently it helps a LOT.

That’s a usable logbook!!

Thanks guys, that is good to hear. I have submitted it for default inclusion in HACS.

@fribse Just to be clear: it will automatically warm up in the background. The first lookup is only slow if you access it before it has warmed up.

BTW, your “golden hour” logs are probably from Deconz, it has a builtin daylight sensor that is in the sensor domain.


I also struggle with history, logbook, and recorder. It would be useful if we could offload those to an external USB drive or USB thumb drive. This would save the SD.

Filtering by wildcard would also be awesome!

One additional request: specifying history and recorder below default_config should override the default config. Below is my config:

default_config:

history:
  include:
    domains:
      - sensor
      - switch
      
recorder:
  purge_keep_days: 7
  purge_interval: 1
  exclude:
    domains:
      - automation
      - group
      - cover
      - device_tracker
      - input_boolean
      - input_datetime
      - input_number
      - person
      - script
      - updater
      - sun
      - calendar
      - time_date
      - weather
      - yr
      - zone
    entities:
      - sun.sun
      - sensor.kitchen_lamp_power
      - sensor.front_wall_lights_power

but sadly I still see the unneeded entries in the History tab: date, last boot, time, yr_symbol.
I’d like to remove all of them to keep the database to a minimum. This will speed up making snapshots and they will be much smaller.

Since you can boot from USB drives now, it doesn’t make sense to move only the DB. You can flash an SSD and move the entire installation off the SD card.
(Though we are still waiting for boot support from the Pi Foundation for the Pi 4.)