Use an external logging service in HA OS

I was hoping to send all my logs for supervisor, HA and all my addons to Loki. Then I could add that as a data source in Grafana and use that for analyzing and monitoring my logs. But I ran into a major challenge with this. There’s really no way to actually get to the logs for any of the containers besides homeassistant (and that’s only because it logs to disk in /config).

It looks like HA OS is set up to use journald by default, so the logs are on the host under /var/log/journal, but that location is inaccessible from within HA OS. Besides, the preferred way to collect these logs would be to install a logging driver plugin. Grafana provides one for Loki called Promtail; you configure it as described here. There are other options as well, like Fluentd and Fluent Bit, which provide their own logging plugins for Docker.
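
For reference, on a plain Docker host (not HA OS) you would install such a plugin with `docker plugin install` and then point Docker at it in /etc/docker/daemon.json. A minimal sketch for the Loki Docker driver might look like this (the loki-url is a placeholder for your own Loki endpoint):

{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push"
  }
}

On HA OS there is no supported way to edit that file, which is exactly the problem described above.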

I tried installing Promtail as the default logging driver on a test system to see what would happen. It put the system in a permanent boot loop and bricked it, so clearly this isn’t supported right now. It would be great if it were possible to use a different logging driver to send logs off to an external aggregator for analysis and monitoring.

I was able to wire up Grafana Loki + Promtail in a Supervisor-based Home Assistant installation. No customization needed on the HA end.

I had to use Portainer, though, for managing the Loki and Promtail Docker containers. If you need, I can elaborate on how I got it working.

Sure, that would be great! If you could share details I’d love to see them. I’ve actually been working on addons for those two to make this easier going forward. Loki I was able to get working pretty easily; I’m struggling a bit with Promtail right now.

If you’re running Promtail as a container, though, is it just looking at the log files it can see, like /config/home-assistant.log? I was really hoping to feed all the logs from across Supervisor into Loki, but I couldn’t figure out a way to do that, since most of the containers don’t log to any file I can see. They simply log to stdout, which is only captured as a file on the actual OS, in a place not visible to any of the containers. At least to my knowledge; if you’ve cracked this problem then please do share, as I’m very interested.

But either way would definitely like to see anything you were willing to share about your setup with those two.

In short, I’m using Promtail to tail journald logs. Those contain everything, from Home Assistant core to all addons and even AppDaemon if you happen to use that.

The gist is to expose the journald logs to Docker, and then scrape them with Promtail. Something like this:

This in docker compose:

  promtail:
    image: grafana/promtail:2.2.0
    volumes:
      - /usr/share/hassio/share/loki:/etc/loki/
      - /var/log/journal:/var/log/journal
    command: -config.file=/etc/loki/promtail.yaml
    depends_on:
      - loki
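
The compose snippet above depends on a `loki` service. A minimal companion definition could look something like this (the version tag is just an example; the grafana/loki image ships a default config at /etc/loki/local-config.yaml):

  loki:
    image: grafana/loki:2.2.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml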

And this in your promtail config:

scrape_configs:
  - job_name: journal
    journal:
      json: false
      max_age: 12h
      path: /var/log/journal
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels:
          - __journal__systemd_unit
        target_label: unit
      - source_labels:
          - __journal__hostname
        target_label: nodename
      - source_labels:
          - __journal_syslog_identifier
        target_label: syslog_identifier
      - source_labels:
          - __journal_container_name
        target_label: container_name
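
Once Promtail is shipping logs, a quick sanity check in Grafana’s Explore view (assuming the labels above) is a LogQL query such as:

{job="systemd-journal"}
{job="systemd-journal", container_name="homeassistant"}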

Let me know if this helps you to get started.

That completely makes sense. So if you create the container directly via a normal Docker setup, you can map /var/log/journal in. Which unfortunately is not something you can do as an addon, since addons are limited to mapping in config, ssl, addons, backup, share, or media.
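
For illustration, that mapping is declared in the add-on’s manifest, and only those folders are accepted by Supervisor. A sketch of the relevant part of an add-on config (the name is hypothetical):

  name: Promtail
  map:
    - share:rw
    - config:ro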

Hm ok I’ll have to think about this. Thanks!

Yeah, that’s why I built it using the Portainer addon. It’s still fully based on what Home Assistant provides.

Admittedly you can shoot yourself in the foot with that, but provided you know what you are doing, you can work around some Supervisor limitations such as this one.

Well I liked this idea so much I decided to make a PR so people could make addons that provide logging services. It’s in draft right now since I want to see what the maintainers think of the concept before investing more time.

Good news! The PR was merged! I’m not sure exactly when it will be generally available, but I’d expect the 2021.04.1 release. That’s how core works anyway: point releases are only for bugfixes, and enhancements wait for the next major version. Although I do see Supervisor 2021.03.6 has an enhancement in it, so perhaps Supervisor works differently.

Anyway, I think that closes this out. Going to get cracking on my Promtail addon. Thanks for the help!

@CentralCommand That’s brilliant. I’m looking forward to seeing your addon. Happy to beta test it if you need volunteers.

Btw. are you planning to implement an addon only for Promtail, or for both: Promtail and Loki?

Both. Actually I’ve kind of made them already; they’re over here: . The Loki one seems to work fine, but it’s hard to tell without the Promtail one feeding it logs. I just got the Promtail one done last night, and it does start up, but obviously until the next version of Supervisor comes out it can’t really do much.

I did put in a way to add additional scrape configs, though, in case you want to pull in log files created by add-ons that don’t do all their logging to stdout. I was planning to test the full setup today by using that option and pointing it at the HA logs to see how it goes.
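
That option can be sketched as an extra Promtail scrape job tailing a file path. Assuming /config is mapped into the add-on container, something like this (the job name is just an example):

  - job_name: homeassistant_file
    static_configs:
      - targets:
          - localhost
        labels:
          job: homeassistant
          __path__: /config/home-assistant.log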

Just a fair warning though: if you do install it at this point, expect issues and potentially rapid-fire releases as I test and get things going. My hope is to get it stable enough to do a formal launch post around when that feature comes out in Supervisor (so probably early April).

Nice work! I’ll be following the development.

I just found this thread because I’m struggling with the logs all over HA OS myself. This looks like it will tame that problem, so I’m going to give it a go now. Thanks! :slight_smile:

@CentralCommand Nice job!
Based on your work, I created a Filebeat add-on.

It’s my first add-on and I’m neither a Home Assistant expert nor an Elastic expert, but it works for me. In case someone is interested, it’s available here on GitHub.