Grafana Alloy Add-on: Ship HAOS Logs to Loki (Promtail Replacement)
If you’ve been using the official Promtail add-on to ship your Home Assistant logs to Loki, you may have noticed it silently stopped working sometime after upgrading to HAOS 11+. I built a drop-in replacement using Grafana Alloy and published it as a community add-on.
Repository: https://github.com/ecohash-co/ha-addon-alloy
Why Promtail is Broken on Modern HAOS
The official Promtail add-on (v2.2.0) ships Promtail 2.6.1, which was released in 2022. Since then:
- systemd 252+ introduced a compact journal format that Promtail 2.6.1 can’t read
- HAOS 11+ ships with this newer systemd version
- The add-on hits `received error during sdjournal follow: Timeout expired` in a loop
- Promtail itself has been deprecated by Grafana, with end of life on March 2, 2026
- The official HA add-on repository hasn’t been updated to address this
The result: your HAOS instance generates logs, but nothing makes it to Loki. No errors in the HA UI — it just silently fails.
What is Grafana Alloy?
Grafana Alloy is the official successor to Promtail, Grafana Agent, and Grafana Agent Flow. It’s a single binary that can collect logs, metrics, and traces using a component-based pipeline architecture (think modular building blocks connected in a DAG).
For our purposes, the key advantage is simple: it can read modern systemd journals.
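For illustration, a minimal Alloy pipeline that tails the journal and pushes to Loki looks roughly like this (component labels and the URL are placeholders, not the add-on's exact generated config):

```alloy
// Read the systemd journal (handles the newer compact format)
loki.source.journal "haos" {
  path       = "/var/log/journal"
  forward_to = [loki.write.default.receiver]
}

// Push everything to a Loki instance
loki.write "default" {
  endpoint {
    url = "http://<your-loki-ip>:3100/loki/api/v1/push"
  }
}
```

Each component's output feeds the next via `forward_to`, which is the DAG-of-building-blocks idea in practice.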
Installation
1. Add the repository
- Settings > Add-ons > Add-on Store
- Click the overflow menu (three dots, top-right) > Repositories
- Paste: `https://github.com/ecohash-co/ha-addon-alloy`
- Click Add, then Close
2. Install and configure
- Find Grafana Alloy in the store and click Install
- In the Configuration tab, set your Loki URL:
```yaml
loki_url: "http://<your-loki-ip>:3100/loki/api/v1/push"
log_level: info
```
- Start the add-on
- Enable Start on boot and Watchdog (recommended)
3. Verify
Check the add-on log — you should see:
```
Grafana Alloy for Home Assistant
Loki URL: http://192.168.1.45:3100/loki/api/v1/push
Journal path: /var/log/journal
```
And in Grafana, query `{job="systemd-journal"}` — your HAOS logs should start appearing.
What Gets Shipped
The add-on reads the full HAOS systemd journal and forwards everything to Loki with useful labels:
| Label | Example | Description |
|---|---|---|
| `job` | `systemd-journal` | Static identifier |
| `unit` | `docker.service` | Systemd unit |
| `hostname` | `homeassistant` | Machine hostname |
| `syslog_identifier` | `addon_core_whisper` | Process/add-on name |
| `container_name` | `addon_a0d7b954_spotify` | Docker container |
| `level` | `info`, `error` | Log priority |
| `transport` | `journal`, `stdout` | How the log entered journald |
This means you can filter in Grafana like:
- `{job="systemd-journal", level="error"}`: all errors across HAOS
- `{job="systemd-journal", syslog_identifier="homeassistant"}`: just HA Core logs
- `{job="systemd-journal", unit="docker.service"}`: all add-on container logs
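Beyond simple filters, LogQL metric queries work on these labels too. For instance, a rough per-service error breakdown over the last hour (illustrative; adjust the time range to taste):

```logql
sum by (syslog_identifier) (
  count_over_time({job="systemd-journal", level="error"}[1h])
)
```

This makes a handy Grafana panel for spotting which add-on is misbehaving at a glance.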
There’s also a debug UI at http://<haos-ip>:12345 where you can inspect the pipeline, see component health, and troubleshoot.
Don’t Have Loki Yet? Here’s Why You Should
If you’re running a homelab and don’t have centralized logging yet, you’re flying blind when things break. Here’s a minimal setup to get started.
Why Loki?
- It’s free and lightweight. Loki is designed to be cost-effective — it indexes labels, not the full log content, so storage requirements are modest.
- You already have the dashboards. If you run Grafana (and many HA users do for energy dashboards, sensor history, etc.), Loki plugs right in as a data source. Same query interface, same alerting.
- “What happened at 3 AM?” — When Home Assistant restarts unexpectedly, an automation misfires, or a Z-Wave device drops off, the journal has the answer. But HAOS only keeps a limited journal buffer. Loki gives you weeks or months of searchable history.
- Multi-node visibility. If you run other Docker hosts, NAS boxes, or network gear alongside HAOS, Loki can ingest logs from all of them in one place. One Grafana dashboard, every device.
Quick Loki Setup (Docker)
On any machine with Docker (a NAS, a Raspberry Pi, a VM — doesn’t need to be your HA host):
```yaml
# docker-compose.yml
services:
  loki:
    image: grafana/loki:latest
    container_name: loki
    ports:
      - "3100:3100"
    volumes:
      - loki-data:/loki
      - ./loki-config.yml:/etc/loki/local-config.yaml:ro
    command: -config.file=/etc/loki/local-config.yaml
    restart: unless-stopped

volumes:
  loki-data:
```
```yaml
# loki-config.yml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

analytics:
  reporting_enabled: false
```
```
docker compose up -d
```
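Once the container is up, you can sanity-check it against Loki's readiness endpoint before pointing the add-on at it (substitute your Loki host):

```
# Returns "ready" once Loki can accept pushes
curl http://<loki-ip>:3100/ready
```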
Then add Loki as a data source in Grafana: Connections > Data Sources > Add > Loki > URL: http://<loki-ip>:3100.
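If you manage Grafana declaratively, the same data source can also be provisioned from a file. A sketch, assuming Grafana's standard provisioning directory (the file name is arbitrary):

```yaml
# /etc/grafana/provisioning/datasources/loki.yml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://<loki-ip>:3100
    isDefault: false
```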
Grafana Starter Query
Once logs are flowing, try this in Grafana’s Explore view:
```logql
{job="systemd-journal"} |= "error" | line_format "{{.syslog_identifier}}: {{__line__}}"
```
This shows all error-containing lines with the originating service name — great for a quick health check.
Technical Details
- Alloy version: 1.13.1
- Base image: Debian Bookworm (required — the Alloy binary is glibc-linked and won’t run on Alpine/musl)
- Architectures: amd64, aarch64
- Source: https://github.com/ecohash-co/ha-addon-alloy
- License: MIT
The add-on uses s6-overlay to manage the Alloy process. On startup, a oneshot init script reads your add-on options, detects the journal path, and generates an Alloy config. The longrun service then starts Alloy with that config.
For advanced users, the additional_config option lets you append arbitrary Alloy config blocks — for example, to scrape the home-assistant.log file in addition to the journal.
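As an example, a hypothetical `additional_config` block for tailing `home-assistant.log` might look like this (the `loki.write.default.receiver` reference is an assumption about the generated config; check the debug UI for the actual component labels in your instance):

```alloy
// Hypothetical: tail the HA Core log file in addition to the journal
local.file_match "ha_core" {
  path_targets = [{"__path__" = "/config/home-assistant.log"}]
}

loki.source.file "ha_core" {
  targets    = local.file_match.ha_core.targets
  forward_to = [loki.write.default.receiver]
}
```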
Feedback Welcome
This is a v1.0 born out of personal frustration with the broken Promtail add-on. If you run into issues or have feature requests, please open an issue on the GitHub repo.
Happy logging!