I’d like to send Home Assistant metrics to a Grafana cloud instance. From what I can tell, HA has a built-in exporter but I need to run my own Prometheus instance somewhere to scrape it.
Grafana actually has a guide for this on their website, but it does not explain where to run the Grafana Agent. I don’t think it’s possible with Hass.io.
I found this add-on on GitHub, but I can’t find it in the add-on store:
Does anyone have Prometheus running successfully as an add-on?
If I understand the Grafana Cloud doc you referred to, you don’t need Prometheus.
The Grafana Agent will use the Prometheus endpoint provided by the HA exporter to pull the data.
I couldn’t work out where that Grafana Agent runs either, but I assume it’s somehow part of the Grafana Cloud solution.
Yeah, the docs are pretty confusing. From what I can tell, Grafana Agent is not part of their cloud service; it has to be installed somewhere in your own environment.
I believe Grafana Agent is meant to be an easier install that bundles Prometheus, Loki, and a few other services into one package.
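For anyone who goes this route, a minimal sketch of a Grafana Agent (static mode) config that scrapes the HA exporter and remote-writes to Grafana Cloud might look like this. The remote_write URL, credentials, and HA host are all placeholders you’d get from your own Grafana Cloud stack, and I’m assuming the default /api/prometheus exporter path:

```yaml
metrics:
  wal_directory: /tmp/grafana-agent-wal
  global:
    scrape_interval: 60s
    remote_write:
      # URL and credentials come from your Grafana Cloud stack page (placeholders here)
      - url: https://YOUR_STACK.grafana.net/api/prom/push
        basic_auth:
          username: YOUR_METRICS_INSTANCE_ID
          password: YOUR_CLOUD_API_KEY
  configs:
    - name: hass
      scrape_configs:
        - job_name: hass
          metrics_path: /api/prometheus
          # Long-lived access token from your HA user profile
          bearer_token: FROM_USER_SECURITY_IN_HA
          static_configs:
            - targets: ['HA_HOST:8123']
```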
The project is archived, so basically abandoned. That explains why it has been removed from the add-on store.
If you want to use it, you’ll have to add its repository to your add-on store manually.
I’ve just done this, but since I run Unraid I installed Prometheus there using the prometheus-docker app. That part is a requirement: you need Prometheus set up somewhere with network access to HA.
Chucked my prometheus.yml in the config dir with my Home Assistant scrape config inside:
# my global config
global:
  scrape_interval: 60s # Scrape every 60 seconds. The default is every 1 minute.
  evaluation_interval: 60s # Evaluate rules every 60 seconds. The default is every 1 minute.

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  - job_name: "hass"
    metrics_path: '/api/prometheus'
    bearer_token: "FROM_USER_SECURITY_IN_HA"
    scheme: 'http'
    static_configs:
      - targets: ['HA_URL_INC_PORT_IF_NEEDED']
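One way to sanity-check the target before pointing Prometheus at it (host, port, and token are placeholders; the token is a long-lived access token from your HA user profile):

```shell
# Should return plain-text Prometheus metrics if the exporter is enabled
curl -H "Authorization: Bearer YOUR_LONG_LIVED_TOKEN" \
  http://HA_HOST:8123/api/prometheus
```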
On the Home Assistant side, I just have this in my configuration.yaml.
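(The original snippet didn’t survive the thread; as a sketch, the prometheus integration with an entity-glob filter matching the sensor.system_* entities would look something like this. The glob is my assumption based on the filter mentioned further down.)

```yaml
prometheus:
  filter:
    include_entity_globs:
      - sensor.system_*
```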
To get host metrics, you need to use the System Monitor integration; you can add this via Devices in HA.
Once that’s working, the config above with - sensor.system_* will let Prometheus scrape those metrics.
There’s a lot of junk to filter in your queries; this PromQL query is a starting point to give you CPU usage:
homeassistant_sensor_unit_percent{entity="sensor.system_monitor_processor_use"}