Time Series Databases and Stacks in 2025: InfluxDB+Telegraf vs Prometheus+Exporters (Node and cAdvisor)

I didn’t get much traction on this thread about DBMS in 2025, so maybe SQLite is good enough for most folks. But I’m hoping this topic sparks more discussion! :blush:

Bottom Line - I want to generate nicer time series graphs than Home Assistant natively provides, covering HA data as well as host and container stats. Grafana will be my visualization layer, but I want to get started with the right backend stack for those three sources.

My research keeps coming back to these two time series stacks:

  • InfluxDB+Telegraf
  • Prometheus+Exporters (Node and cAdvisor)

What are your experiences with either/both of these and why would you recommend one over the other?

Here’s a ChatGPT summary of a longer chat session in which I bounced ideas from various other sources off it:

| Feature / Capability | InfluxDB + Telegraf | Prometheus + Node Exporter + cAdvisor |
| --- | --- | --- |
| Primary Purpose | General time series + telemetry data collection | Metrics collection and monitoring for infrastructure |
| Best At | Home Assistant integration and general-purpose time series logging. Can also monitor host and Docker via Telegraf. | Monitoring containerized infrastructure and host metrics. Prometheus excels at system metrics scraping. |
| Home Assistant Compatibility | :white_check_mark: Native integration (via the InfluxDB integration) | :warning: Not natively supported; requires custom exporters or MQTT-to-Prometheus bridges |
| Supports Numeric & String Data | :white_check_mark: Yes (floats, ints, booleans, strings, timestamps) | :x: Numeric (float64) only; no strings or complex types |
| Push vs Pull Model | Push: Home Assistant pushes to InfluxDB. Telegraf also pushes host and container metrics. | Pull: Prometheus scrapes exporters like Node Exporter and cAdvisor at set intervals. |
| Data Collection Agent | telegraf (plugin-based: CPU, mem, docker, mqtt, etc.) | node_exporter (host) + cAdvisor (Docker containers) |
| Docker Container Monitoring | :white_check_mark: Via Telegraf's Docker input plugin | :white_check_mark: Via cAdvisor |
| Ease of Setup | :warning: Medium (config files required, more tuning) | :white_check_mark: Easy (especially with Prometheus + exporters pre-baked) |
| Data Retention & Downsampling | :white_check_mark: Built-in (retention policies, continuous queries) | :white_check_mark: Yes, but requires more manual config |
| Grafana Integration | :white_check_mark: Excellent | :white_check_mark: Excellent |
| Resource Usage | :white_check_mark: Lightweight (especially the Telegraf agent) | :warning: Slightly heavier with multiple exporters |
| Suitability for Smart Home / IoT Projects | :white_check_mark: Excellent (rich support for sensors and MQTT) | :warning: Less ideal unless used strictly for system monitoring |
| Use Without Internet Access | :white_check_mark: Fully offline capable | :white_check_mark: Fully offline capable |
| Long-Term Storage / Scalability | :white_check_mark: Scalable (especially InfluxDB 2.x) | :warning: Needs external long-term storage (e.g. Thanos, Cortex) |
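
For reference on the pull side, a minimal Prometheus scrape config for Node Exporter and cAdvisor would look roughly like this (untested sketch on my part; it assumes both exporters run on the same host on their default ports):

```yaml
# prometheus.yml -- rough sketch, not a tested config
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node        # host metrics from node_exporter (default port 9100)
    static_configs:
      - targets: ["localhost:9100"]
  - job_name: cadvisor    # per-container metrics from cAdvisor (default port 8080)
    static_configs:
      - targets: ["localhost:8080"]
```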

Without mentioning (yet) which way I’m leaning, I’d love to hear from those of you who have implemented either (or both) of these stacks. What challenges did you face, what worked well, and what would you do differently?

Thanks!

Edit 1 - I added a related poll to the Facebook Group here.

Personally, I'm using Prometheus and Thanos with MinIO as long-term storage and Vector as a data pipeline tool. I already had this setup before HA, and I have experience with these tools at scale at work. I would not recommend that approach for a beginner; maintenance can be complicated.
For a fresh start I would try InfluxDB (I have never used it). Configuration looks much easier.
Maybe I will spin up my own instance. I currently have more than a year of data in Thanos (I recently extended retention from 1 to 3 years).

This is good info. I’d only suggest an edit to the “compatibility” row: HA’s influxdb integration can both export and import data, while the core Prometheus integration can only export HA data. I believe there are a few HACS integrations to import Prometheus data as sensors.
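
For anyone following along, those two directions look roughly like this in configuration.yaml (a hedged sketch, not copy-paste ready: hosts, tokens, bucket names and the example query entity are placeholders, and the import example uses the older v1-style query syntax, so check the integration docs for your schema):

```yaml
# Export: HA pushes state changes to InfluxDB (v2 API shown); values are placeholders
influxdb:
  api_version: 2
  host: influxdb.local
  port: 8086
  token: !secret influxdb_token
  organization: home
  bucket: home_assistant
  include:
    domains:
      - sensor
      - binary_sensor

# Export: HA exposes its states at /api/prometheus for Prometheus to scrape
prometheus:
  namespace: hass

# Import: the influxdb sensor platform pulls a query result back into HA
# (hypothetical query against HA's default schema; adjust for yours)
sensor:
  - platform: influxdb
    host: influxdb.local
    queries:
      - name: mean_outdoor_temp_1h
        unit_of_measurement: "°C"
        group_function: mean
        where: "entity_id = 'outdoor_temperature' AND time > now() - 1h"
        measurement: '"°C"'
        field: value
```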

I’ve used InfluxDB since I first installed HA (back then there was little to no historical storage, so we had to do it ourselves). It’s super easy to set up and requires basically no maintenance. I also set up Chronograf as a GUI for InfluxDB which, in addition to graphs and dashboards, helps with composing quick queries and configuring schemas and retention periods.

Telegraf is also an amazing little tool. Note that it’s not needed if you only want to export your HA data to InfluxDB, but I have it installed on all my Linux hosts, even my OpenWrt router, to gather additional data. Occasionally I create HA sensors from Telegraf-collected data because I need to trigger automations (like a low disk space alert). Also note that Telegraf has MQTT and Prometheus output options as well, meaning you can make HA sensors from Telegraf without InfluxDB, and can ship both HA and Telegraf data to Prometheus without InfluxDB.
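
To illustrate the MQTT route (purely hypothetical names here; the topic and JSON layout depend on how your Telegraf MQTT output and its serializer are configured):

```yaml
# configuration.yaml -- hypothetical example: assumes Telegraf publishes its
# disk metrics as JSON to telegraf/nas/disk via the MQTT output plugin
mqtt:
  sensor:
    - name: "NAS root disk used percent"
      state_topic: "telegraf/nas/disk"
      value_template: "{{ value_json.fields.used_percent | round(1) }}"
      unit_of_measurement: "%"

# ...which can then drive an automation like the low-disk-space one mentioned above
automation:
  - alias: "Warn when the NAS disk is nearly full"
    trigger:
      - platform: numeric_state
        entity_id: sensor.nas_root_disk_used_percent
        above: 90
    action:
      - service: notify.notify
        data:
          message: "NAS root filesystem is over 90% full"
```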

Thanks! I’m trying to minimize my use of one-trick tools with this new build in favor of more core functionality. So, knowing that the core InfluxDB integration can also create sensors for use back in HA (like host/container data from Telegraf) is very helpful. You’re right though, it may make more sense to pipe those stats directly to HA.

I’ll check out Chronograf too. Is there a reason you didn’t go with Grafana for visualization?

Chronograf was most useful with InfluxDB 1.x, where it could be used to administer databases. It still works with 2.x, but all the admin functionality is disabled. The only reason I can think of for choosing it over (or, more likely, in addition to) Grafana is the pre-built dashboards for specific Telegraf input plugins, but there are probably equivalents for all of them in the Grafana dashboard gallery, or they can be recreated quickly.

I’m using InfluxDB 2.x myself and have recently evaluated several paths forward, given that v2 is now in maintenance mode. Unfortunately, v3 OSS seems less mature (e.g. no Homebrew package on macOS, the UI is still in beta, no Unix socket support, …), and there are no migration tools, which can make transferring large existing databases difficult.

I’ve also looked at QuestDB as a drop-in replacement for InfluxDB (it ingests the same InfluxDB line protocol). It looked really good, but you need to be careful to feed it dense data: when supplied with sparse data (such as an InfluxDB export), the storage volume can balloon and query speed suffers. One way to densify the data is to have a Telegraf instance merge data points by timestamp, but I don’t fancy adding another infrastructure component just for that.

One thing to note about Prometheus and VictoriaMetrics (the same limitation is noted in the table): they only support float values. In the HA context, that means no support for entities with string states (think "on", "charging", …). That’s a deal-breaker for me.
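
If you did need one of those string-state entities in a float-only backend, one workaround is to map the state to a number inside HA first and export that helper instead. A rough sketch (the entity name and mapping are made up):

```yaml
# configuration.yaml -- hypothetical template sensor that turns a string state
# into a numeric code a float-only TSDB can store
template:
  - sensor:
      - name: "EV charger state code"
        state: >
          {% set codes = {'off': 0, 'charging': 1, 'complete': 2} %}
          {{ codes.get(states('sensor.ev_charger_state'), -1) }}
        state_class: measurement
```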

I use InfluxDB for long-term storage of my HA data and Prometheus + exporters for monitoring and alerting for all my homelab components. I think they complement each other well.