We’ve all been there—relying on a basic “Disk Usage %” sensor, only to have a drive “ghost” us without warning because we weren’t monitoring the underlying S.M.A.R.T. data. While there are existing ways to track this, I felt a middle ground was missing between “too simple” and “too complex.”
Why smart-sniffer?
If you’ve looked into disk monitoring for Home Assistant before, you’ve likely seen these common paths:
System Monitor (Built-in): Great for “Is my disk full?”, but it doesn’t see S.M.A.R.T. data or health status. It won’t tell you if your SSD is wearing out or if an HDD has reallocated sectors.
Scrutiny: An amazing tool with a beautiful UI, but it can be heavy for some setups (requiring an InfluxDB backend). While it has an HA integration, you still have to manually build out your own automation logic for alerts.
CRON Jobs & MQTT Scripts: The “old reliable” method. It’s flexible but high-friction. You have to write the scripts, schedule them, manage MQTT topics, and handle the parsing yourself.
smart-sniffer aims to be the “Set it and Forget it” alternative. It provides a lightweight Go agent that does the heavy lifting and talks directly to Home Assistant.
Key Features:
Proactive Health Monitoring: Supports ATA, SATA, and NVMe drives with early warning indicators.
Zero-Config Notifications: Unlike other tools where you have to build the “Alert” automation yourself, this integration automatically creates, escalates, and dismisses persistent notifications in the HA UI based on drive health.
Auto-Discovery: Uses mDNS/Zeroconf to find your drives on the network automatically. No manual IP/Port configuration needed.
Multi-Machine Support: Just drop the lightweight Go agent on any machine you want to monitor. Each drive shows up as its own device in HA with full sensors and diagnostics.
Secure by Design: Supports bearer token authentication and uses SHA256-verified binary downloads.
Getting Started:
You can find the setup instructions and the Go agent over on GitHub: DAB-LABS/smart-sniffer
I’d love to hear your thoughts or any features you think are missing! Hopefully, this helps a few of you save your data before a drive goes dark.
Thanks for this! This is something I could use to replace Scrutiny on my HA.
I’ve been testing the HA app and integration for some time now; here are some notes:
mDNS/Zeroconf does not seem to work (in the HA app), as I don’t see the mDNS service on my network. The HA integration does not pick it up automatically, though adding 0449a086-smart-sniffer-agent manually works.
I guess smartmontools is bundled with the Docker container (while I understand it’s external if the agent is used). It would be nice if drivedb.h could be updated. There are multiple ways to go about this. You could use a bind mount to persist /var/lib/smartmontools/drivedb/drivedb.h, with the file updated either at container startup or periodically. Or you could just provide a way to run a user-customised startup script, like Scrutiny does, so that a user could run update-smart-drivedb with their own preferred parameters or sources at startup.
If a bearer token is used, the sidebar shortcut doesn’t seem to be able to connect; it just stays inactive.
My disk has a lot of custom fields; I’ll see how they show up in the integration once every SMART field is properly detected with an up-to-date drivedb.h. Currently I can see the wear leveling is obviously wrong. The raw value is not percentage used, that’s for sure. You should re-check how to interpret attribute 177.
edit: I’ve seen that 0.4.29 fixes this.
Hey devzor ~ thanks for the thorough testing, really appreciate you digging in like this.
Bearer token + sidebar ~ fixed in v0.2.9, just pushed. HA’s ingress proxy wasn’t passing the auth header through, so the Web UI couldn’t talk to the agent. The proxy now injects the token on localhost requests automatically. Update and it should just work.
drivedb.h ~ also in v0.2.9. The app now runs update-smart-drivedb at startup and persists the updated database to /data. Your Transcend should get proper attribute definitions without any manual steps. On our test box it jumped from revision 5706 to 6114.
mDNS ~ this one I’d like to understand better if you have a minute. A couple things that would help:
Are you running any VLANs or separate IoT network?
Can you check Settings → System → Network ~ there’s a built-in mDNS browser in HA at the bottom. Do you see any _smartha._tcp entries there?
What does your HA hostname look like? (Settings → System → General)
The mDNS advertisement is working on our test environments so I’m curious what’s different about yours. It can be fickle, tho…
Yes, my bare-metal HAOS has one adapter, with an untagged VLAN and a tagged VLAN.
The .99 VLAN (eno1.99) is used for a couple of Tuya devices; the main LAN (default eno1) is my primary LAN, with Matter and everything else, basically. Something like this:
Thanks for the detailed info — this is really helpful and confirms what we suspected.
Your agent is doing everything right. The logs show it’s broadcasting _smartha._tcp on the container’s bridge interface (eth0 / 172.30.33.6). The issue is on the receiving side ~ HA’s Zeroconf listener appears to not be monitoring the hassio Docker bridge in your setup, likely because the multicast route is going out one of your physical interfaces (eno1 or eno1.99) instead.
This is a known edge case with VLAN configurations and not something we can fix from the app side ~ it’s how HA’s mDNS routing works.
The good news: manual setup with the container hostname works reliably regardless of mDNS. We just pushed v0.2.10, which now prints the integration host right in the startup log.
Update the app and you’ll see it. Use that hostname (0449a086-smart-sniffer-agent) and port 9099 when adding the integration manually under Settings → Devices & Services → Add Integration → SMART Sniffer. That hostname is stable and won’t change.
We’re going to keep investigating the mDNS-on-VLAN issue, but for now manual setup is the way to go on your network.
I’ve been using that since the beginning; it works OK that way.
The mDNS _smartha._tcp.local. definitely does not go out on the eno1 interface, as I’m not getting it on my Windows PC, while I can see all the other Home Assistant mDNS announcements, like _matter._tcp.local., _meshcop._udp.local., or _home-assistant._tcp.local. Just checked: it’s not visible on the eno1.99 network either.
That’s right, the hostname connection is the intended path and you’ve had it working from day one, so you’re all set.
On the mDNS visibility: what you’re seeing is expected. The services your PC picks up (_matter, _meshcop, _home-assistant) are broadcast by HA Core from the host network. Our agent broadcasts from inside the Docker bridge, which is a different network layer, so it won’t be visible to your PC or HA’s mDNS browser regardless of VLANs. It only needs to reach the HA instance it’s running on, and the hostname connection handles that reliably.
Appreciate all the detailed debugging! It’s genuinely helpful for us to understand how things behave on different network setups.
Hey all ~ quick update! v0.5.0 just shipped with some features you’ve been asking about:
What’s new:
Disk usage monitoring ~ The agent now reports filesystem usage (total/used/available) per mountpoint. The integration creates sensors for each mounted filesystem, so you get drive health AND capacity tracking in one place.
Agent version checking ~ The integration now detects outdated agents and shows a repair notification in HA with a one-click link to update.
Improved mDNS interface filtering ~ Auto-skips Docker bridges and VPN interfaces so your agent advertises on the right network.
For HAOS App users, the app is at v0.2.x with NVMe permission fixes, automatic drivedb.h updates, and the Agent Control Center dashboard.
If you’re running Smart Sniffer, I’d love to hear how it’s going. And if you have drives that show “UNSUPPORTED” ~ please share your smartctl -a --json output in a GitHub issue so I can improve detection!
Hey @SiriosDev ~ you’re not wrong, HA OS add-ons are containers all the way down.
Speaking of Docker though ~ check out PR #8 on the SMART Sniffer repo. fireinice has been putting together a standalone Docker deployment with auto-generated compose files and everything. He even stood up his own image on Docker Hub.
So for anyone running in Docker ~ that door might be open soon…
No, my use case is nothing special: I just have a home server where I prefer to run everything in containers to avoid potential incompatibilities, optimize backups, etc.
That’s a perfectly good reason; containers-for-everything on a home server is the way most people are heading, and it makes sense for us to support it properly.
Docker support is actively being worked on. We’ve got a community PR in review right now that adds a Dockerfile and auto-generated compose files. Still ironing out a few details, but it’s moving in the right direction.