You can find disk temps in the disk sensor attributes
@ruaandeysel I'm seeing an issue with this integration "spamming" syslog and repeatedly running out of space on the unRAID side.
unRAID logs are filling up with many of these per second:
Feb 12 05:26:09 Tower sshd-session[11597]: Starting session: command for root from 192.168.100.143 port 55868 id 1
Feb 12 05:26:09 Tower sshd-session[11597]: Close session: user root from 192.168.100.143 port 55868 id 0
It is resulting in Docker install failures due to lack of disk space.
See my thread here:
https://forums.unraid.net/topic/187346-docker-fail-install-no-space-left-on-device-unknown/#comment-1529309
I opened this bug issue on the HA integration side: unRAID Logs Full From Too Many Sessions · Issue #66 · domalab/ha-unraid · GitHub
Try this
Hi
I had an issue where the system would slowly use up all my RAM.
I removed the integration and the system usage stabilized.
Could this be the same issue?
Thanks for the response! I enabled diagnostic logging and reloaded the integration. Here's what I found: my cache pool is ZFS-based, not Btrfs or XFS. The integration reports 0.0% usage, while the Unraid UI correctly shows usage (~17%). The log shows the following error:
Logger: custom_components.unraid.api.disk_state
Source: custom_components/unraid/api/disk_state.py:86
Integration: UNRAID
Error: Could not find device path for /mnt
Running df -h /mnt/cache over SSH incorrectly reports 1% usage, likely due to how ZFS handles mounted datasets. btrfs filesystem usage -T /mnt/cache fails with "ERROR: not a btrfs filesystem", confirming it's a ZFS pool. My ZFS pool consists of two NVMe drives in RAID1 (nvme0n1p1 and nvme1n1p1), both labeled as cache and identified as zfs_member in blkid.
I suspect the integration doesn’t properly detect ZFS-based pools. Instead of using df or btrfs commands, ZFS users need:
zfs list -o name,used,available,refer,mountpoint cache (to get usage stats)
zpool list cache (for overall pool info)
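For illustration, here's a rough sketch of how the usage percentage could be derived from zfs list (the helper name and the SSH runner are hypothetical, not the integration's actual code):

```python
# Sketch: detect a ZFS pool and compute usage from `zfs list`.
# `run_ssh_command` is a stand-in for however the integration runs
# commands on the Unraid host; its result is assumed to expose
# `exit_code` and `stdout`.

async def get_zfs_pool_usage(run_ssh_command, pool: str = "cache") -> float | None:
    """Return the pool's usage percentage, or None if it isn't ZFS."""
    # -H drops headers, -p prints exact byte values (no unit suffixes)
    result = await run_ssh_command(f"zfs list -Hp -o used,available {pool}")
    if result.exit_code != 0:
        return None  # not a ZFS dataset; fall back to df/btrfs logic
    used, available = (int(x) for x in result.stdout.split())
    total = used + available
    return round(100 * used / total, 1) if total else 0.0
```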
Would it be possible to add ZFS detection and use the appropriate commands for retrieving cache usage? Let me know if I can provide any further logs or testing!
Thanks again for your work on this integration!
I will add this issue/info to the GitHub page.
You are correct! The integration doesn't support ZFS yet. That is still on my to-do list.
If you are able to add the ZFS functionality, that would be a big help, as I don't use ZFS myself.
And if you get it to work, submit a PR and I'll merge it.
You can test out this alpha version of the Unraid integration that uses the still-in-progress Unraid GraphQL API.
I spoke to the Unraid API developers; lots of features aren't available yet, so I'm waiting until they open source the API in a few weeks.
VM and Docker controls show up but don't work yet due to API issues.
Wow this looks very promising!
I have connected everything and all data is flowing in nicely! Always good to have these things via an official API
This thing is freaking cool. Particularly impressive how you're doing it without using an official Unraid API. Thanks OP!
This integration initiates SMART readouts from the drives all the time, preventing them from spinning down.
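A possible mitigation (just a sketch; I don't know how the integration actually reads SMART data) would be smartctl's -n standby option, which skips the read and leaves a sleeping drive asleep:

```python
# Sketch: poll SMART attributes without waking drives that are spun down.
# smartctl's `-n standby` makes it exit with status 2 (its default skip
# status) instead of touching a disk that is in standby. Shown here as a
# local subprocess for simplicity; the integration would run it over SSH.
import subprocess

def read_smart_if_spinning(device: str) -> str | None:
    result = subprocess.run(
        ["smartctl", "-n", "standby", "-A", device],
        capture_output=True, text=True,
    )
    if result.returncode == 2:  # drive is in standby; don't wake it
        return None
    return result.stdout
```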
The API will be open sourced soon on GitHub
Noted. I will continue working on improving it once the API is open sourced.
@ruaandeysel, I spoke too soon! The issue is still occurring for me.
The user script you provided is not filtering out the HA spamming. I realized this when I went to restart one of my Dockers and it gave me the out-of-space error.
After shutting down all my dockers and restarting unRAID completely, I’m back in business.
BUT, this is going to keep happening until you can figure out a way to stop spamming unRAID with all these sessions, and perhaps close sessions out instead of letting them keep accumulating.
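For example (a rough sketch assuming something like asyncssh underneath; the class and names here are mine, not the integration's), reusing one connection and batching commands would cut the session count dramatically:

```python
# Sketch: keep a single SSH connection open across polls and batch
# several readings into one command, instead of reconnecting (and
# logging a new sshd session) for every value.
import asyncio
import asyncssh

class UnraidSSH:
    def __init__(self, host: str, username: str, password: str) -> None:
        self._host, self._user, self._password = host, username, password
        self._conn: asyncssh.SSHClientConnection | None = None

    async def run(self, command: str) -> str:
        if self._conn is None:
            self._conn = await asyncssh.connect(
                self._host, username=self._user,
                password=self._password, known_hosts=None,
            )
        result = await self._conn.run(command, check=True)
        return result.stdout

async def main() -> None:
    ssh = UnraidSSH("tower.local", "root", "password")
    # One session for several readings instead of one session each:
    print(await ssh.run("df -h /mnt/user; free -m; uptime"))

if __name__ == "__main__":
    asyncio.run(main())
```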
The user script was not previously set to a schedule and may not have been running since the last reboot of unRAID. I changed it to run daily as a test.
Hi, I came across this integration today and wanted to thank you for it. Looks neat!
As a UX designer myself, I have a few questions/suggestions:
- During setup I apparently hadn't enabled SSH on my Unraid installation (I thought I had) and got a generic "failed to connect" message (of sorts). Is it perhaps possible to make this more specific? It would've saved me a good while trying to figure out what went wrong.
- Is there a reason why updates can't be polled faster than once per minute?
- Every entity gets the name of my server (The Vault) prefixed to it, which makes it really cluttered to read what's what. Perhaps entities could be named for what they are?
e.g.
Thevault Array disk1 Health
Unraid Server (Thevault) Thevault Array disk1 Health
binary_sensor.unraid_server_thevault_thevault_array_disk1_health
could be
Array disk1 Health
Array disk1 Health
binary_sensor.unraid_array_disk1_health
or
Thevault Docker Plex
Unraid Server (Thevault) Thevault Docker Plex
switch.unraid_server_thevault_thevault_docker_plex
could be
Plex
Docker Plex
switch.unraid_docker_plex
etc.
Would love to hear what you think, if you have any question or remarks, let me know!
Thanks for the feedback. I’ll look at improving the integration when I get time. Have you looked at the wiki on GitHub?
@kevinconsen The next release will contain improvements to naming conventions for entities and better messages when something doesn’t work.
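For reference, the standard HA pattern for this is has_entity_name, which lets HA compose "<device name> <entity name>" for display while the entity itself keeps a short name. A generic sketch (not necessarily exactly what the release will do):

```python
# Sketch: with _attr_has_entity_name = True, Home Assistant prefixes the
# device name itself, so the entity only needs to be "Array disk1 Health"
# and the doubled "Thevault Thevault ..." prefix goes away.
from homeassistant.components.binary_sensor import BinarySensorEntity

class UnraidDiskHealthSensor(BinarySensorEntity):
    _attr_has_entity_name = True

    def __init__(self, device_info, disk: str) -> None:
        self._attr_device_info = device_info  # the Unraid server device
        self._attr_name = f"Array {disk} Health"
        self._attr_unique_id = f"unraid_array_{disk}_health"
```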
I found polling every 60 seconds for sensor updates works well without overloading SSH connections to Unraid. Any reason why you'd want to poll more often than once a minute?
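For context, here's a generic sketch of how HA integrations typically schedule polling (not this integration's actual code); the update interval is the main knob:

```python
# Sketch: a DataUpdateCoordinator polling every 60 s. One batched SSH
# round-trip per poll keeps the session count and load on Unraid low;
# shorter intervals multiply both.
from datetime import timedelta
import logging

from homeassistant.helpers.update_coordinator import DataUpdateCoordinator

_LOGGER = logging.getLogger(__name__)

async def async_setup_coordinator(hass, ssh_client):
    async def _async_update() -> str:
        # ssh_client is assumed to expose an async run() method
        return await ssh_client.run("df -h /mnt/user; free -m; uptime")

    coordinator = DataUpdateCoordinator(
        hass,
        _LOGGER,
        name="unraid",
        update_method=_async_update,
        update_interval=timedelta(seconds=60),
    )
    await coordinator.async_refresh()
    return coordinator
```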
Next release should address the SSH Sessions issue.
As a temporary workaround, I have a user script restarting the sshd service every night.
Awesome! Will it update existing entity names or would I need to reinstall?
No particular reason; being a UX designer in IT just makes 1-minute updates feel fairly slow, haha. Especially compared to other entities in HA. But I can't speak to the particulars of the implementation and its (possible) shortcomings, so I'm not sure what the load would be at, let's say, 5-second intervals.