Would love to see this working for the new media browser stuff in 0.115 as I have many GBs of music on a server that I’d like to be able to play through HA.
Same problems here…
Just trying to get nas music folder mounted for the new media browser
I’m on HassIO Supervised (using Docker containers that are tightly controlled by the Supervisor) and the below shell_command works well for me and I call it as a HA startup automation:
```yaml
shell_command:
  mount_nas_video_folder: mount 192.168.0.99:/_NASVIDEO /mnt/_NASVIDEO
```
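For completeness, a startup automation that calls this shell_command could look like the sketch below. The automation id and alias are made up; only the shell_command name comes from the config above, and the delay is an assumption to give the network time to come up:

```yaml
# Hypothetical startup automation; adjust or drop the delay as needed
- id: mount_nas_on_start
  alias: Mount NAS video folder on start
  trigger:
    - platform: homeassistant
      event: start
  action:
    - delay: "00:01:00"
    - service: shell_command.mount_nas_video_folder
```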
So the goal is to have an NFS share with full permissions. On my QNAP everyone has full access to that share. Check out the key setting for this to work in this screenshot:
I’m pulling my hair out here. I’m using the Hassio VM image.
The NFS share definitely has full access for everyone. It mounts everywhere (tested on a Debian server), except in HA.
In HA, connecting to the HA Docker instance, I get a Permission denied error, just like @markcocker. I suspect it’s because /media is mounted from the host’s /mnt/data/supervisor/media, so I thought I should mount into /mnt/data/supervisor/media instead.
When connected to the SSH add-on (SSH & Web Terminal) through the web UI, after getting into the HA CLI with the `login` command, trying to mount (`mount -t nfs …`) gives an error that a helper program is needed: /sbin/mount.cifs is missing.
When connected to the SSH add-on without the `login` command:

```
➜ ~ mount -t nfs 192.168.8.18:/export/public /mnt/media
mount: mounting 192.168.8.18:/export/public on /mnt/media failed: Connection refused
```

I really don’t get how to mount an external share: I can’t mount it in the HA Docker container because of permission denied, can’t mount it in the SSH container because of connection refused (can’t imagine how that would help anyway), and can’t mount it on the host because mount.cifs is missing.
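One generic way to see which mount helpers a given container actually ships (relevant to the "helper program" error above) is to list /sbin/mount.*; busybox's mount falls back to /sbin/mount.&lt;fstype&gt; for filesystems it can't handle itself. A small sketch:

```shell
# List the mount helpers present in this container/host.
# 'mount -t cifs' needs /sbin/mount.cifs; 'mount -t nfs' needs /sbin/mount.nfs.
ls /sbin/mount.* 2>/dev/null || echo "no mount helpers installed"
```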
Could it be that the Docker container is somehow blocking ports for NFS?
Another option is to create the NFS share as a volume in Docker, but I can’t add this volume to the Home Assistant container, as the Supervisor manages all volumes and my edits to the container get lost.
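For reference, this is roughly how an NFS-backed volume is declared in a docker-compose file (illustrative only, using the server and path from this thread; as noted, the Supervisor manages the HA container's volumes, so this won't survive there):

```yaml
volumes:
  nas_media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.8.18,rw"
      device: ":/export/public"
```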
If I understand your post correctly (sorry for my English), you have succeeded in mounting a folder on a supervised HA container.
Could you give us a little more detail about the changes you made to configuration.yaml and automation.yaml?
Thank you very much for your help !
/etc/fstab contents in the HomeAssistant container (log in as root through the Portainer add-on):

```
192.168.8.18:/public_m3 /mnt/media nfs4 rw,async,hard,intr,noexec 0 0
```
Make sure you create the /mnt/media folder before mounting.
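After mounting (e.g. via `mount -a`), a quick generic check that the entry actually took effect is to look for the target in /proc/mounts:

```shell
# Prints the matching mount line if the share is mounted, otherwise a notice
grep /mnt/media /proc/mounts || echo "not mounted"
```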
I should point out that /export/public_m3 and the plain nfs share type did not work for me, although they work everywhere else. This was the last step to get it working, so the other things I did might be unnecessary.
My NFS share is on openmediavault; there I set the allowed clients to * instead of 192.168.8.0/24.
In the extra options of the NFS share, I set anonuid to root.
```yaml
homeassistant:
  media_dirs:
    public_m3: /mnt/media

shell_command:
  mount_nas_media_folder: mount -a
```
```yaml
- id: ha_start_event
  alias: HomeAssistant Start
  trigger:
    - event: start
      platform: homeassistant
  action:
    - delay: 00:02:00
    - service: shell_command.mount_nas_media_folder
```
While trying to mount from the HA container console using `mount -a`, these errors occurred:
- Permission denied - the worst case. I had a permission issue with the client config on OMV.
- Connection refused - the next error, after fixing the permission issue. It was due to using nfs instead of nfs4.
- File directory not found - the last error. It was due to using /export/public_m3 with nfs4; dropping the /export directory fixed it all.
While writing this comment, I checked that I can omit the fstab configuration and mount using only a shell command, instead of the `mount -a` and fstab combo:

```yaml
shell_command:
  mount_nas_media_folder: mount -t nfs4 192.168.8.18:/public_m3 /mnt/media
```
I believe our solution here is the only one that exists, which is not a great thing because it’s pretty hacky. Maybe it’s a good idea to ask @pvizeli and @balloob whether they know of any other way of persistently mounting a remote share (NFS and SMB), and to consider enhancing HA so that mounting remote folders is a bit more user friendly than a startup hack. Hoping it’s already somewhere on the product road-map.
Excellent to hear that all is working, @insajd! This was indeed challenging when I switched over from a direct pip install to the Supervised solution. My reason and purpose for this mount is to dump motion-triggered videos from my Ring doorbell and Blink cameras.
Once I enabled NFS4 on my QNAP, mine worked with both the old NFS and the NFS4 mount type that you have working. Interesting that you can’t get the old NFS to work.
BTW, I also wanted to go the fstab route but changed my mind. I want my hack to be more “portable” and not impact any config files on the container, if possible. After every HassIO (supervised) upgrade, I click a button in Node-RED (which runs on the host Ubuntu) that runs a docker command asking the container to create the folder, then restarts the HA Docker container. Otherwise HA does not start correctly if the folder is listed under allowlist_external_dirs but doesn’t exist. After that I simply have the same startup automation you’re using, though mine runs the full mount command rather than an fstab `mount -a` like you do, @insajd. This is what I run after every Supervisor upgrade via Node-RED:

```shell
sudo docker exec -i homeassistant mkdir /mnt/_NASVIDEO && sudo docker container restart homeassistant
```
Glad I could help!
Somehow I had cifs working in HA supervised (I am using openmediavault with HA installed in Docker), but after the upgrade to 0.115 I was unable to mount the cifs share, so I switched to mounting the folder on another RPi and sending the information via MQTT… I am using this for a recording folder too, and have an automation in HA that deletes the content once I reach 100 GB of recordings.
I might try @insajd version because I might use this for other automations.
I agree that a more elegant mounting option would be good to have.
Regarding portability you’re right, @JZhass. I am not using the allowlist_external_dirs option, so for simply playing the media folder I changed my shell command to:

```yaml
shell_command:
  mount_nas_media_folder: mkdir -p /mnt/media;mount -t nfs4 192.168.8.18:/public_m3 /mnt/media
```
That should allow me to upgrade HA from Supervisor without any worries.
@insajd does the mkdir actually succeed for you? Try creating a folder name that you don’t already have. I had to do it from the host, where the host calls a docker command to execute the mkdir inside the child container.
@JZhass yes, I just checked new folder creation and it does succeed.
Wow, you just made my day @insajd… I tested it and it works wonderfully. Keep in mind that some admin-level tasks like creating folders used to fail; it looks like that limitation has been lifted. You have eliminated my need for the Node-RED automation that I trigger after each HA upgrade. Thank you!
For the fun of it I checked CIFS mounting; it’s working with shell command number 2:

```yaml
shell_command:
  # mount NFS share
  mount_nas_media_folder: mkdir -p /mnt/media;mount -t nfs4 192.168.8.18:/public_m3 /mnt/media
  # mount CIFS/Samba share
  mount_nas_media_folder2: mkdir -p /mnt/media2;mount -t cifs -o username=anonymous,domain=WORKGROUP //192.168.8.18/public_m3 /mnt/media2
```
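If your CIFS share needs a real username and password, a credentials file keeps them out of configuration.yaml; `mount.cifs` supports `-o credentials=<file>`. A sketch with an assumed file path and placeholder values:

```shell
# Write a credentials file (path and values are placeholders;
# in real use pick a root-only location such as /etc)
CRED="$HOME/.nas-credentials"
printf 'username=youruser\npassword=yourpass\ndomain=WORKGROUP\n' > "$CRED"
chmod 600 "$CRED"   # keep the password private
echo "wrote $CRED"
# Then mount with:
#   mount -t cifs -o credentials=$CRED //192.168.8.18/public_m3 /mnt/media2
```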
Yep @insajd, cifs worked for me as well. I had to set explicit permissions for a user (couldn’t use anonymous), but it certainly worked.
I have to take it back: if the folder is listed under allowlist_external_dirs and it does NOT exist, it still unfortunately causes the error below during HA startup, and HA only loads into safe mode. So I’m still having to run my Node-RED -> docker on the Ubuntu host -> mkdir in the HA container, then restart the HA Docker container.
Oh I see… I am not using that option; that’s why the only worry I had was the existence of the folder during the mount.
Does allowlist_external_dirs need the full path? Can you put /mnt there and leave it at that?
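For what it's worth, `allowlist_external_dirs` takes a list of directories, and as far as I know subdirectories of a listed path are allowed too, so listing /mnt should cover /mnt/media (a sketch; verify on your own install):

```yaml
homeassistant:
  allowlist_external_dirs:
    - /mnt
```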
I’m trying to mount my CCTV recordings into Home Assistant from a NAS, however I cannot for the life of me get CIFS or NFS to work. I just keep getting permission denied, even though I can access these shares from other machines.
To simplify, I am trying to mount them via the CLI just to get a visual on the errors:

```
mount -t nfs4 192.168.1.20:/nvr-cctv /mnt/media2
mount -t cifs -o username=USER,password=PASSWORD,domain=WORKGROUP //192.168.1.20/nvr-cctv /mnt/media2
```
Both come back with:

```
failed: Permission denied
```
Alternatively, how else do you add a directory to the media browser if you are running HassOS? Baffled and hairless.
Intel NUC running HassOS 4.13
Home Assistant 0.116.2
When I was getting permission denied, it was because the permissions on the NAS were too restrictive. I was restricting access to my local network IP range (192.168.8.0-255), but HassOS, when connecting, was showing its Docker IP (something like 172.30.32.2). Maybe it’s the same with your NAS settings?
If not, try sharing the security configuration of your NAS - we’ll brainstorm from there.
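On the server side, one fix for the Docker-IP mismatch is widening the allowed client range in the NFS export to include Docker's internal subnet. An illustrative /etc/exports line (standard exportfs syntax; the 172.30.32.0/23 range is what the Supervisor network typically uses, so treat it as an assumption and check your own setup):

```
/export/public_m3  192.168.8.0/24(rw,sync,no_subtree_check) 172.30.32.0/23(rw,sync,no_subtree_check)
```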
Thanks for the reply @insajd
Good idea, but I just checked the IP restrictions and I don’t think I have any on my NAS, at least for SMB.
I use unRAID for my NAS; I can’t see any of the usual restrictions, and I have lifted everything just to try to get this thing working at the base level, to then build complexity on top of.
I don’t see any attempts from HA trying to connect in my NAS logs either. I am fresh out of ideas now.
Screens below of the SMB settings in the admin portal
No new ideas after seeing your settings…
I’ll shoot out some ideas and questions:
- Are unRAID and HassOS on the same machine?
- Can you ping the NAS from HassOS? Can you ping it from other machines on the local network?
- Does the NAS have a firewall?
- Try adding the -v option to the mount command; it should produce verbose output.
I was getting the same thing when trying to connect my Synology NAS. What ended up working for me was actually running the command through an automation: I had the commands set up as a shell_command and ran that shell_command from the automation.
I could not see the mount point from the CLI after running it from the automation. I’m not too experienced with Docker yet, so my assumption is that the mount point is in a different location than what is accessible from the CLI.
Either way, I would love a better solution, because if you need to change the command in any way or unmount the share, you have to create a new shell_command to unmount and then change the settings, with an HA restart required between each step.