Mount remote SMB share on Hassio

Same problems here…

Just trying to get my NAS music folder mounted for the new media browser.

Brent


I’m on HassIO Supervised (using Docker containers that are tightly controlled by the Supervisor), and the shell_command below works well for me; I call it from an HA startup automation:

shell_command:
  mount_nas_video_folder: mount 192.168.0.99:/_NASVIDEO /mnt/_NASVIDEO
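
For reference, the startup automation that calls it looks roughly like this (a minimal sketch, not my exact config):

automation:
  - alias: Mount NAS video folder on start
    trigger:
      - platform: homeassistant
        event: start
    action:
      - service: shell_command.mount_nas_video_folder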

So the goal is to have an NFS share with full permissions. On my QNAP everyone has full access to that share. Check out the key setting for this to work in this screenshot:


I’m pulling my hair out here. Using the Hassio VM image.

NFS definitely has full access for everyone. It mounts everywhere (tested on a Debian server) except HA.
In HA, connecting to the HA Docker instance, I get a Permission denied error, just like @markcocker. I suspect it’s because /media is mounted from the host’s /mnt/data/supervisor/media, so I thought I should target /mnt/data/supervisor/media instead.
When connected to the SSH add-on (SSH & Web Terminal) through the web UI, after getting to the HA CLI via the login command, trying to mount (mount -t nfs …) gives an error that a helper program is needed - /sbin/mount.cifs is missing.
When connected to the SSH add-on (SSH & Web Terminal) without the login command:

➜  ~ mount -t nfs 192.168.8.18:/export/public /mnt/media
mount: mounting 192.168.8.18:/export/public on /mnt/media failed: Connection refused

I really don’t get how to mount an external share… I can’t mount it in the HA Docker container because of permission denied, can’t mount it in the SSH Docker container because of connection refused (can’t imagine how that would help anyway), and can’t mount it on the host because mount.cifs is missing.

Might it be that the Docker container is somehow blocking the ports NFS needs?

Another option is to create the NFS share as a Docker volume, but I can’t attach that volume to the Home Assistant container, as the Supervisor manages all volumes and my edits to the container get lost.
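
(For the record, the Docker-volume route would look roughly like the sketch below - the address and share name are just examples - but it doesn’t help here, since the Supervisor manages the container and my manual changes get lost anyway.)

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.8.18,rw \
  --opt device=:/export/public \
  nas_media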

Hello @BloodRave,
If I understand your post correctly (sorry for my English), you have succeeded in mounting a folder on a supervised HA container.
Could you give us a little more detailed information about the changes you made to configuration.yaml and automations.yaml?
Thank you very much for your help! :pray:

I finally got it working!
Basically, @Christianb233’s message made me carefully redo all of @BloodRave’s steps and keep trying until it worked.

/etc/fstab contents in the homeassistant container (logged in as root through the Portainer add-on):

192.168.8.18:/public_m3 /mnt/media nfs4 rw,async,hard,intr,noexec 0 0

Make sure you create the /mnt/media folder before mounting.
I should point out that the /export/public_m3 path and the plain nfs share type did not work for me, although they work everywhere else. This was the last step in getting it working, so all the other things I did might be unnecessary.
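
(In other words, from the container console, something like:)

mkdir -p /mnt/media
mount -a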

My NFS share is on openmediavault; there I set the allowed clients to * instead of 192.168.8.0/24.
These are the extra options I set on the NFS share, including the anonuid/anongid mapping:

subtree_check,insecure,no_root_squash,anonuid=1000,anongid=1000

configuration.yaml

homeassistant:
  media_dirs:
    public_m3: /mnt/media
shell_command:
  mount_nas_media_folder: mount -a

automations.yaml

- id: ha_start_event
  alias: HomeAssistant Start
  trigger:
  - event: start
    platform: homeassistant
  action:
  - delay: 00:02:00
  - service: shell_command.mount_nas_media_folder

While trying to mount from the HA container console using ‘mount -a’, I ran into these errors:

  • Permission denied - the worst case. It seems I had a permission issue with the client config on OMV.
  • Connection refused - the next error I had, after fixing the permission issue. It was due to using nfs instead of nfs4.
  • File or directory not found - the last error I had. It was due to using /export/public_m3 with nfs4; dropping the /export directory fixed it all.

While writing this comment, I checked that I can omit the fstab configuration and mount using a shell command

shell_command:
  mount_nas_media_folder: mount -t nfs4 192.168.8.18:/public_m3 /mnt/media

instead of the 'mount -a' and fstab combo.

Thanks @BloodRave and @JZhass for commenting your solutions!


I believe our solution here is the only one that exists, which is not a great thing because it’s pretty hacky. Maybe it’s a good idea to ask @pvizeli and @balloob to see if they know of any other way of persistently mounting a remote share (NFS and SMB), and possibly consider enhancing HA to make mounting of remote folders a bit more user friendly rather than a startup hack. Hoping it’s already somewhere on the product road-map :slight_smile:

Excellent to hear that all is working, @insajd! This was indeed challenging when I switched over from a direct pip install to the Supervised solution. My reason and purpose for this mount is to dump motion-triggered videos from my Ring doorbell and Blink cameras.

Once I enabled NFSv4 on my QNAP, mine worked with both the old NFS and the NFS4 mount type that you have working. Interesting that you can’t get the old NFS to work.

BTW, I also wanted to go the fstab route but changed my mind. I want my hack to be more “portable” and, if possible, not impact any config files in the container. After every HassIO (Supervised) upgrade, I click a button in Node-RED (running on the host Ubuntu) which basically runs a docker command asking the container to create the folder and then restarts the HA Docker container; otherwise HA does not start correctly when the folder is listed under allowlist_external_dirs but doesn’t exist. Beyond that, I simply have the same startup automation as you, though it’s the full mount command rather than an fstab mount -a like you do, @insajd. This is what I run after every Supervisor upgrade via Node-RED:

sudo docker exec -i homeassistant mkdir /mnt/_NASVIDEO && sudo docker container restart homeassistant


Glad I could help!

Somehow I had CIFS support in HA Supervised (I am using openmediavault with HA installed in Docker), but after the upgrade to 0.115 I was unable to mount the CIFS share, so I switched to mounting the folder on another RPi and sending the information via MQTT… I am using this for a recording folder as well, and have an automation in HA that deletes the content once I get to 100 GB of recordings.
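
(My actual cleanup is based on the 100 GB threshold, but as a rough sketch, a simpler age-based variant could be a single shell_command - the path and 14-day retention here are made up:)

shell_command:
  prune_recordings: find /mnt/recordings -type f -mtime +14 -delete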

I might try @insajd’s version because I could use this for other automations.

I agree that a more elegant mounting option would be good to have.
Regarding portability you’re right, @JZhass. I am not using the allowlist_external_dirs option, so for simply playing the media folder, I changed my shell command to:

  mount_nas_media_folder: mkdir -p /mnt/media;mount -t nfs4 192.168.8.18:/public_m3 /mnt/media

That should allow me to upgrade HA from Supervisor without any worries.


@insajd does the mkdir actually succeed for you? Try creating some folder name that you don’t already have. I had to do it from the host, where the host calls a docker command to execute the mkdir in the child container.

@JZhass yes, I just checked new folder creation and it does succeed.

Wow, you just made my day @insajd … tested it and it works wonderfully. Keep in mind that some admin-level tasks like creating folders used to fail; it looks like that limitation has been lifted… You have eliminated my need for the Node-RED automation that I trigger after each HA upgrade. Thank you!


For the fun of it I checked CIFS mounting; it’s working with shell command number 2.

shell_command:
  # mount NFS share
  mount_nas_media_folder: mkdir -p /mnt/media;mount -t nfs4 192.168.8.18:/public_m3 /mnt/media
  # mount CIFS/Samba share
  mount_nas_media_folder2: mkdir -p /mnt/media2;mount -t cifs -o username=anonymous,domain=WORKGROUP //192.168.8.18/public_m3 /mnt/media2

Yep @insajd cifs worked for me as well. Had to have explicit permissions for a user (couldn’t use anonymous) but it certainly worked.

I have to take it back: if the folder is listed under allowlist_external_dirs and it does NOT exist, it unfortunately still causes an error during HA startup and HA only loads into safe mode. So I’m still having to run my Node-RED -> docker on the Ubuntu host -> mkdir in the HA container, then restart the HA Docker container.
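
(For reference, the entry in question in configuration.yaml, with my _NASVIDEO folder:)

homeassistant:
  allowlist_external_dirs:
    - /mnt/_NASVIDEO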


Oh I see… I am not using this option; that’s why the only worry I had was the existence of the folder during the mount.

Does allowlist_external_dirs need the full path? Can you just put /mnt there and leave it at that?


Hey guys,

Trying to mount my CCTV recordings into Home Assistant from a NAS; however, I cannot for the life of me get CIFS or NFS to work. I just keep getting Permission denied, even though I can access these shares from other machines.

To simplify things, I am trying to just mount them via the CLI to get a look at the errors.

mount -t nfs4 192.168.1.20:/nvr-cctv /mnt/media2
mount -t cifs -o username=USER,password=PASSWORD,domain=WORKGROUP //192.168.1.20/nvr-cctv /mnt/media2

Both come back with failed: Permission denied

Alternatively, how else do you add a directory to the media folder browser if you are running HassOS?? Baffled and hairless

System details:
Intel NUC running HassOS 4.13
Home Assistant 0.116.2

Hey,

When I was getting Permission denied, it was because the permissions on the NAS were too restrictive. I was restricting access to my local network IP range (192.168.8.0-255), but HassOS, when connecting, shows up with its Docker IP (something like 172.30.32.2). Maybe it’s the same thing with your NAS settings?
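
(If that’s the case, you’d need to allow that Docker range, or just *, on the NAS side. On a plain Linux NFS server that would be an /etc/exports line along these lines - the path and subnet are only an illustration:)

/export/public  172.30.32.0/23(rw,sync,no_subtree_check)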

If not, try sharing the security configuration of your NAS - we’ll brainstorm from there.


Thanks for the reply @insajd
Good idea, but I just tried to check the IP restrictions and I do not think I have any on my NAS, at least for SMB.

I use unRaid for my NAS; there are none of the usual restrictions that I can see, and I have lifted everything just to try to get this thing working at a base level, to then build complexity on top of.

I don’t see any attempts from HA trying to connect in my NAS logs either. I am fresh out of ideas now.

Screenshots of the SMB settings in the admin portal are below.

No new ideas after seeing your settings…
I’ll shoot out some ideas and questions:
Is unRaid on the same machine as HassOS?
Can you ping the NAS from HassOS? Can you ping it from other machines on the local network?
Does the NAS have some firewall on it?
Try adding the -v option to the mount command; it should add verbose output.

Hey,

I was getting the same thing when trying to connect my Synology NAS. What ended up working for me was actually running the command through an automation. I had the commands set up as shell_commands and ran them from the automation.

I could not see the mount point from the CLI after running it from the automation. I’m not too experienced with Docker yet, so my assumption is that the mount point is in a different location than what is accessible from the CLI.
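
(If that assumption is right, the mount would only be visible from inside the homeassistant container itself rather than the SSH add-on, e.g. by checking from the host with something like:)

docker exec homeassistant mount | grep /mnt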

Either way, I would love a better solution, because if you need to change the command in any way or unmount the share, you have to create a new shell_command to unmount and then change the settings, with an HA restart required for each step.
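
(For what it’s worth, the unmount side would just be another shell_command along these lines; the name is only an example:)

shell_command:
  unmount_nas_media_folder: umount /mnt/media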

@insajd cheers for the brainstorming!

-v didn’t output anything new; does that work with the mount command?

As to the setup details, networking-wise:

  1. HassOS is a separate machine (Intel NUC)
  2. unRaid (NAS) is a different server altogether, basically custom built
  3. Both devices connect to the same switch and are on the same subnet (it’s a flat network)
  4. HassOS and the NAS connect to each other, and I have my HA SMB shares mounted on the NAS for backup purposes, so no restrictions there; ping runs fine, the network is fine
  5. The NAS has a simple firewall, but that isn’t the issue as it allows SMB connections to the server from other machines

Overall the NAS works fine; it’s only when I try to connect to the SMB share from HA that I get this issue. I’ve never had any problems with this share on any other device.

As a test, I booted up my Ubuntu VM on my laptop and was able to mount the SMB share in Ubuntu. So everything works, apart from HA refusing to play! Command used for the test:

mount -t cifs -o username=USER,password=PASSWORD,domain=WORKGROUP //192.168.1.20/nvr-cctv /mnt/media2

@Cyberfighter - How do you know that the shell_command mounted the share? How do you confirm this in the HA GUI?
I’ve created the shell_command, and when I run the service there is nothing that leads me to believe that it’s mounted.
This is why I tried to see if it even mounts from the CLI running as root.

Cheers guys!!