Mount remote SMB share on Hass.io

So my NAS is a Synology running DSM 7.0 with SMB enabled.
I mention this because I had to use vers=3.0, not 1.0 as most topics here suggest, in the command below.

Then the following in configuration.yaml:

shell_command:
  mount_plex_folder: mkdir -p /media/plex;mount -t cifs -o vers=3.0,noserverino,username=<user>,password=<pass>,domain=WORKGROUP //nas/media/folder/here /media/plex

and this in the automations.yaml

- id: "1623784141143"
  alias: Mount Media on Start
  description: ""
  trigger:
    - platform: homeassistant
      event: start
  condition: []
  action:
    - service: shell_command.mount_plex_folder
  mode: single
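
To verify the mount actually came up after a restart, you can check from a shell inside the homeassistant container (for example docker exec -it homeassistant bash from the HassOS host console); a rough sketch, using the same /media/plex path as above:

mount -t cifs     # lists active CIFS mounts; /media/plex should appear here
ls /media/plex    # the share contents should be visible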

But this only shows the folder in the media browser.
Now I need to find a way to make it show up in the add-on's media folder as well.


I don’t think it is possible: shell commands run in the Home Assistant container, but Plex lives in its own, isolated container. The Home Assistant container doesn’t have privileged access, so you can’t even use docker to run commands in the Plex context…
We are pretty stuck with regard to Plex…

I would think that too, but I also use the Frigate integration, and that does show up in my Plex server :confused:

This is from a shell executing in the Plex container. I had to run apt update; apt install cifs-utils in order to be able to try a CIFS mount, but then the container’s security kicked in:

  root@a0d7b954-plex:~# mkdir -p /media/plex
  root@a0d7b954-plex:~# mount -t cifs -o vers=2.0,noserverino,username=<user>,password=<pass> //server/share /media/plex
  Unable to apply new capability set.
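
For what it’s worth, you can check from the HassOS host console whether a container was started with the capability that mount needs (SYS_ADMIN, or the privileged flag); a quick sketch, with the container name guessed from the prompt above (docker ps shows the real one):

# prints the added capabilities and the privileged flag for that container
docker inspect --format '{{.HostConfig.CapAdd}} {{.HostConfig.Privileged}}' addon_a0d7b954_plex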

I’m afraid this will never be possible unless the maintainer changes something or HAOS gets modified to allow mounting file shares in add-ons.


Hi Glenn, I’ve tried to replicate your example to suit my install but I’m getting an ‘error 255’ when HA tries to run the shell command. Google suggests this is an authentication issue but I don’t really know what I should try changing to get it working. Any ideas?

error:
2021-06-23 09:42:15 ERROR (MainThread) [homeassistant.components.shell_command] Error running command: mkdir -p /media/music;mount -t cifs -o vers=3.0,noserverino,username=homeassistant,password=REDACTED,domain=WORKGROUP //192.168.0.15/music /media/music, return code: 255

shell_command:
  mount_nasmusic_folder: mkdir -p /media/music;mount -t cifs -o vers=3.0,noserverino,username=homeassistant,password=REDACTED,domain=WORKGROUP //192.168.0.15/music /media/music
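
One way to dig into a return code like this is to run the same mount by hand inside the homeassistant container and then check the kernel log, which usually names the CIFS status for credential problems; roughly:

# inside the homeassistant container (e.g. docker exec -it homeassistant bash)
mkdir -p /media/music
mount -t cifs -o vers=3.0,noserverino,username=homeassistant,password=REDACTED,domain=WORKGROUP //192.168.0.15/music /media/music
dmesg | tail    # look for something like "Status code returned 0xc000006d STATUS_LOGON_FAILURE"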

How to mount smb share to media folder on HASSOS:

Please be so kind as to put a vote on this topic, thank you <3


I’m running Hass.io directly on ODROID hardware. No intervening OS or virtualization.

  • The shell commands to mount CIFS/NFS don’t work when typed directly into an SSH terminal, because there they execute outside of the containers running the HA system. They could be typed into a shell running inside the HA container, using Portainer for example (haven’t tried).
  • The mount points should be under /media, not under /mnt, in order to be visible to the media player. Create the mount points and mount them under /media and everything will update automatically. Mounting them under /mnt causes all kinds of problems: you end up chasing your tail making sure the folders are created at the right time and don’t cause errors when they’re declared in the configuration (so the media player recognizes them) before they exist.
  • If you’re running HA in an environment other than hass.io directly on hardware, then your docker configuration, volume mapping, permissions, users, etc. are likely to be different enough that this may not work.

in /config/configuration.yaml:

shell_command:
  mount_music: mkdir -p /media/music; mount -t cifs -o 'ro,username=media,password=fubar' //192.168.0.10/music /media/music
  mount_video: mkdir -p /media/video; mount -t cifs -o 'ro,username=media,password=fubar' //192.168.0.10/video /media/video

in /config/automations.yaml:

- id: ha_start_event
  alias: mount media on start
  trigger:
  - event: start
    platform: homeassistant
  action:
  - delay: 00:01:00
  - service: shell_command.mount_music
    data: {}
  - service: shell_command.mount_video
    data: {}
  mode: single

Thanks for sharing this, I will test it out as soon as I can.

Your instructions are very similar to mine; however, I used secrets.yaml to store the SMB username/password.
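
Since !secret replaces a whole YAML value rather than part of a string, one way this can look (a sketch reusing the values from the example above, with a made-up secret name) is to keep the full command string in secrets.yaml:

# secrets.yaml
mount_music_cmd: mkdir -p /media/music; mount -t cifs -o 'ro,username=media,password=fubar' //192.168.0.10/music /media/music

# configuration.yaml
shell_command:
  mount_music: !secret mount_music_cmd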

If you’re running HassOS, that’s great for the homeassistant container, but any other add-ons won’t be able to see the mount, which in my case (Frigate) defeats the purpose of an external mount in the homeassistant container. Frigate will quickly fill up your storage with clips.

There’s a way to get it working with CIFS only, without messing with squashfs, but it results in manually running a script every time HassOS is rebooted and starting add-ons from a delayed automation.

End result: don’t run HassOS; run supervised HA on an OS you have control of… unless anyone can offer any other hints.

If I remember correctly, about two years ago I found it would be possible to make a CIFS mount accessible to hassio and any container if the bind-propagation of the add-on container options was changed to rshared, which would make it possible to do a CIFS mount inside a container in a bind subfolder and have it visible in all containers. But when I proposed adding an optional bind-propagation setting to hassos/hassio, the developers rejected it.
They only accepted CIFS in the kernel as a module, to make it possible to mount it in a single add-on itself (one that loads it and also adds cifs-utils).
Sorry for my bad English, and I hope this can help if someone wants to try the change and re-propose it to the developers.
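
For reference, the Docker option being described looks roughly like this when starting a container by hand (just an illustration of the flag, not something the Supervisor exposes; the image name is a placeholder):

# bind mount with rshared propagation: mounts created later inside the source
# path also become visible in other containers sharing that bind
docker run --mount type=bind,source=/mnt/data/supervisor/media,target=/media,bind-propagation=rshared some_addon_image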

There’s actually no need to mount within a container. And it won’t work unless the container is run with --cap-add sys_admin or with --privileged. This is why people are getting permission denied when trying to mount in the ssh_addon container. The homeassistant container is built with privileged mode and labels disabled; most other containers are not.

That being said, it’s fairly easy to mount over the network when running HassOS, but you’ll have to manually run a script each time you boot HassOS, as there is no mechanism for running a command after it boots. From what I can see, the only way to run a command after HassOS boots is to “unsquash” the root file system, modify it and then rebuild it, like urko did in this thread.
This is a lot of effort and it’ll be blown away next time HassOS gets updated. I think there is no control over updating HassOS; it’s all done automatically and in good faith. Really, people should run their own Supervisor and HA Core on a distro/whatever.

HassOS also has no NFS support available, but it does have the CIFS tools and kernel module. With those you can mount a CIFS/SMB share on a HassOS writable mount that is already passed to the containers.

Example: a VMware image running HassOS.
/dev/sda8 is mounted on /mnt/data as RW
containers are built with /mnt/data/supervisor/media passed in
mount the CIFS share to /mnt/data/supervisor/media
restart the containers that need the CIFS mount

Example script located in /mnt/data:

#!/bin/bash

mount -t cifs -o username=someuser,domain=somedomain,password=somepassword //1.1.1.1/NVR_Storage /mnt/data/supervisor/media

CIFSCONTAINERS="hassio_cli addon_ccab4aaf_frigate addon_core_ssh homeassistant"
for x in $CIFSCONTAINERS ; do
  docker restart $x
done


Then, tada: a CIFS mount in HA and all your add-ons that use the mount point. Pretty clunky, but it works.
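
To confirm a restarted add-on really sees the share, something like this from the host console should do it (assuming the add-on has the media folder mapped):

# the CIFS share contents should now show up under /media inside the container
docker exec addon_ccab4aaf_frigate ls /media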

You can find the containers that you want to add the mount to with:
docker ps --format "table {{.ID}}\t {{.Names}}"

You also probably want to stop add-on containers at boot and start them manually, so as not to clobber the mount point.

The script has to be manually run each time HassOS boots. I looked into using udev, but udev runs very early and kills any scripts by the time docker starts. I couldn’t find any other way (apart from unsquashing) to run a script after booting.

When HassOS updates, the shell script in /mnt/data may or may not get overwritten.

I also looked into using an NFS/CIFS mount in a docker volume and passing that to a container, with no joy, and messing around with the containers’ hostconfig.json files made me lose motivation.


I made a script to make it easier. But still, after a core/supervisor update it sometimes gets overwritten, sometimes not. I’m only updating HASS once every 2-3 months, so it’s not a big deal for me.
But there is an issue on GitHub where people are working on mounting remote shares, with a GUI for it. So in the future it will be possible, but not now.

Yeh it is what it is at the moment… I’ll be moving to a dedicated VM at some point in the future.

A little off topic, but I wonder how often HassOS gets updated; I couldn’t see a release cycle.

A little more off topic: if your environment, like mine, is behind a restrictive firewall that doesn’t allow external NTP requests (udp/123), i.e. you run your own internal time server, you can decrease the boot time of HassOS by putting your internal time server in /etc/systemd/timesyncd.conf.

This knocked ~90 seconds off the boot time, as the boot no longer times out trying to connect to Cloudflare’s NTP server. Look for where it sits at “waiting for kernel time sync” or something like that.
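
For anyone doing the same, the relevant bit of /etc/systemd/timesyncd.conf is just the NTP line (the address below is a placeholder for your own internal time server):

[Time]
NTP=192.168.0.1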

Nice if you reboot it frequently :wink: Hack on :+1:

There is a cycle, like every week or two. Nevertheless, there is no need to update just to have the latest fancy stuff. Just do it once every two months and it’s fine ))))
Personally, I never had a problem with boot time :-X

Hi!

You can try this (for HassOS users only), it works great!

https://community.home-assistant.io/t/new-addon-samba-nas-mount-external-disk-and-share-it/172193/205?u=olivier974

Does this work with a remote CIFS mount, or just a locally attached disk?

Hey all,

I am running Home Assistant OS as a VM on my ESXi. I have tried all the ways (except the one described by urko and reedy, to “unsquash” the root file system).

That’s of course a solution I would like to avoid. My overall goal is to mount a folder which I can use in the Home Assistant media browser.
Does anyone else see another way to do that?

P.S. I already upvoted the feature request.

Thank you for the upvote! <3

But now, with the script, it is much easier.
And from what I can see, on one partition the fstab stays and is not overwritten.
So I can maybe change the script to check if it already exists and just inject it into the other partition. Will see at the next update.