[SOLVED] HASSOS mount NAS network share

Hi @denilsonsa, would you mind helping me fix my config?

Currently I have a NAS share at 192.168.0.15 called music and another called movies. I’d like to link both of those to the HA media browser but my config below doesn’t work.

shell_command:
  mount_nasmusic_folder: mkdir -p /media/music; mount -t cifs -o 'ro,username=MY_USER,password=MY_PASSWORD' //192.168.0.15/music /media/music

I don’t have much Linux knowledge, so the above is purely copy/paste from others with my own details substituted.

This is what I’m using, Dave; hopefully it helps:

shell_mount_movies:  'mkdir -p /media/movies && mount -t cifs -o username=xxx,password=xxx //192.168.60.177/movies /media/movies'

From my previous experience, the problem is that this shell mount only applies to the Home Assistant Docker container, not to Frigate. Can you confirm that the recordings really are saved on the NAS?

@sparkydave @denilsonsa I’m sorry, I made some mistakes by obfuscating and simplifying the configuration. I edited the post with the correct version.


No, I can’t. I think you are right, this is only for the HA Docker container.
I didn’t catch the point of the post at all, sorry.

@sparkydave, your command-line seems sane. But maybe the issue is in quoting. Unless you use weird characters in your password, you don’t need the quotes on the parameter after -o. Instead, what I would suggest is to add quotes around the whole command.

# Try replacing this:
shell_command:
   mount_my_folder: mkdir … ; mount … -o 'ro,…' //192… /media/…
# With this:
shell_command:
   mount_my_folder: 'mkdir … ; mount … -o ro,… //192… /media/…'

In the second example above, the quotes are around the entire command, which explicitly tells the YAML parser this should be treated as a string. Well, YAML can automatically detect strings even without quotes, but YAML syntax rules are weird and can sometimes lead to unexpected results.

If you have weird characters in your password, try limiting to letters, numbers, and a few “safe” symbols: ._-+=:/@%^. If you insist on using other symbols, then you need to figure out how to safely escape those characters both in the shell syntax and in YAML.
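For example, here is a minimal sketch with a made-up password p@ss$w!rd (not something from this thread): the whole value gets YAML double quotes, and the -o options keep their shell single quotes so the shell passes the $ and ! through to mount untouched.

shell_command:
  # hypothetical password p@ss$w!rd: YAML double quotes around the whole command,
  # shell single quotes around the -o options so the special characters survive
  mount_my_folder: "mkdir -p /media/music; mount -t cifs -o 'ro,username=MY_USER,password=p@ss$w!rd' //192.168.0.15/music /media/music"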


But maybe your issue is somewhere else…

The whole setup is made of three parts (or four, depending on how you count); a rough sketch combining them follows after the list:

  • media_dirs configuration, which tells HA where to look for media files. If you’re running HA inside a container (and a HAOS install does run the HA core inside a container), please be aware this path is inside the container filesystem. See the documentation: Media Source - Home Assistant
  • shell_command that would create and mount the shared folders.
    • To avoid accidentally exposing the credentials if/when you share your main configuration.yaml file, you can move the entire command to secrets.yaml. See the documentation: Storing secrets - Home Assistant
    • We are actually chaining two commands in a single command line:
      • mkdir to create a new directory. The -p means “create any parent directories as needed, and don’t complain if the directory already exists”.
      • mount to actually mount the network share into that directory.
    • We can chain commands using:
      • ; → the second will run regardless of the state of the first one
      • && → the second will only run if the first one succeeded
      • || → the second will only run if the first failed
      • For our purposes here, both && and ; would work just fine.
    • This overview is enough for our purposes. If you want to learn more about this subject, look for tutorials and lessons about UNIX shell, sh, and bash.
  • An automation that would run the shell_command during the HA startup. See the documentation: Shell Command - Home Assistant and Automation Trigger - Home Assistant
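Putting those parts together, a rough sketch could look something like this (the share address, credentials and names are just placeholders taken from the example at the top of the thread, not a tested config):

# configuration.yaml
homeassistant:
  media_dirs:
    local: /media

shell_command:
  # the actual command line lives in secrets.yaml so credentials stay out of this file
  mount_music_folder: !secret mount_music_cmd

automation:
  - alias: 'Mount NAS music share at startup'
    trigger:
      - platform: homeassistant
        event: start
    action:
      - service: shell_command.mount_music_folder

# secrets.yaml
mount_music_cmd: 'mkdir -p /media/music; mount -t cifs -o ro,username=MY_USER,password=MY_PASSWORD //192.168.0.15/music /media/music'

With that in place, the share should be mounted shortly after every HA restart, and whatever is in /media/music should show up in the media browser.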

If all these parts are correctly set up, the whole thing should work. If it doesn’t, look for clues and error messages in your Home Assistant logs. You can also manually trigger the automation, or manually call the shell_command service from the developer tools, and then look for new error messages in the logs. Finally, you can use ssh to open a shell inside your HA container and inspect the directories to see what’s happening. Explaining this is beyond the scope of my reply, but it requires just some basic UNIX/Linux skills.
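For example, from a shell inside the HA core container (one common route on HAOS, assuming the container keeps its usual name, is the SSH add-on with protection mode disabled and then docker exec from there):

docker exec -it homeassistant /bin/bash   # on the host / from the SSH add-on

# then, inside the HA core container:
mount | grep cifs        # is the share actually mounted?
ls -la /media/music      # are the files visible where HA expects them?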

I do have a ! in there, but I can happily change the password to something simpler.

I’ll try going through your info over the weekend. Thanks for such a detailed response.

Thanks @srk23. Any chance of slightly more detailed instructions?
I’m stuck on how to actually mount (and access) the secondary hard drive in my other VM.
Thanks

I presume you are using Home Assistant in a Proxmox VM?

If so, you need to make sure that the Home Assistant VM is shut down before starting.

You will need to create a second VM from which to mount the Home Assistant disk - I just use a generic Ubuntu Server image.

Note the IDs of the HA and Ubuntu VMs in Proxmox - for me this is 100 for HA and 103 for Ubuntu.

Then open a console to your Proxmox server and edit the following file (substitute “103” with the ID of your Ubuntu VM, and “brain” with the name of your Proxmox node):

sudo nano /etc/pve/nodes/brain/qemu-server/103.conf

In the file, add a new line that gives the Ubuntu VM access to your HA disk (substitute “100” with the ID of your HA VM). Use a free SCSI disk ID (I already had scsi0, so went to 1):

scsi1: local-lvm:vm-100-disk-1

Save the file, and then boot the Ubuntu VM. SSH into the Ubuntu VM. If you type:

fdisk -l

You should see your HA disk and its partitions listed.

You can then follow the instructions at the top of this post to unsquash, edit and resquash the filesystem - for me the relevant partitions are sdb3 and sdb5.

Once complete, remove the temporary files and shut down the Ubuntu VM. Edit the Proxmox config file above to comment out the HA disk line (not strictly necessary, but it will prevent issues if you accidentally boot the Ubuntu VM while HA is running). Boot the HA VM, and you should be good.
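In case “comment out” is unclear: it just means making the scsi1 line inactive in the Ubuntu VM’s config once you are done, for example (reusing the IDs from above; deleting the line entirely works just as well):

# in /etc/pve/nodes/brain/qemu-server/103.conf, after you are finished:
#scsi1: local-lvm:vm-100-disk-1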


In case it’s useful to previous posters or anyone who gets here from a search, I wrote a detailed post describing how I got Frigate recording to an SMB share on Home Assistant OS on a Home Assistant Blue. I think this approach would work for anyone who wants to stick with Home Assistant OS / HASSOS. This strategy survives host restarts and upgrades. The compromise is that it involves giving HA core SSH access to the host. This is working well for me so far - at least until an official way of mounting shares into the media directory comes along.


Thank you for the tutorial, and thank you for the link to my feature request there! <3
One remark for people reading: since it waits for a ping and, as you wrote, you need to turn on the Frigate add-on manually.
But if you use my solution and you have, say, a host-only network on Proxmox, then the Frigate add-on can be started automatically. Of course the downside of mine is that you need to run the image to change fstab every time you do an upgrade/update :frowning:
Maybe it is possible to start the add-on automatically from an automation after the mount?
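It should be possible. Here is a rough sketch of such an automation; the add-on slug below is just a guess (look up the real slug of your Frigate add-on, e.g. in the add-on page URL), and the shell_command name is whatever you used for the mount:

automation:
  - alias: 'Mount NAS and start Frigate at startup'
    trigger:
      - platform: homeassistant
        event: start
    action:
      - service: shell_command.mount_my_folder    # your mount command
      - delay: '00:00:05'                         # give the mount a moment to settle
      - service: hassio.addon_start
        data:
          addon: ccab4aaf_frigate                 # hypothetical slug, check yours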


Thanks 100 times for this awesome tutorial… thanks to you, all my recordings now go to my NAS and I can keep 90 days and more :slight_smile:


Strange, the script doesn’t work for me… I have to manually mount the drive twice in the SSH add-on. The first attempt tells me it is not a mount point and the second run tells me it is a mount point. Any idea?

Thanks, this solved my three-month-long problem; I am very grateful! I’ve been running around in circles for months trying to find a solution, and this ultimately solved my Frigate problem and another use case relating to large storage in Home Assistant. The binary sensor command is not working for me though: "[ -f /media/thisisnas ] && echo 'on' || echo 'off'". My NAS is at //192.168.1.244/homeassistantmedia,
so I created my own version of the binary_sensor:

- platform: command_line
  name: NAS Mount
  command: "[ -f /homeassistantmedia ] && echo 'on' || echo 'off'"
  payload_on: "on"
  payload_off: "off"

However, it always shows "off". Thanks in advance; I appreciate your work and the community’s contributions.


You need to place a file in your share with the text "1" in the body. The binary sensor points to that file and checks whether it is there or not.

In my example the file is called “nasup”
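Adapted to the post above, the sensor could then look something like this (the mount point /media/nas is just a placeholder for whichever directory the //192.168.1.244/homeassistantmedia share is mounted on):

binary_sensor:
  - platform: command_line
    name: NAS Mount
    # "nasup" is the marker file placed on the share, containing the text "1"
    command: "[ -f /media/nas/nasup ] && echo 'on' || echo 'off'"
    payload_on: "on"
    payload_off: "off"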


Yep, it works, thanks!

By the way, I’m running HassOS as a VM in Proxmox. I noticed that my backup before this solution was around 68 GB, but after mounting the share the backup file is now 75 GB. I checked my Frigate share on the NAS and it has 7 GB of clips/recordings there. It seems to be backing up the clips too. Can someone double-check whether those clips are being backed up as well?
Is there a way for me to disable the mount, then remount it with no issues after I back up?

If your HAOS VM is offline/stopped and you back up the VM from Proxmox, it is definitely not taking the NAS data, as the machine is offline and Proxmox has no way to read and mount the share from a stopped VM.

What I am thinking is that the problem is you started Frigate before you set up the network mount, and you have some recordings left over from before.

Makes sense. I tried your recommendation and my backup is now 90 GB, even though the VM is shut down and I backed it up as Stopped as opposed to Snapshot. I was wondering if those Frigate files are parked somewhere. I can definitely confirm that they are on the NAS, as I accessed the files while the Home Assistant VM was shut down. Is there a shell or CLI command that will allow me to harmlessly unmount the NAS, so that after I back up I can remount it without breaking anything?
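For the unmount question, a rough sketch from a shell inside the HA container (the mount point /media/frigate is a placeholder for whichever directory you used; stop Frigate first so nothing is still writing to the share):

umount /media/frigate      # detach the share before the backup
# ...take the backup...
# then remount by calling your shell_command service again, or manually:
mount -t cifs -o username=MY_USER,password=MY_PASSWORD //192.168.1.244/homeassistantmedia /media/frigate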