I found a solution to mounting a NAS share on HassOS system. My system is running HassOS 2021.4.6
I could not get shell_command to work for running mount. I also tried running a shell script (containing the mount) from shell_command, but the mount step always failed.
Using the SSH & Web Terminal add-on from Community Add-ons, the mount command worked, as did the shell script. So I expect the shell_command and the terminal run in different containers, as discussed in the posts above. (I'm not great on the underlying architecture.)
So, following an example in the documentation for the add-on, I created an Automation triggered on HA Start.
alias: Mount Images at boot
description: Mount the Reolink folder at boot.
trigger:
  - platform: homeassistant
    event: start
condition: []
action:
  - delay:
      hours: 0
      minutes: 1
      seconds: 0
      milliseconds: 0
  - service: hassio.addon_stdin
    data:
      addon: a0d7b954_ssh
      input: >-
        mount -t nfs4 10.1.1.16:/volume1/surveillance/Reolink /config/www/images/Reolink
mode: single
In this example I am mounting an NFS share, but CIFS also works.
I spent the weekend trying to get the shell_command working without success; this worked on the first try.
Nice! Gonna try that when I get a chance. I’d love to give HA the ability to access my NAS.
Better still would be to put the recorder database there, as well as have snapshots go there directly (rather than the SD card), but one step at a time.
With SQLite, you can put a custom db_url pointing to a path somewhere else. However, I don’t know how the automation will behave if the path is not yet mounted when HA starts. It can theoretically also cause HA to freeze (blocked on I/O) if the network share becomes slow or unavailable. Feel free to try and report your experience.
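For reference, a custom db_url for SQLite is just a recorder setting in configuration.yaml. This is a sketch assuming the share is mounted at /mnt/nas (that path is a placeholder, not from the post above):

```yaml
# configuration.yaml
recorder:
  # Note the four slashes: the "sqlite:///" scheme prefix plus the
  # absolute path "/mnt/nas/home-assistant_v2.db".
  db_url: sqlite:////mnt/nas/home-assistant_v2.db
```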
If you are storing a database somewhere else, you might as well just try to run MariaDB or PostgreSQL on your NAS.
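If you go that route, pointing the recorder at a database server on the NAS is also just a db_url change. A sketch, where the user, password, host and database name are placeholders:

```yaml
# configuration.yaml
recorder:
  # MariaDB/MySQL instance running on the NAS; adjust credentials,
  # IP address and database name for your setup.
  db_url: mysql://hass:SECRET@192.168.1.50:3306/homeassistant?charset=utf8mb4
```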
Instead of saving snapshots directly to NAS, what I’ve been doing is using samba-backup to automatically create a daily snapshot that gets automatically copied to a SMB share, and old snapshots are automatically deleted. Simple enough to setup, and does not depend on the folder being mounted. You may as well decide to copy snapshots to other places.
It works for me, here’s a snippet of my configuration.yaml:
homeassistant:
  media_dirs:
    mnt: /mnt
    # Configuration is invalid if the directories don't exist.
    #music: /mnt/Music
    #video: /mnt/Video
I have a shell_command that runs mkdir followed by mount. That means those directories are not available at the beginning of the HA startup, which means I can’t use them in media_dirs. My solution is to use the parent directory instead.
# secrets.yaml
# Warning! These credentials may and will leak into HA logs.
# On the bright side, this SMB user has read-only access anyway.
shell_mount_music: 'mkdir -p /mnt/Music && mount -t cifs -o username=foo,password=bar //192.168.12.34/Music /mnt/Music'
shell_mount_video: 'mkdir -p /mnt/Video && mount -t cifs -o username=foo,password=bar //192.168.12.34/Video /mnt/Video'
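For completeness, those secrets are then wired up as shell commands in configuration.yaml like this (the service names mount_music / mount_video are my own choice):

```yaml
# configuration.yaml
shell_command:
  # Each entry pulls the full mkdir-and-mount one-liner from secrets.yaml
  mount_music: !secret shell_mount_music
  mount_video: !secret shell_mount_video
```

They can then be called from an automation as shell_command.mount_music and shell_command.mount_video.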
Yes, I’ve used that add-on, as well as the Google Drive Backup add-on. Both excellent tools.
However, they don’t get around the fundamental flaw with HA snapshots: the snapshot is first stored on the same drive as the HA OS. The “getting started” beginner’s guide for HA suggests using a Raspberry Pi and an SD card.
Then, after browsing these forums for a while, we learn that writing too much data to the SD card will kill the whole system.
My ideal would be for HA to direct the snapshots directly to an external storage location like a NAS, USB memory stick or whatever.
True, I agree with you. Writing snapshots to SD card first means more wear to the flash storage.
However, that wear is minimal compared to how much (and how often) the recorder writes. A snapshot of a few megabytes per day isn’t much next to up to one write every second (86,400 times more often than a daily snapshot). Sure, each write is only a few kilobytes (though to different regions of the same file, which means different sectors), and not every second requires a write…
Still, I suggest making sure the recorder database size is under control, and reducing how often it writes to storage. Also because, if you keep having many writes, moving the database to a network drive will just move the wear from one device to another.
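As a sketch of what "reducing how often it writes" looks like, the recorder integration supports batching commits and excluding noisy entities. The values below are illustrative examples, not recommendations:

```yaml
# configuration.yaml
recorder:
  commit_interval: 30   # flush to storage every 30 s instead of every second
  purge_keep_days: 7    # keep only a week of history to limit database size
  exclude:
    domains:
      - sun             # example of a chatty domain you may not need recorded
```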
(I don’t want to discourage you, I’m just pointing to what could give the best results for the amount of effort.)
You’re very right. I’ve mentioned that on this forum, too. Apparently I’m the only one with this concern, since the discussion has never been picked up. It seems crazy to me that the recorder defaults to saving every event and state change to the database. Worse, properly configuring it is a convoluted process requiring knowledge beyond most beginners. But this is getting WAY off topic, sorry.
Hi guys,
please help me out with Docker.
I understand the question is not entirely on topic; as far as I understand, this thread discusses the problem of connecting a remote media server to Hassio.
My task is not much easier:
Hassio is running on Debian.
Debian itself runs on a laptop. The laptop has two SSDs, on which the server runs, and an HDD mounted as /home.
I was able to set up the Reolink camera so that it uploads (via FTP) the .mp4 video to the /home/scorpionspb/ftp/files folder.
How do I configure Hassio so that I can see the .mp4 files in the media browser?
Search led me to this topic just now while googling to see whether anyone has run into the same problems, and yep, still the same. I can’t mount CIFS, and now seemingly not NFS either, even with all these shell-command tricks. I fail to see the purpose of the media browser on HassOS if you can’t mount anything into it; did the devs think we’d copy everything onto the HA hard drive? Seems very odd.
Yeah, it’s strange that they maintain the motionEye and Plex Media Server add-ons, which would hugely benefit from NAS storage, yet make it so hard to mount storage inside the containers.
Even if you do get it mounted, I found the snapshot tool would then start trying to snapshot my NAS!
I’ve successfully mounted the NAS in my media folder (and it shows up in the media browser), but does anybody have any idea how to get it showing in an add-on?
I can see the folder I created on that drive via the shell command, but the mount doesn’t show up. If I try to run it manually (e.g. in the VS Code terminal) it shows: mount: /media/plex: cannot mount //nasa.local/media read-only.
I tried the above because, if I create a text file in the /media/plex folder, it does show up in the other add-ons, so it feels like the /media link is somewhat selective about where it’s configured.
So my NAS is a Synology running DSM 7.0 with SMB enabled.
I mention this because I had to use vers=3.0, and not 1.0 as most topics here suggest, for the command below.
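For reference, the kind of mount I mean looks like this. The server name, share, credentials and target directory are placeholders; the relevant part is the vers=3.0 option, since DSM 7.0 disables SMB 1.0 by default:

```shell
# Create the mount point, then mount the Synology share over SMB 3.0
mkdir -p /media/nas
mount -t cifs -o vers=3.0,username=foo,password=bar //synology.local/share /media/nas
```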
I don’t think it is possible: shell commands run in the Home Assistant container, but Plex lives in its own, isolated container. The Home Assistant container doesn’t have privileged access, so you can’t even use docker to run commands in the Plex context…
We are pretty stuck with respect to Plex…
This is from a shell executing in the Plex container. I had to apt update; apt install cifs-utils in order to be able to try a CIFS mount, but then the container’s security kicked in:
root@a0d7b954-plex:~# mkdir -p /media/plex
root@a0d7b954-plex:~# mount -t cifs -o vers=2.0,noserverino,username=<userr>,password=<pass> //server/share /media/plex
Unable to apply new capability set.
I’m afraid this will never be possible unless the maintainer changes something or HAOS gets modified to allow mounting file shares in add-ons.
Hi Glenn, I’ve tried to replicate your example to suit my install, but I’m getting an ‘error 255’ when HA tries to run the shell command. Google suggests this is an authentication issue, but I don’t really know what I should try changing to get it working. Any ideas?