I’m working on it
Hey guys, this is totally unstable. The configuration will change within a few days and you’ll need to rename your startup.sh file, but you can try it if you’d like.
Add this repo and use the run on startup configurator.
It’s absolutely required to “Disable Protection”, because the add-on jumps out of its own container to execute commands under a different container.
On first run, it will create a file accessible from samba or ssh in /config/startup. The contents of the startup.sh file will be run under the Home Assistant container by default. However, you can change that to run under any other container by modifying the container name under configuration.
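For reference, a minimal sketch of what a startup.sh could contain, assuming you want to mount a CIFS share at boot. The server address, share name, and credentials below are placeholders, not anything the add-on creates for you:

```shell
#!/bin/sh
# Hypothetical /config/startup/startup.sh: mount a NAS share when HA starts.
# //192.168.1.10/media, myuser and mypass are placeholders — use your own.
mkdir -p /mnt/media
mount -t cifs -o username=myuser,password=mypass //192.168.1.10/media /mnt/media \
  || echo "mount failed" >&2
```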
If you try it, please report your results.
Thanks!!
But, I’m a little unclear on how this works. I see from the above (but not the documentation on GitHub) that it creates a startup.sh file in /config/startup. Presumably, we’d put mount commands there? Any examples?
And it can be uninstalled after the first run. Again, presumably because whatever it left behind in the base OS is still there. Will that be impacted by OS updates? Will I need to install it again?
Sorry for the dumb questions, I really want to try this, but first I want to understand what I’m doing - at least a little.
Hi, The project evolved a bit and I didn’t update this thread. It’s called Run on Startup.d, and it creates a per-machine startup script.
In this example, I’m using the Samba Add-On.
On first run, it will create /config/startup/logs and /config/startup.d
If you choose “Create example”, it will create a script for each container on your machine and execute them. You can find the logs in /config/startup/logs
Additionally, logs are available in the Web UI. Here you can see I just installed AppDaemon and it has no logs available. The SSH add-on had the example script executed, and you can see that here.
You can then delete all the items you don’t care about from /config/startup/startup.d, then modify the scripts to run the commands you want when this add-on is started.
Personally, I have just one script for my Home Assistant container, which I wish to keep private.
Not sure if this will help, but the command below works for me on Hassio; I just needed to go to Services and run shell_command.mount_emby_media_folder after I rebooted HA.
shell_command:
  mount_emby_media_folder: mount -t cifs -o username="USERNAME",password="PASSWORD",domain=WORKGROUP //172.16.1.100/media/Music /media
I can’t get this to work =( Any tips?
I can connect from other devices.
The script doesn’t even create /mnt/cctv.
configuration.yaml
shell_command:
  mount_nas_media_folder: mkdir -p /mnt/cctv;mount -t cifs -o username=<myuser>,password=<mypw>,domain=WORKGROUP //<ip>/shared /mnt/cctv
automations.yaml
- id: ha_start_event
  alias: HomeAssistant Start
  trigger:
    - event: start
      platform: homeassistant
  action:
    - delay: 00:02:00
    - service: shell_command.mount_nas_media_folder
and this is the log:
[ 6809.244638] CIFS: Attempting to mount //<ip>/shared
[ 6809.244663] No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
[ 6809.244767] CIFS VFS: Error connecting to socket. Aborting operation.
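Two guesses based on that log, neither verified against this setup: the dialect warning suggests pinning an SMB version with vers=, and “Error connecting to socket” usually points at the network path itself (wrong IP, or port 445 blocked) rather than at credentials. A tweaked command might look like:

```yaml
shell_command:
  # vers=3.0 if the NAS supports SMB3; use vers=1.0 only for SMB1-only servers.
  mount_nas_media_folder: mkdir -p /mnt/cctv && mount -t cifs -o vers=3.0,username=<myuser>,password=<mypw>,domain=WORKGROUP //<ip>/shared /mnt/cctv
```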
Hey everyone, I feel like I’m really close, I just need a little help getting over the finish line. I am running Home Assistant OS on a Raspberry Pi 4 and trying to mount my Synology Diskstation’s music folder.
Here are the steps I took so far:
- Set up configuration.yaml like below. The trusted network IP is that of the Synology Diskstation. media_dirs is commented out right now because the directory doesn’t exist yet; if you don’t comment out media_dirs, the config will not validate. Replace myusername and mypassword with your own credentials. Domain is set to WORKGROUP by default in Synology > Control Panel > File Services; the screenshot below shows where the settings are.
homeassistant:
  # media_dirs:
  #   music: /mnt/media2
  auth_providers:
    - type: trusted_networks
      trusted_networks:
        - 192.168.1.26
shell_command:
  mount_synology_nas: "mkdir -p /mnt/media2;mount -t cifs -o username=myusername,password=mypassword,domain=WORKGROUP //192.168.1.26/music /mnt/media2"
- Do a full system reboot of the Raspberry Pi, not just the OS. After Home Assistant came back up, I went to Developer Tools > Services and called the service Shell Command: mount_synology_nas. I clicked the button a couple of times; there doesn’t seem to be any indicator of anything happening. I went into Home Assistant’s Terminal in the left side navigation and typed ls, but did not see the media2 directory like I had expected.
- Now I uncomment the media_dirs lines within configuration.yaml. It now looks like this & validates!
homeassistant:
  media_dirs:
    music: /mnt/media2
  auth_providers:
    - type: trusted_networks
      trusted_networks:
        - 192.168.1.26
shell_command:
  mount_synology_nas: "mkdir -p /mnt/media2;mount -t cifs -o username=myusername,password=mypassword,domain=WORKGROUP //192.168.1.26/music /mnt/media2"
- Restart the OS; I did not do a full reboot of the Pi this time. Upon starting back up, I can see media2 is the default folder, but nothing is appearing.
Within Home Assistant OS I checked Supervisor > System > Core logs and saw this:
ERROR (MainThread) [homeassistant.components.shell_command] Error running command: `mkdir -p /mnt/media2;mount -t cifs -o username=myusername,password=mypassword,domain=WORKGROUP //192.168.1.26/music /mnt/media2`, return code: 111
NoneType: None
Does anyone know what error code 111 means? I believe I’ve created the media2 directory but the Synology is failing to mount, so that’s why media2 appears empty. If I can figure this out I plan on doing a YouTube video tutorial to help everyone else out.
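For what it’s worth, on Linux error number 111 is ECONNREFUSED (“Connection refused”), which would fit a mount that never reaches the SMB service. A process exit code isn’t guaranteed to be an errno, though, so treat this as a hint rather than a diagnosis. You can check what a given errno means on any Linux box that has Python installed:

```shell
# Look up what error number 111 means on this system (assumes python3 is available).
python3 -c 'import os; print(os.strerror(111))'
# prints: Connection refused
```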
UPDATE 1: I found a post about error code 111 and tried adding -o sec=ntlmv2 to my shell command. So now the shell command looks like:
shell_command:
  mount_synology_nas: "mkdir -p /mnt/media2;mount -t cifs -o sec=ntlmv2 username=myusername,password=mypassword,domain=WORKGROUP //192.168.1.26/music /mnt/media2"
Restarted Home Assistant OS, called the service, checked the core logs, and now I’ve got another error message:
2021-03-07 13:32:17 ERROR (MainThread) [homeassistant.components.shell_command] Error running command: `mkdir -p /mnt/media2;mount -t cifs -o sec=ntlmv2 username=myusername,password=mypassword,domain=WORKGROUP //192.168.1.26/music /mnt/media2`, return code: 1
NoneType: None
Still unsure how to solve this.
UPDATE 2: I had to reboot the Pi after other unrelated changes, and Home Assistant booted into safe mode: on reboot, the media2 folder I had created no longer existed, and configuration.yaml was still referencing it. I used the built-in file editor, commented out those lines, and started back up safely. I now see the need for using Portainer to create the directory. I used these instructions on setting up my new folders. After rebooting the Raspberry Pi, these folders are still not appearing. Any suggestions as to what I am missing?
I’d assume that’s because the browser doesn’t support such files.
You mean the VLC integration, not an add-on, because the integration is meant to remotely control a VLC instance running somewhere else.
I found a solution to mounting a NAS share on a HassOS system. My system is running HassOS 2021.4.6.
I could not get shell_command to work for running mount. I tried running a shell script from shell_command with mount in it, but the mount part always failed.
Using the SSH & Web Terminal add-on from Community Add-ons, the mount command worked, as did the shell script. So I expect the shell_command and the terminal run in different containers, as discussed in the posts above. (I’m not great on the underlying architecture.)
So, following an example in the documentation for the add-on, I created an Automation triggered on HA Start.
alias: Mount Images at boot
description: Mount the Reolink folder at boot.
trigger:
  - platform: homeassistant
    event: start
condition: []
action:
  - delay:
      hours: 0
      minutes: 1
      seconds: 0
      milliseconds: 0
  - service: hassio.addon_stdin
    data:
      addon: a0d7b954_ssh
      input: >-
        mount -t nfs4 10.1.1.16:/volume1/surveillance/Reolink /config/www/images/Reolink
mode: single
In this example I am mounting an NFS share but CIFS also works.
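For anyone wanting the CIFS variant, a hedged sketch of what the action could look like instead — the server address, share name, credentials, and target folder are placeholders:

```yaml
- service: hassio.addon_stdin
  data:
    addon: a0d7b954_ssh
    input: >-
      mount -t cifs -o username=myuser,password=mypass //192.168.1.20/share /config/www/images/share
```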
I spent the weekend trying to get the shell_command working without success. This worked on the first try.
Hello,
Thanks for your post, I’m now able to mount the share. But I can’t browse it from the media browser; the folder stays empty.
Did anyone get it to work?
Thanks
Nice! Gonna try that when I get a chance. I’d love to give HA the ability to access my NAS.
Better still would be to put the recorder database there, as well as have snapshots go there directly (rather than the SD card), but one step at a time.
With SQLite, you can set a custom db_url pointing to a path somewhere else. However, I don’t know how the automation will behave if the path is not yet mounted when HA starts. It can theoretically also cause HA to freeze (blocked on I/O) if the network share becomes slow or unavailable. Feel free to try and report your experience.
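A minimal sketch of what that could look like, assuming the share is mounted at /mnt/data (the path is a placeholder; note the four slashes SQLite needs for an absolute path):

```yaml
recorder:
  db_url: sqlite:////mnt/data/home-assistant_v2.db
```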
If you are storing a database somewhere else, you might as well just try to run MariaDB or PostgreSQL on your NAS.
Instead of saving snapshots directly to the NAS, what I’ve been doing is using samba-backup to automatically create a daily snapshot that gets copied to an SMB share, with old snapshots automatically deleted. It’s simple enough to set up and does not depend on the folder being mounted. You can also copy snapshots to other places.
Good luck!
It works for me, here’s a snippet of my configuration.yaml:
homeassistant:
  media_dirs:
    mnt: /mnt
    # Configuration is invalid if the directories don't exist.
    #music: /mnt/Music
    #video: /mnt/Video
I have a shell_command that runs mkdir followed by mount. That means those directories are not available at the beginning of the HA startup, which means I can’t use them in media_dirs. My solution is to use the parent directory instead.
For completeness, here’s the rest of my setup:
# configuration.yaml
shell_command:
  mount_music: !secret shell_mount_music
  mount_video: !secret shell_mount_video

# secrets.yaml
# Warning! These credentials may and will leak into HA logs.
# On the bright side, this SMB user has read-only access anyway.
shell_mount_music: 'mkdir -p /mnt/Music && mount -t cifs -o username=foo,password=bar //192.168.12.34/Music /mnt/Music'
shell_mount_video: 'mkdir -p /mnt/Video && mount -t cifs -o username=foo,password=bar //192.168.12.34/Video /mnt/Video'
- id: '1615168337199'
  alias: Startup - Mount /mnt directories
  description: ''
  trigger:
    - platform: homeassistant
      event: start
  condition: []
  action:
    - service: shell_command.mount_music
      data: {}
    - service: shell_command.mount_video
      data: {}
  mode: single
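Regarding the credentials-leak warning above: one way to keep the password out of the command line (and thus out of most logs) is mount.cifs’s credentials= option, pointing at a file readable only by root. A sketch, assuming /config/.smbcredentials as the (hypothetical) location:

```yaml
# secrets.yaml
shell_mount_music: 'mkdir -p /mnt/Music && mount -t cifs -o credentials=/config/.smbcredentials //192.168.12.34/Music /mnt/Music'

# /config/.smbcredentials (chmod 600), two lines:
#   username=foo
#   password=bar
```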
Yes, I’ve used that add-on, as well as the Google Drive Backup add-on. Both excellent tools.
However, they don’t get around the fundamental flaw with HA snapshots: the snapshot is first stored on the same drive as the HA OS. The “getting started” beginner’s guide for HA suggests using a Raspberry Pi and an SD card.
Then, after browsing these forums for a while, we learn that writing too much data to the SD card will kill the whole system.
My ideal would be for HA to direct the snapshots directly to an external storage location like a NAS, USB memory stick or whatever.
True, I agree with you. Writing snapshots to SD card first means more wear to the flash storage.
However, that is minimal compared to how much (and how often) recorder writes. A snapshot of a few megabytes every day isn’t much compared to up to one write every second (86400 times more often than a daily snapshot). Sure, each write is only a few kilobytes (but on different regions of the same file, which means different sectors), and not all seconds require something to be written…
Still, I suggest making sure the recorder database size is under control, and reducing how often it writes to storage. Also because, if you keep having many writes, moving the database to a network drive will just move the wear from one device to another.
(I don’t want to discourage you, I’m just pointing to what could give the best results for the amount of effort.)
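For anyone wanting to act on that, a hedged sketch of recorder settings that reduce write volume. The values are arbitrary examples and the excluded entities are hypothetical; commit_interval, purge_keep_days, and exclude are real recorder options:

```yaml
recorder:
  commit_interval: 30    # batch writes every 30 s instead of every second
  purge_keep_days: 7     # keep a week of history
  exclude:
    entity_globs:
      - sensor.*_uptime  # hypothetical example of noisy entities to skip
```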
You’re very right. I’ve mentioned that on this forum, too. Apparently I’m the only one with this concern, since the discussion has never been picked up. It seems crazy to me that the recorder defaults to saving every event and state change to the database. Worse, properly configuring it is a convoluted process requiring knowledge beyond most beginners. But this is getting WAY off topic, sorry.
Hello Denilson,
Thank you so much for your explanations it works great.
Lagrap
Hi guys,
please help me out with Docker.
I understand that the question is not entirely on topic; as far as I understand, this thread discusses the problem of connecting a remote media server to Hassio.
My task is a bit simpler:
There is Hassio running on Debian.
Debian itself runs on a laptop. The laptop has two SSD disks on which the server runs, and an HDD mounted as /home.
I was able to set up the Reolink camera so that it uploads (via FTP) the .mp4 video to the /home/scorpionspb/ftp/files folder.
How do I configure Hassio so that I can see the .mp4 files in the media browser?
media_dirs:
  media: "/media"
  recordings: "/home/scorpionspb/ftp/files"
This code doesn’t work in my case.
I assume you need to use a shell.
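A possible reason (an assumption about Supervised installs, not verified): media_dirs paths are resolved inside the Home Assistant Core container, which does not see the host’s /home unless that directory is mapped into it. You can check from the Debian host; the container name here is an assumption:

```shell
# Does the Core container see the host path at all? ("homeassistant" is assumed)
docker exec homeassistant ls /home/scorpionspb/ftp/files
```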
Searching led me to this topic just now, when googling whether anyone has run into the same problems, and yep, still the same. I can’t mount CIFS, and it seems not NFS either now, even with all these shell command tricks. I fail to see the purpose of the media browser on HassOS if you can’t mount anything on it. Did the devs think we’d copy everything onto the HA hard drives? Seems very odd.