Take a backup and check on your NAS that the file is there.
Doesn’t work for me when mounting my Buffalo LinkStation (Samba version 2):
Jun 08 14:49:13 homeassistant kernel: CIFS: Attempting to mount \\192.168.1.25\media
Jun 08 14:49:13 homeassistant mount: mount error(22): Invalid argument
Jun 08 14:49:13 homeassistant mount: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Jun 08 14:49:13 homeassistant kernel: CIFS: VFS: cifs_mount failed w/return code = -22
Jun 08 14:49:13 homeassistant systemd: mnt-data-supervisor-mounts-NAS.mount: Mount process exited, code=exited, status=32/n/a
Jun 08 14:49:13 homeassistant systemd: mnt-data-supervisor-mounts-NAS.mount: Failed with result 'exit-code'.
Jun 08 14:49:13 homeassistant systemd: Failed to mount Supervisor cifs mount: NAS.
Maybe it uses Samba version 3? My NAS has no NFS support.
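In case it helps with debugging: older Buffalo firmware often only speaks SMB1/SMB2, and the kernel's default dialect may not match. A manual mount from any Linux shell lets you test dialects explicitly (the IP and share name below are taken from the log above; the credentials are placeholders):

```shell
# Try forcing an older SMB dialect; adjust credentials to match your NAS.
sudo mkdir -p /mnt/test
sudo mount -t cifs //192.168.1.25/media /mnt/test -o username=guest,password=,vers=2.0
# If that still fails with -22, retry with vers=1.0 (requires SMB1 support in the kernel).
```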
I think you’re probably right. You could check the kernel log for a more descriptive error.
Good for a first effort; however, besides a username and password for the share, there also needs to be a way to specify a domain. I use Active Directory on my network, and the NAS uses it to authenticate the username/password.
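For reference, plain mount.cifs already supports AD domains via the domain= option, so the underlying plumbing exists even if the UI doesn't expose it yet. A hypothetical fstab-style line (server, share, and credentials are all made up):

```shell
# /etc/fstab-style CIFS entry authenticating against an AD domain
//nas.example.lan/backups  /mnt/backups  cifs  username=hauser,password=secret,domain=CORP,vers=3.0  0  0
```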
So far, I have tried the backup feature to a mapped drive on my Windows server, and it is working great.
I am curious whether I could use another shared mount in my YAML configuration; specifically, I want to be able to save DeepStack files to my server (see save_file_folder below).
- platform: deepstack_object
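For context, the snippet above got truncated. From memory of the custom deepstack_object integration's README (the key names are best-effort recollection, not verified against the current release), a fuller config looks roughly like:

```yaml
image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    save_file_folder: /config/snapshots/   # the folder you'd point at a network mount
    source:
      - entity_id: camera.front_door       # hypothetical camera entity
```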
Here’s my first question: Does the backup really get created on the share that you’ve mounted as backup? Or is it created on the local storage then copied there, like all the backup add-ons do?
There is a Feature Request for the ability to give the backups (formerly called snapshots) a meaningful name. It’s sort of languished for a few years, but I think this new Network Storage feature might give it more justification. Please vote if you agree.
Also, isn’t .tar a compressed format? I guess I always just assumed that, so I could be wrong.
Why is this limited to HA OS? I’d love this functionality, but use the Docker container.
Good question. This has been talked about a lot in the 2023.6 release thread and elsewhere.
As I (a plain ol’ HA OS user) understand it, the thought was that it’s not really needed in other OS configurations. If you’re already managing the OS and just running HA in a container or VM then you already have full control over what drives are mounted. I imagine you could run all of HA off a share from a remote server if you wanted.
What I suspect may be missing is the ability to run HA from a local drive, but create the backups on a (presumably already mounted) share. Or maybe you can and I’m just missing the details. So far the documentation has been fairly limited.
No, tar files are basically concatenated but not compressed. That’s why you often see them gzipped as well, as .tar.gz or .tgz.
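A quick stdlib check backs this up: a raw .tar embeds each member's bytes verbatim (just padded with 512-byte headers), while the gzipped variant is smaller and starts with the gzip magic number. A minimal Python sketch:

```python
import io
import tarfile

payload = b"home assistant backup payload " * 100  # highly compressible

def make_tar(compress: bool) -> bytes:
    """Build an in-memory archive containing a single member."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz" if compress else "w") as tf:
        info = tarfile.TarInfo("backup.json")
        info.size = len(payload)
        tf.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

plain = make_tar(False)
gzipped = make_tar(True)

assert payload in plain              # .tar stores file bytes as-is
assert gzipped[:2] == b"\x1f\x8b"    # gzip magic number
assert len(gzipped) < len(plain)     # compression actually shrinks it
```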
yeah i guess samba3
Great feature, but the path structure is annoying.
I have a Windows server. The path to backups in Windows would be
but in HA notation I have to use
Would be nice if this had at least been explained in the documentation
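Since the original paths didn't survive in the post, here's a hypothetical illustration of the mapping: a Windows UNC path \\SERVER\Share\Sub splits into the server, remote-share, and subfolder pieces that HA's forward-slash notation expects. A small Python helper makes the rule explicit (all names below are made up):

```python
def unc_to_ha(unc: str) -> tuple[str, str, str]:
    """Split a Windows UNC path into (server, share, subpath),
    with the subpath rewritten using forward slashes."""
    parts = unc.lstrip("\\").split("\\")
    server, share, *sub = parts
    return server, share, "/".join(sub)

# Hypothetical example:
# \\WINSERVER\Backups\HomeAssistant -> ("WINSERVER", "Backups", "HomeAssistant")
print(unc_to_ha(r"\\WINSERVER\Backups\HomeAssistant"))
```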
So far, the experience has not been good.
System: HA running in VMware on a Windows 10 host.
- In the UI, creating the Network Storage ‘share’ seems successful as no errors are produced.
- The created share is NOT visible in ‘Media’ >> ‘My Media’ in the UI.
- Using the Samba share add-on, I navigate to the Media folder and see the share has indeed been created; I can browse the files.
Found the problem.
Leftover cruft was causing me grief.
SFTP support in the future?
OK, I was able to successfully mount a share from my NAS as “backup.”
HA is still showing the local backups, and creating new ones on the local HA storage device. I restarted HA as well as rebooted the whole system (HAOS on a RPi.) No change.
Any ideas as to what I’m doing wrong?
That did it! Thanks!
I am not exactly sure where it writes the backup file while creating it, but I saw free space shrinking during a backup and no new files on the share until it completed. Once it finished, the space on the HA install freed up again and the .tar file appeared on the share.
So, it looks like we’re back to square one: Don’t use the HA backup if you don’t want to write excessively to your local storage device (like, if it’s low on space or it’s an SD card with limited lifetime writes.)
I was hoping to be able to automate the backups to my NAS right from HA, rather than have to “pull” just the config directory contents as part of my regular backup process, then manually do full HA backups sparingly, like just before I make a major change.
I haven’t tested the “share” option to see if it works the same way (writing locally first). If so, it wouldn’t be much better than writing locally and just keeping it there.
I’m mounting a Synology SMB share.
Basic functionality works; by that I mean I can mount the network share and use it for backups.
However I cannot seem to make it connect to any subfolder of the share, which is a bummer.
I’ve got a “Data” share and if I enter just Data into the “Remote Share” field things work.
If I attempt to add any subfolders such as Data/Backups/HomeAssistant then I get a mounting error.
Am I missing how I should define the folder structure, or does this just not work for the moment?
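For what it's worth, mount.cifs itself accepts a path below the share root (a "prefix path"), so the limitation may sit in the UI's validation rather than in the underlying mount. Worth testing manually from a Linux shell (hostname and credentials below are placeholders):

```shell
# Mounting a subfolder of the "Data" share directly via a prefix path
sudo mount -t cifs //nas.local/Data/Backups/HomeAssistant /mnt/test \
  -o username=hauser,password=secret,vers=3.0
```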
Anyone having issues using the media, from the Network storage, inside the cards?
I am able to use /local/Images/xxxxxx.png for the items inside “Media → My media”.
Inside My media I also see the NAS folder I added, and the images inside it. But if I try using one in a card as /local/Images/NAS_Folder/xxxxxx.png, it doesn’t work.
I understand it makes little sense to still use /local/xxxxxxxxx, but I can’t find the proper way to do it.