Permissions issue when backing up to an NFS share

I’m running HASS in a Proxmox VM and have been routinely running out of space as backups get larger, mainly from the habit of taking a backup with every update. So I figured it was about time to have backups go directly onto my NAS (I already have appropriate mirrors set up with Google Drive).

I created a network storage share for backups and triggered a backup. I can see it processing and creating the tar.gz files as the expected user (all_squash, export only exposed to the HA IP), but I consistently get the following error from the Supervisor:

24-01-09 16:56:30 ERROR (MainThread) [supervisor.utils.json] Can't write /data/mounts/nfs_backups/tmpu84zym4u/backup.json: [Errno 13] Permission denied: '/data/mounts/nfs_backups/tmpu84zym4u/tmpwqu6hih0'
24-01-09 16:56:30 ERROR (MainThread) [supervisor.backups.manager] Backup 6d63bf1d error
Traceback (most recent call last):
  File "/usr/src/supervisor/supervisor/utils/json.py", line 36, in write_json_file
    with atomic_write(jsonfile, overwrite=True) as fp:
  File "/usr/local/lib/python3.11/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/atomicwrites/__init__.py", line 169, in _open
    with get_fileobject(**self._open_kwargs) as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/atomicwrites/__init__.py", line 186, in get_fileobject
    descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/tempfile.py", line 341, in mkstemp
    return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/tempfile.py", line 256, in _mkstemp_inner
    fd = _os.open(file, flags, 0o600)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/data/mounts/nfs_backups/tmpu84zym4u/tmpwqu6hih0'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/src/supervisor/supervisor/backups/manager.py", line 240, in _do_backup
    async with backup:
  File "/usr/src/supervisor/supervisor/backups/backup.py", line 345, in __aexit__
    write_json_file(Path(self._tmp.name, "backup.json"), self._data)
  File "/usr/src/supervisor/supervisor/utils/json.py", line 40, in write_json_file
    raise JsonFileError(
supervisor.exceptions.JsonFileError: Can't write /data/mounts/nfs_backups/tmpu84zym4u/backup.json: [Errno 13] Permission denied: '/data/mounts/nfs_backups/tmpu84zym4u/tmpwqu6hih0'

If I change the network storage’s usage type from Backup to Share and SSH into the Home Assistant CLI, I can create files mimicking the structure completely fine, so it doesn’t seem to be a permission issue? But then the logs quite clearly state that it is?
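Since the Supervisor doesn’t just create files (it writes backup.json through atomicwrites, which goes through tempfile.mkstemp), a closer reproduction of its flow is something like the sketch below; this assumes it runs somewhere the mount is visible (e.g. inside the Supervisor container), with the path taken from the log above:

import os
import tempfile

# Path from the error log above; adjust to wherever the mount is visible.
base = "/data/mounts/nfs_backups"

# The Supervisor first creates a temp working dir (the tmpu84zym4u-style dir)...
workdir = tempfile.mkdtemp(dir=base)

# ...then atomicwrites calls tempfile.mkstemp inside it, which does
# os.open(file, O_RDWR | O_CREAT | O_EXCL, 0o600) -- the call raising EACCES.
fd, name = tempfile.mkstemp(dir=workdir)
os.write(fd, b"test")
os.close(fd)

# atomicwrites finishes by renaming the temp file into place.
os.rename(name, os.path.join(workdir, "backup.json"))
print("atomic write flow succeeded in", workdir)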

Any ideas on what could be the cause here?

For reference, here’s the export entry on the NAS:

/srv/nfs4/home-assistant-backups  192.168.x.x/32(rw,wdelay,root_squash,all_squash,no_subtree_check,fsid=3,anonuid=1000,anongid=1000,sec=sys,rw,secure,root_squash,all_squash)
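Side note: that entry lists rw, root_squash, and all_squash twice, and root_squash is redundant once all_squash is set (all_squash already covers root). A deduplicated equivalent, assuming the same intent, would look something like:

/srv/nfs4/home-assistant-backups  192.168.x.x/32(rw,all_squash,anonuid=1000,anongid=1000,no_subtree_check,fsid=3,sec=sys,secure)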

A couple of ideas:

What are the NFS logs in the NAS saying?

What if you set the permissions on the NFS location to 700 for the HA user? (See the sketch below.)
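For both of these, assuming a stock Linux NFS server on the NAS (paths and uid/gid taken from the export line above), something along these lines:

# Server-side logs that usually show NFS permission errors:
journalctl -u nfs-server
dmesg | grep -i nfsd

# Line the directory up with the anonuid/anongid the export squashes to,
# then restrict it to that user:
chown -R 1000:1000 /srv/nfs4/home-assistant-backups
chmod 700 /srv/nfs4/home-assistant-backups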

I noted the following in the server’s kernel log; it seemed interesting and lined up with my backup attempts (status=13 is EACCES, the same Errno 13 from the Supervisor traceback):

Jan 10 17:09:04 data kernel: [608285.940096] nfsd4_process_open2 failed to open newly-created file! status=13
Jan 10 17:09:04 data kernel: [608285.940218] WARNING: CPU: 3 PID: 7637 at fs/nfsd/nfs4proc.c:456 nfsd4_open+0x5ea/0x7d0 [nfsd]

It’s probably of note that the underlying filesystem being shared is a mergerfs mount, but to my understanding I have it in NFS compatibility mode.
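For anyone digging into the mergerfs angle: my understanding is that the mergerfs docs call out noforget and inodecalc=path-hash as the options needed when exporting a pool over NFS. A hypothetical fstab entry (the branch glob and mountpoint are placeholders, not my actual layout):

# Hypothetical /etc/fstab entry for an NFS-exportable mergerfs pool
/mnt/disk* /srv/nfs4 fuse.mergerfs allow_other,noforget,inodecalc=path-hash 0 0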

I shared the same directory via CIFS and the backup worked as expected, but my brain still believes that ultimately I want this working over NFS, given it’s Linux ↔ Linux hosts.


I’d research around this. Perhaps it has to do with some obscure interaction between os.open(file, flags, 0o600) (the exact call in the traceback) and the NFS setup.
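A quick probe to isolate exactly that call on the share (path borrowed from the logs above, run wherever the mount is visible) could look like:

import os

# Same flags and 0o600 mode that tempfile.mkstemp uses internally.
path = "/data/mounts/nfs_backups/open_test"
fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o600)
os.close(fd)
os.unlink(path)
print("os.open with mode 0o600 succeeded")

# If this fails while a plain touch-style create succeeds, the restrictive
# create mode combined with the all_squash/anonuid mapping is the likely trigger.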

I was running into the same problem, and it looks like it was a permissions issue on the NFS server side. I created a new NFS share at the root of my storage instead of one nested inside another SMB share, and that seems to have fixed the issue.