Synology DSM Sensor returning transposed details for two of my three Synology NAS devices

Hi,

I am using the Synology DSM Sensor component to grab stats from three Synology NAS devices on my home network.

I’m currently running HA v 0.63.1 (the latest revision at the time of this post).

Unfortunately, the details returned for two of the NAS devices are transposed. I’ve checked the configurations multiple times and compared the returned stats in HA vs those reported in the NAS DSM UI, and they’re definitely transposed.

Here’s a copy of the relevant section from my sensors.yaml file:

#### Synology NAS Devices
- platform: synologydsm
  host: 192.168.0.250
  name: 250 Synology DS214+
  port: 5000
  username: USERNAME
  password: !secret PASSWORD
  monitored_conditions:
    - cpu_total_load
    - memory_real_usage
    - volume_size_total
    - volume_size_used
    - volume_percentage_used
    - network_up
    - network_down
    - volume_disk_temp_avg

- platform: synologydsm
  host: 192.168.0.251
  name: 251 Synology DS212J
  port: 5000
  username: USERNAME
  password: !secret PASSWORD
  monitored_conditions:
    - cpu_total_load
    - memory_real_usage
    - volume_size_total
    - volume_size_used
    - volume_percentage_used
    - network_up
    - network_down
    - volume_disk_temp_avg

- platform: synologydsm
  host: 192.168.0.252
  name: 252 Synology DS214+
  port: 5000
  username: USERNAME
  password: !secret PASSWORD
  monitored_conditions:
    - cpu_total_load
    - memory_real_usage
    - volume_size_total
    - volume_size_used
    - volume_percentage_used
    - network_up
    - network_down
    - volume_disk_temp_avg

…and the relevant section from my groups.yaml file:

#### Synology NAS Devices
synologyNAS_250:
  name: 250 Synology DS214+  
  entities:
    - sensor.total_size_volume_1
    - sensor.used_space_volume_1
    - sensor.volume_used_volume_1
    - sensor.network_up
    - sensor.network_down
    - sensor.cpu_load_total
    - sensor.memory_usage_real
    - sensor.average_disk_temp_volume_1
synologyNAS_251:
  name: 251 Synology DS212J
  entities:
    - sensor.total_size_volume_1_2
    - sensor.used_space_volume_1_2
    - sensor.volume_used_volume_1_2
    - sensor.network_up_2
    - sensor.network_down_2
    - sensor.cpu_load_total_2
    - sensor.memory_usage_real_2
    - sensor.average_disk_temp_volume_1_2
synologyNAS_252:
  name: 252 Work NAS DS214+
  entities:
    - sensor.total_size_volume_1_3
    - sensor.used_space_volume_1_3
    - sensor.volume_used_volume_1_3
    - sensor.network_up_3
    - sensor.network_down_3
    - sensor.cpu_load_total_3
    - sensor.memory_usage_real_3
    - sensor.average_disk_temp_volume_1_3

…and the relevant section from my customize.yaml file:

#### Synology NAS - 250
sensor.average_disk_temp_volume_1:
  friendly_name: Disk Temp
  icon: mdi:thermometer-lines
sensor.cpu_load_total:
  friendly_name: CPU Load
  icon: mdi:chip
sensor.memory_usage_real:
  friendly_name: Memory Usage
  icon: mdi:memory
sensor.total_size_volume_1:
  friendly_name: Total Size
  icon: mdi:harddisk
sensor.used_space_volume_1:
  friendly_name: Used Space
  icon: mdi:chart-pie
sensor.volume_used_volume_1:
  friendly_name: Volume Used
  icon: mdi:chart-pie

#### Synology NAS - 251
sensor.average_disk_temp_volume_1_2:
  friendly_name: Disk Temp
  icon: mdi:thermometer-lines
sensor.cpu_load_total_2:
  friendly_name: CPU Load
  icon: mdi:chip
sensor.memory_usage_real_2:
  friendly_name: Memory Usage
  icon: mdi:memory
sensor.total_size_volume_1_2:
  friendly_name: Total Size
  icon: mdi:harddisk
sensor.used_space_volume_1_2:
  friendly_name: Used Space
  icon: mdi:chart-pie
sensor.volume_used_volume_1_2:
  friendly_name: Volume Used
  icon: mdi:chart-pie

#### Synology NAS - 252
sensor.average_disk_temp_volume_1_3:
  friendly_name: Disk Temp
  icon: mdi:thermometer-lines
sensor.cpu_load_total_3:
  friendly_name: CPU Load
  icon: mdi:chip
sensor.memory_usage_real_3:
  friendly_name: Memory Usage
  icon: mdi:memory
sensor.total_size_volume_1_3:
  friendly_name: Total Size
  icon: mdi:harddisk
sensor.used_space_volume_1_3:
  friendly_name: Used Space
  icon: mdi:chart-pie
sensor.volume_used_volume_1_3:
  friendly_name: Volume Used
  icon: mdi:chart-pie

…and these are grabs from the HA UI:

[screenshots of the sensor values for the three NAS devices]
It’s the stats against the second and third NAS devices (192…251 & 192…252) that get transposed.

Has anyone else experienced this, or does anyone have any helpful suggestions, please?

Many thanks,

Colin

I see nothing wrong with your config. Could be an issue with the Synology DSM component. I have one of them, but haven’t added it to my HA. Maybe I’ll do that tonight or later this week and report my findings.

Just for shits and giggles, have you tried reversing the names of the groups and looked at the result? Just to rule out any issues with groups.
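
For instance, something along these lines in groups.yaml (same entity lists you already have, just the two names swapped) should show whether the groups themselves are at fault:

synologyNAS_251:
  name: 252 Work NAS DS214+
  entities:
    # entity list unchanged from your 251 group
    - sensor.total_size_volume_1_2
    - sensor.used_space_volume_1_2
    - sensor.volume_used_volume_1_2
    - sensor.network_up_2
    - sensor.network_down_2
    - sensor.cpu_load_total_2
    - sensor.memory_usage_real_2
    - sensor.average_disk_temp_volume_1_2
synologyNAS_252:
  name: 251 Synology DS212J
  entities:
    # entity list unchanged from your 252 group
    - sensor.total_size_volume_1_3
    - sensor.used_space_volume_1_3
    - sensor.volume_used_volume_1_3
    - sensor.network_up_3
    - sensor.network_down_3
    - sensor.cpu_load_total_3
    - sensor.memory_usage_real_3
    - sensor.average_disk_temp_volume_1_3

If the numbers still follow the entity IDs rather than the group names, the groups are fine and the mix-up is happening when the sensors get created.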

My guess is that during discovery of the devices, the linkages get crossed. This means someone’s gotta debug into it.


Hi Petro,

Many thanks for your reply.

I think you’re possibly right regarding the discovery of the devices; on restarting HA numerous times today, I’ve found that the order of the NAS devices is not consistent.

Are you aware of a method of forcing HA to discover devices in a particular order?

I haven’t tried the S&G approach to reversing with the group names, but I’ll give it a go and report back!

Cheers,

Colin

Not that I know of. Also, this would depend on the component. You could take the component and adjust the Python code to force it to discover in any order you want.

Hmmm - maybe assigning the IP address (or part of it), or the MAC address, as part of the entity name during the discovery process; at least that way you’d be sure which device you were referring to.

I’ll have a look at the Python code and see if I can work it out.
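
In the meantime, one thing I might try (untested, and assuming the generic entity_namespace platform option is honoured by the synologydsm platform) is giving each entry its own namespace, so the generated entity IDs are tied to the host rather than to whichever device happens to be set up first. Something like:

- platform: synologydsm
  host: 192.168.0.251
  name: 251 Synology DS212J
  # assumption: entity_namespace prefixes every entity ID created by this entry,
  # e.g. sensor.nas_251_cpu_load_total instead of sensor.cpu_load_total_2
  entity_namespace: nas_251
  port: 5000
  username: USERNAME
  password: !secret PASSWORD
  monitored_conditions:
    - cpu_total_load
    - memory_real_usage
    - volume_size_total
    - volume_size_used
    - volume_percentage_used
    - network_up
    - network_down
    - volume_disk_temp_avg

That wouldn’t fix the discovery order itself, but at least each entity ID would stay tied to one device.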

Thanks petro,

Colin

Hello. I am running HA v 0.74.2 and have this problem as well. Like you, I have tried to re-order the names, but without any luck.

Larry

Hey Colin. I’m running 0.75.3 on a Pi and have just discovered that I have the same issue: one of my Synology NAS devices is labelled with the details of another one. Like you, I checked everything but cannot see where the (friendly?) names and the IPs could be linked.
Did you ever work out what the problem might be?

With regard to the discovery order, I hate to burst your bubble, but my HA is reporting the transposed name for a NAS that is turned off! When I do turn this NAS on, HA doesn’t report it.

Have you guys figured this out?
I have the same issue with two of my Synologys :frowning:

Hello. I’m curious if there is a setting to make the sensors update more frequently.

I’m getting this error now: “bad indentation of a mapping entry at line 18, column 13:
- platform: synologydsm”
^


You are doing it in the wrong file; you need to put it in your sensors.yaml file.

Here’s my example:

  - platform: synologydsm
    host: !secret synology1_ip
    name: synology1
    username: !secret synology_user
    password: !secret synology_pass
    scan_interval: 3600  # value is in seconds, so this polls once an hour
    monitored_conditions:
      - disk_smart_status
      - volume_size_used
      - volume_size_total
      - volume_percentage_used

It needs to line up with platform:

sensor:
  - platform: synologydsm
    scan_interval: 5
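
Put differently (assuming the usual split-config setup): if the platform entries live in a separate file pulled in with an !include, they start at the left margin of that file; if they sit directly in configuration.yaml, they have to be nested under the sensor: key. Roughly, with split config:

# configuration.yaml
sensor: !include sensors.yaml

# sensors.yaml (entries start at column 0)
- platform: synologydsm
  host: 192.168.0.250
  username: USERNAME
  password: !secret PASSWORD
  scan_interval: 5

or with everything inline:

# configuration.yaml
sensor:
  - platform: synologydsm
    host: 192.168.0.250
    username: USERNAME
    password: !secret PASSWORD
    scan_interval: 5

A “bad indentation of a mapping entry” error is the sort of thing you get when entries written for one layout are pasted into the other.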

That’s what I did in the first place but it doesn’t work. Is there a minimum number of seconds that needs to be set?

Doesn’t matter; that seconds value shouldn’t give this error: “bad indentation of a mapping entry at line 18, column 13”

That was because I wrote the code in the main config file. Now it’s in the correct sensor config but it doesn’t work. 11 minutes have passed since the last sensor update.

No idea, but why poll so frequently anyway?

Just to see if it works.

Indeed, still an open issue:

Thank you for taking your time to dig into this.