Configuring NUT integration to monitor remote NUT servers (pre-0.108.7)

Note: the issue that required using the setup method described below was fixed in 0.108.7. If you’re running that version or later, these steps shouldn’t be necessary.

We’re in the process of migrating from Domoticz to Home Assistant, and one of the major reasons for doing so is HA’s NUT integration. We live in a part of the world that is prone to what I will kindly refer to as ‘power events’, which means that having effective monitoring, event response, and historical tracking of what the UPSes are up to is rather important for us.

The layout of the property being what it is, all of the UPSes are connected via USB to RasPi Zero Ws. The RasPis are running NUT in netserver mode, which makes them perfect for remote monitoring. Unfortunately, this is a scenario that HA’s NUT documentation doesn’t really cover. I’ve been able to trial-and-error my way through addressing this, and now have a NUT integration in HA with multiple remote NUT instances being monitored; a single remote NUT server can also be monitored using this method. Home Assistant 0.108.3 was the version in use at the time of writing.
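
For reference, each RasPi is running stock NUT configured as a netserver. What follows is a minimal sketch of the remote side rather than my exact config; the UPS name is a placeholder, and the usbhid-ups driver with port auto is an assumption - use whatever driver matches your hardware:

# /etc/nut/nut.conf
MODE=netserver

# /etc/nut/ups.conf - 'remote_ups' and the driver are placeholders
[remote_ups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsd.conf - listen on all interfaces so HA can reach it
LISTEN 0.0.0.0 3493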

On the HA side, the first thing to do is install the NUT add-on. It lives in the Community Add-ons repository (https://addons.community), so you’ll need that repository configured in order to pull NUT from the Add-on Store.

Once that’s taken care of, do not start the add-on yet. There are two pieces of pre-configuration that need to take place first:

First, change the NUT add-on’s config to the following:

users:
  - username: foo
    password: bar
    instcmds:
      - all
    actions: []
devices:
  - name: Dummy UPS
    driver: dummy-ups
    port: /dev/null
    config: []
mode: netclient
shutdown_host: 'false'
remote_ups_host: a0d7b954-nut
remote_ups_name: upsmon on localhost
remote_ups_user: remote_monitoring_username
remote_ups_password: remote_monitoring_password

Note that the ‘mode’ option above is set to ‘netclient’. This will cause the ‘users’ and ‘devices’ configuration sections to be ignored. In the example above, they’re set to safe fake values.

Having said that, the remote_ups_user and remote_ups_password options do need to be changed. Set those to the username and password that your remote NUT instances use for their monitoring login.
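
For completeness, the matching entry on each remote monitor lives in upsd.users. Another sketch, with placeholder credentials and the classic ‘upsmon slave’ role assumed - adjust to whatever your monitors actually use:

# /etc/nut/upsd.users on each remote monitor - placeholder values
[remote_monitoring_username]
    password = remote_monitoring_password
    upsmon slave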

Next, in configuration.yaml, create sensors for your remote UPSes:

sensor:
  - platform: nut
    name: friendly_name_01
    host: ip_address_or_hostname_of_remote_NUT_instance
    port: 3493
    username: remote_monitoring_username
    password: remote_monitoring_password
    resources:
      - ups.load
      - ups.status
      - input.voltage
      - battery.runtime
  - platform: nut
    name: friendly_name_02
    host: ip_address_or_hostname_of_remote_NUT_instance
    port: 3493
    username: remote_monitoring_username
    password: remote_monitoring_password
    resources:
      - ups.load
      - ups.status
      - input.voltage
      - battery.runtime
  - platform: nut
    name: friendly_name_03
    host: ip_address_or_hostname_of_remote_NUT_instance
    port: 3493
    username: remote_monitoring_username
    password: remote_monitoring_password
    resources:
      - ups.load
      - ups.status
      - input.voltage
      - battery.runtime

Change the ‘name’, ‘host’, ‘username’, and ‘password’ options to reflect your environment. If you’re only monitoring one remote UPS, remove the extra two ‘platform’ sections; if you have more than three devices to monitor, add more sections as needed.
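
If you’d rather not leave credentials sitting in configuration.yaml as plain text, HA’s !secret mechanism works here as well. A sketch, assuming a secrets.yaml alongside configuration.yaml (the key names are arbitrary):

# secrets.yaml - key names are arbitrary
nut_user: remote_monitoring_username
nut_password: remote_monitoring_password

# configuration.yaml - reference the secrets from each sensor block
sensor:
  - platform: nut
    name: friendly_name_01
    host: ip_address_or_hostname_of_remote_NUT_instance
    port: 3493
    username: !secret nut_user
    password: !secret nut_password
    resources:
      - ups.status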

Once this is complete, go ahead and start the NUT add-on. You should see notifications that new entities are available for configuration (these are your remote NUT instances), and a NUT integration should be visible for each remote instance that was added.
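
To tie this back to the ‘event response’ part: once the sensors exist, a simple automation can alert you when a UPS goes on battery. A minimal sketch only - the entity ID follows the pattern the platform generates from the ‘name’ option, and the ‘OB’ status value is an assumption (NUT reports ‘OL’ for on-line and ‘OB’ for on-battery, sometimes with extra flags), so check Developer Tools for your actual IDs and values:

automation:
  - alias: UPS on battery
    trigger:
      platform: state
      entity_id: sensor.friendly_name_01_status  # generated from the sensor's 'name'
      to: 'OB'  # some UPSes report extra flags, e.g. 'OB DISCHRG'
    action:
      service: persistent_notification.create  # swap in your own notify service
      data:
        title: UPS alert
        message: friendly_name_01 is running on battery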

Hopefully this is helpful for someone else - it’s not intuitive how to address this scenario in HA, but it is definitely doable.

I’m surprised you have this working… I’ve been running nut for years on a couple of my home systems but just can’t get it working reliably with HA.

Currently on HassOS 3.12, I can get two out of three of my servers connected, but at least one of them is always throwing the following errors in the logs:

2020-04-12 16:22:36 ERROR (MainThread) [homeassistant.components.sensor] Entity id already exists - ignoring: sensor.cp1500pfc1_status. Platform nut does not generate unique IDs
2020-04-12 16:22:36 ERROR (MainThread) [homeassistant.components.sensor] Entity id already exists - ignoring: sensor.cp1500pfc1_battery_charge. Platform nut does not generate unique IDs
2020-04-12 16:22:36 ERROR (MainThread) [homeassistant.components.sensor] Entity id already exists - ignoring: sensor.cp1500pfc1_battery_runtime. Platform nut does not generate unique IDs
2020-04-12 16:22:36 ERROR (MainThread) [homeassistant.components.sensor] Entity id already exists - ignoring: sensor.cp1500pfc1_battery_voltage. Platform nut does not generate unique IDs
2020-04-12 16:22:36 ERROR (MainThread) [homeassistant.components.sensor] Entity id already exists - ignoring: sensor.cp1500pfc1_input_voltage. Platform nut does not generate unique IDs
2020-04-12 16:22:36 ERROR (MainThread) [homeassistant.components.sensor] Entity id already exists - ignoring: sensor.cp1500pfc1_output_voltage. Platform nut does not generate unique IDs

It seems like there is something odd with the way the NUT platform is generating entity IDs. My config itself is almost identical to yours:

- platform: nut
  name: cp1500pfc1
  host: 172.16.42.198
  port: 3493
  username: !secret nut_user
  password: !secret nut_password
  resources:
    - ups.load
    - ups.status
    - ups.status.display
    - battery.capacity
    - battery.charge
    - battery.runtime
    - battery.voltage
    - input.voltage
    - output.voltage
- platform: nut
  name: cp1500pfc2
  host: 172.16.42.17
  port: 3493
  username: !secret nut_user
  password: !secret nut_password
  resources:
    - ups.load
    - ups.status
    - ups.status.display
    - battery.capacity
    - battery.charge
    - battery.runtime
    - battery.voltage
    - input.voltage
    - output.voltage

And it doesn’t work at all through the Integrations UI if you have more than one NUT server on your network. I can’t find a way to get it to add more than a single integration, so you’re limited to a single IP/port combination. Yes, I could set up a “master” server that monitors the other two, but that gives me a single point of failure that I would like to avoid.

There’s been a bit of digging involved, at least as far as figuring out how NUT behaves under HA. Having said that, only three of my four UPS monitors currently show up in HA as integrations, but more on that below.

Haven’t run across this one myself. No idea what to tell you, I’m afraid.

This comes back to my earlier comment re: only three out of four UPS monitors being brought into HA as integrations. Your idea re: how NUT handles device IDs likely plays into this as well, but I feel there are other things going on too. The issue you encountered with only being able to add one NUT integration via the UI was also a wall that I ran into, but I was able to get an integration added for each monitor by working around the UI.

Two things that I learned in all of this:

  1. Unless you have your remote UPS monitors 100% up and running with NUT in netserver mode (meaning that they are configured correctly, recognising the UPS(es) attached to them, etc.), NUT on HA will not add the remote monitor as an integration. This is where my three-out-of-four problem became evident: one of the remote monitors is attached to its UPS via a USB cable that, I found out, has gone bad. NUT on HA will not recognise it at all, even though the monitor itself is completely healthy and working correctly otherwise.

  2. In order to have remote monitors added correctly as integrations, any existing configuration that may have been applied for them needs to be removed and re-added as follows:

  • Delete the existing integrations
  • Remove any sensors in configuration.yaml relating to the integrations you just deleted
  • Reboot (just doing a restart of HA from Configuration | Server Controls won’t cut it)

Once HA has rebooted:

  • Make absolutely certain that your remote monitors are 100% functional (see the upsc check below)
  • Re-add your sensors back into configuration.yaml (don’t add any that aren’t working absolutely correctly)
  • Reboot again

Finally, after all of that:

  • Check integrations and see if the remote monitors were added correctly.
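
Re: making absolutely certain the remote monitors are functional, the upsc client (from the nut-client package on most distros) is the quickest check; run it from any machine that can reach the monitor. Host and UPS names below are placeholders:

# List the UPSes a remote monitor is serving
upsc -l ip_address_or_hostname_of_remote_NUT_instance

# Query one variable from a specific UPS on it
upsc remote_ups@ip_address_or_hostname_of_remote_NUT_instance:3493 ups.status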

If that doesn’t work, I have no idea where to go from there. It did for me; YMMV.

This is the exact scenario I was looking to avoid as well. It seems to be doable, at least based on my experience - but NUT does seem to be slightly touchy under HA.

That’s basically the process I went through to get the two that work in the system now.

I’m going to go open a bug against only being able to add a single integration; with the announcement about deprecating YAML configurations, it sounds like that option is going to go away at some point in the future. EDIT: someone beat me to it: https://github.com/home-assistant/core/issues/33944

I monitor all of my UPSs (and the rest of my home systems) via Grafana and Sensu, so I know that NUT is up and stable. Ironically, the one that I do have problems with from time to time (an old RPi whose USB periodically has to be poked) is the one that is the most reliable in HA.

It looks like part of the issue with integrations was fixed in 0.108.6, but it still doesn’t work for me. I just opened a new bug: https://github.com/home-assistant/core/issues/34411.

I’m assuming it’s because my UPSs don’t return serial numbers, so HA can’t generate a unique ID. Since the config file has a unique name specified, though, that works.

Understood. Just added (minor) commentary to the GitHub tracker. Agreed that it should Just Work™.

BTW, I did get ahold of the correct cable for the 4th UPS today, and it is now reporting in - but I had to remove all integrations and sensors from configuration.yaml, reboot, then re-add them to configuration.yaml and reboot again before I could place them where I wanted in the UI.

Edited title and original post to reflect that the issue causing these steps to be necessary was fixed in 0.108.7.