Block network administration from HA Supervised / Stop HA from auto configuring networks on reboot

Hi, I need to stop or block HA Supervised from setting the Ethernet adapter configuration.

The PC is a Debian 12 installation and acts as a router for internet traffic as well as running Home Assistant.

I cannot run Core or run in a container because I need to use add-ons.

For other reasons I cannot run HAOS in a virtual environment.

Any ideas? Something like disabling network in configuration.yaml would be perfect.

I think you missed the restrictions on Supervised:

This installation method can easily be broken if one manages the operating system incorrectly. Therefore, the following additional conditions apply:

The operating system is dedicated to running Home Assistant Supervised.
All system dependencies are installed according to the manual.
No additional software, outside of the Home Assistant ecosystem, is installed.
Docker needs to be correctly configured to use overlayfs2 storage, journald as the logging driver with Container name as Tag, and cgroup v1.
NetworkManager is installed and enabled in systemd.
Systemd journal gateway is enabled and mapped into supervisor as /run/systemd-journal-gatewayd.sock

Yeah fine. Unfortunately I can’t purely dedicate this machine to home assistant.

It seems to work fine when I configure the network manually after the system has booted, but upon reboot HA makes changes to the configuration that break the network.

So, I feel I’ve demonstrated that HA is fully able to run with my configuration; I just need to stop it changing the configuration on reboot.

IIRC (from my long ago attempts at supervised) if you remove the network-manager package, supervised cannot control your network.

It worked at the time (“unsupported”, ofc), but I guess you’ll get an “unhealthy” situation nowadays.


Remove the NetworkManager daemon from Debian? I have been thinking of trying that, but I think it will break other functions of my network. I’m researching ways to block HA from accessing it instead.


Well, the assumption here is that you can manage your network without relying on high-level software.
I’m pretty sure NetworkManager is not a default package.


Pretty sure it handles the virtual interfaces for the containers and the auto-configuration of adapters as they are added and removed (something that happens on this machine).

But yeah, maybe there is a way to configure this without NetworkManager. Will experiment with it.

Thanks for responding :slight_smile:

Pretty sure docker itself does that.

Likely, but that doesn’t mean you cannot do without :wink:


The core installation can run addons too, you just have to install and manage them yourself.

This is probably WAY easier than to try and fight the supervisor, which gets updated with new checks and functions quite often.


You have to be careful with words :wink:
Core and Container installations cannot run HA addons. They likely can run the software that addons wrap, though, indeed.


True. Bad wording.

You can gain the same functionality as the one addons provide, but you will not get the easy installation, update and management from within HA.



OK, looks like I have a solution :smiley:

Short answer: use something other than NetworkManager to handle the interfaces. After this, HA can’t see them and carries on happily as if they never existed. I chose systemd-networkd to manage the Ethernet interfaces because it’s already there, and dnsmasq for DNS, though the reason for that is slightly complicated.

Long answer:
So, the PC has an ethernet interface via a USB device that receives internet, a WiFi interface in AP mode (hotspot) that connects to all the IoT energy and temperature sensors in my house, and a PCI ethernet interface that shares internet to a small network behind a basic firewall.

The client (USB) interface was just auto-configured by DHCP, and the host Ethernet interface was usually set up with a static IP and ‘Shared to other computers’ in the NetworkManager applet. This is what was getting messed up by HA: it would reset the interface to either IPv4 disabled or a regular DHCP client.

HA can have its way with the WiFi network, as it is there for HA anyway. Fortunately, as you’ll read later, it keeps a static IP.

Reading documentation for the Network Manager applet, ‘Shared To Other Computers’ means:

Shared to other computers — Choose this option if the interface you are configuring is for sharing an Internet or WAN connection. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation (NAT).

That was surprisingly hard to come across, but it does tell us what to replicate in networkd.

So, I made both devices ‘Unmanaged’ in NetworkManager by following:

15.1. Permanently configuring a device as unmanaged in NetworkManager
… 2. Create the /etc/NetworkManager/conf.d/99-unmanaged-devices.conf file with the following content:

To configure a specific interface as unmanaged, add:


I made both the PCI interface (enp1s0) and the USB dongle (enx*) unmanaged. It is great that you can use wildcards here, because the USB dongle changes its name each time it is plugged in.
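For anyone following along, a minimal sketch of that file, assuming the standard NetworkManager keyfile syntax and my interface names (adjust to yours):

```ini
# /etc/NetworkManager/conf.d/99-unmanaged-devices.conf
[keyfile]
# Tell NetworkManager (and therefore HA Supervisor) to leave the PCI NIC
# and any USB dongle alone, whatever name the dongle gets (enx*)
unmanaged-devices=interface-name:enp1s0;interface-name:enx*
```

After writing it, `systemctl reload NetworkManager` and check with `nmcli device status` that both interfaces now show as `unmanaged`.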

Setting up the /etc/systemd/network/*.network files is pretty easy, I have 2:



Wildcards can be used here too, which is good. Not sure if the enx* interface needs a .network file, but it’s there.
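They looked roughly like this. The file names and the 10.42.0.1/24 address are from my setup (chosen to mimic NetworkManager’s ‘Shared to other computers’ range), so treat this as a sketch rather than a drop-in config:

```ini
# /etc/systemd/network/20-wan.network -- USB dongle, plain DHCP client
[Match]
Name=enx*

[Network]
DHCP=yes
```

```ini
# /etc/systemd/network/30-lan.network -- PCI NIC, the shared side
[Match]
Name=enp1s0

[Network]
Address=10.42.0.1/24
# Hand out leases to the small network behind the firewall
DHCPServer=yes
# NAT out through the default route, like NM's shared mode does
IPMasquerade=ipv4
```

Then `systemctl enable --now systemd-networkd`.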

I tried to set up dnsmasq for DNS, but found it conflicted with the copy of dnsmasq that NetworkManager runs.

For enp1s0, if EmitDNS= is listed on its own, it will emit the upstream DNS server address it knows about: the one automatically put in /etc/resolv.conf, which on this system is the DNS provided on adapter enx*. This works fine, except that the address is not static and resets every time enx* is plugged in and out. So DNS doesn’t work until the DHCP leases on enp1s0 expire, at which point the clients are re-sent the DNS address for enx*, whatever it happens to be at that moment.

I was struggling to get this to work with dnsmasq, as installing a new dnsmasq instance next to NetworkManager gives a conflict on port 53. But then I noticed that the WiFi hotspot has a static IP and a DNS server instance of dnsmasq running on that IP. So, for the DNS= setting of enp1s0, I just gave it that address, and it configures the DHCP clients to use the WiFi IP for DNS instead of the upstream one from enx*. dnsmasq handles the changing IP of enx*.
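In .network terms, assuming the hotspot’s dnsmasq sits at 10.42.0.1 (a placeholder; use your hotspot’s actual static IP), the relevant bit of the enp1s0 file is the [DHCPServer] section:

```ini
# Fragment of the enp1s0 .network file
[DHCPServer]
EmitDNS=yes
# Point DHCP clients at the hotspot's dnsmasq instead of the
# upstream resolver learned from enx*
DNS=10.42.0.1
```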

I believe you can add extra config files to use NetworkManager’s dnsmasq instance for other purposes, but I have been lucky enough not to need to.
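If you do need that, my understanding (an assumption, I haven’t verified it on this box) is that NetworkManager’s shared-mode dnsmasq reads drop-ins from /etc/NetworkManager/dnsmasq-shared.d/, so extra dnsmasq options could go in a file there:

```ini
# /etc/NetworkManager/dnsmasq-shared.d/extra.conf (hypothetical example)
# Pin a local name to an address on the shared network
address=/printer.lan/10.42.0.50
```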

So, all this seems to work, and it was a necessary learning process for me. This post will also act as a reference for me when all this inevitably blows up in the future and I need to remember what I was thinking :stuck_out_tongue: . But I hope it gets someone else on the right path too.

Good of you to post all this, but why do people try so hard to get around the rules given by the OS designer?

Your system will break at some stage, and you’ll wish you hadn’t.

It’s not going around anything in the OS; it’s going around hard-coded behaviour in the Home Assistant Supervisor.

If what you mean is why do people try to get around the rules set by the Home Assistant Supervised developers:

  1. We’re hobbyist tinkerers; of course we’re going to try and hack stuff.
  2. We might have needs that don’t quite fit the average HA user, and we want to see if we can make it work for us.
  3. Some of the rules, like ‘nothing else shall be running on the host operating system’, are somewhat arbitrary and over-restrictive at this point.

Like, if someone wants to put HA on an x86 router or a server that runs other things, and doesn’t want to waste the resources on virtualising a whole OS, why shouldn’t they try?

In my case, HA is fine with those two adapters doing other things, so violating the ‘rules’ has worked out for me. It would have been much easier to have a simple option to ‘ignore these interfaces’ or ‘disable automatic network configuration’, but to be fair those may just not be implemented yet.

But yeah, the ‘OS shall only run HA’ is overly restrictive, and I’m not going to try and meet that. I don’t want multiple PCs running when I don’t need to, and I’m not going to upgrade the hardware just to log MQTT data and host a webpage.

If my system breaks I’ll fix it, thank you. Home Assistant is not something that is essential to my life. If it eventually becomes too hard to maintain, I may migrate to another solution.

All fair enough. Horses for courses.

What continually amazes me is that proficient sysadmins jump through hoops to make Supervised work their way just for… add-ons, I guess?

It seems to me this is a perfect example where the workaround is 10x more complicated than using plain containers, both for HA and for third parties…

That’s exactly where I came to when I tried Supervised years ago, and that was before it became even more complicated and restrictive (because the devs want it to die, but don’t tell anyone :joy:)

I love a few things about addons, as opposed to standalone Docker containers:

  1. ingress
  2. authentication, e.g. the Mosquitto addon just uses a Home Assistant user.

Fair enough. I completely hate the restrictive and “greedy” (as in “that hardware is mine, all mine”) aspects of HAOS and Supervised, though.

:rofl: Well Chris B, most of the complexity is due to doing the network config the leet linux way outside of NetworkManager, which is what you bloody recommended!! :rofl: Haha

Nah, much of the complexity is due to the repeated IP address and adapter name changes from the USB dongle. Most people with proper NICs won’t have to deal with that; Home Assistant really doesn’t add any complexity.

And NetworkManager would randomly change its config every few months, even before I had HA, so I think this is actually more robust than what I had.

Aaand the whole setup isn’t even that complicated in the end; it’s just a few config files.

If you ask me, HA shouldn’t kill off Supervised. But it really should concern itself only with what it needs and try to be a little more robust. I don’t see any reason why it shouldn’t be able to co-exist with other software.

… Is this a good time to mention the computer is 32-bit x86 as well? :stuck_out_tongue_winking_eye: