Is it a VM or bare metal (rpi maybe)? Can you elaborate on your setup?
It's a Supermicro server. It's running office systems and doors; that's why I need the redundancy on ethernet ports.
What's the host OS and what type of installation of HA do you have? There are many things to consider in the network stack to get to the underlying problem, because there are multiple layers involved. The delay in the network interface coming up and becoming available might be prior to HA's own network configuration. If the device is not up and ready at the host level, it will add to delays at the guest level.
Gold star for you cr0muald0! Exactly what I was after. I do have a couple of questions that someone might be able to answer. I don't see ANY conf files under /etc/NetworkManager/system-connections/ after making those changes. Also, on the primary interface I had to do this - otherwise it would get a DHCP address and interfere with traffic for that IP network:
nmcli con modify "Supervisor enp2s0" connection.autoconnect no
Need to look at my switch configuration for the default PVID - I'm surprised it could GET an IP address... but that autoconnect no keeps it disabled at boot...
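For reference, that nmcli change ends up in the profile's keyfile on disk. A minimal sketch of what the "Supervisor enp2s0" profile might contain afterwards (contents assumed for illustration, not copied from this system):

```ini
# Hypothetical keyfile at
# /etc/NetworkManager/system-connections/Supervisor enp2s0.nmconnection
[connection]
id=Supervisor enp2s0
type=ethernet
interface-name=enp2s0
# Set by `nmcli con modify ... connection.autoconnect no`:
# the profile stays defined but won't come up at boot
autoconnect=false
```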
Hi, welcome! What kind of setup do you have? Creating NetworkManager profiles with the nmcli tool is the recommended way. A basic question would be whether you ran the sequence in the tutorial - "nmcli con add", "nmcli con show", "nmcli con edit", and last the "save" and "quit" commands, inside the editor, of course. But I'm guessing you double-checked all of these. You can always check write permissions to this folder by manually creating the connection with a text editor like nano. The manual for NetworkManager shows some information on how to create a profile for a connection:
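If you do go the manual route, a hand-written keyfile for a VLAN connection might look something like this (a sketch only - the names, VLAN ID and address are assumptions; the file must be root-owned with mode 600, and NetworkManager needs an `nmcli connection reload` to pick it up):

```ini
# Hypothetical /etc/NetworkManager/system-connections/enp2s0@vlan100.nmconnection
[connection]
id=enp2s0@vlan100
type=vlan
interface-name=enp2s0.100

[vlan]
# Tagged sub-interface of the physical NIC
parent=enp2s0
id=100

[ipv4]
method=manual
address1=192.168.1.250/24

[ipv6]
method=disabled
```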
Also check point 7 here for your routing problems:
I'm using the generic-x86-64 setup. I went through your procedure and was successful in creating VLAN 100 and VLAN 200 interfaces. I was pleased that when I installed the Mosquitto MQTT server it "listens" on port 1883 on both 192.168.1.250 (VLAN 100) and 192.168.2.250 (VLAN 200). Reboot, power off - it retains my VLAN configuration. But there are no files in that system-connections directory. What's odd about that is that early on, when I was experimenting with nmcli, I actually DID see files in system-connections. In fact I created a BACKUP directory in that directory and copied all the files from system-connections to that BACKUP, thinking that it might be a quick restore of that configuration. Before anyone asks, I used `ls -lstra` to look for the files in the directory. Now I don't have a BACKUP directory and system-connections is empty. It's a bit of a mystery, but I'll sleep fine tonight if I never solve it :-o
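As an aside for anyone running a standalone broker rather than the HA add-on: binding Mosquitto to specific addresses is done with per-address `listener` lines in mosquitto.conf. A sketch using the addresses from this post (illustrative only - the add-on generates its own config):

```conf
# Hypothetical mosquitto.conf fragment: one listener per VLAN address
# VLAN 100 address
listener 1883 192.168.1.250
# VLAN 200 address
listener 1883 192.168.2.250
# Require credentials rather than anonymous access
allow_anonymous false
```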
There is something about that environment that's not normal, though. Look at this:
➜ / sudo nmcli -f NAME,DEVICE,FILENAME connection show
NAME               DEVICE      FILENAME
enp2s0@vlan100     enp2s0.100  /etc/NetworkManager/system-connections/[email protected]
enp2s0@vlan200     enp2s0.200  /etc/NetworkManager/system-connections/[email protected]
Supervisor enp2s0  --          /etc/NetworkManager/system-connections/Supervisor enp2s0.nmconnection
➜ / cat /etc/NetworkManager/system-connections/[email protected]
cat: can't open '/etc/NetworkManager/system-connections/[email protected]': No such file or directory
That first command SHOWS those files in that system-connections directory - but you certainly can't `cat` them...
Certainly weird, but... are you running as root? These config files are root-readable only, and I see that you used sudo to run that first nmcli command. Try a "sudo su -" to elevate your permissions, or log in as root, and then list the contents of the folder. If this is not it, then these definitions are being stored somewhere else.
I think you are already root when you ssh in (whoami says so, anyway). I tried to elevate via sudo just for grins and it made no difference. Like you said, it's curious. I did a search of all files and did not find anything useful. I may dig around later and see, but for now this is just a science-project curiosity.
Are you running Home Assistant as a Docker container (Supervised) installation? What OS? And are these commands run at the host level or the container level?
I'm just running the vanilla installation of the current version of HA for "generic-x86-64". That's the one that has an OS (Alpine Linux) and the Supervisor (the Docker manager). These are all of the docker containers I see currently running:
Did you follow this tutorial or some similar installation?
I see that NetworkManager has some specific, detailed steps to work properly, and my feeling is that this is an edge case where not everything is set up properly, although it works sufficiently. Do you have dbus installed? I read somewhere that, in order to function properly inside a container, NetworkManager needs dbus installed. I would bet that there is some containerization layer preventing the user from accessing/seeing the folder and file contents while the system somehow can (imagine that you mount a volume on top of this folder after NetworkManager has already read the network profiles/configs - just wild guessing here...)
No... I went by the HA "official" install process for a generic-x86-64 install. There were no CLI commands. When it first booted up there were a number of docker containers running - the only changes I made were installing node-red, mariadb and esphome post-installation... At least some of those steps are already implemented in this version of HA. For example, they recommend you install NetworkManager - it's already installed...
BTW, if I log in via a directly attached keyboard/monitor I can see those files under the NetworkManager directory... SSH in, and no. In both cases whoami says root - but there are some restrictions going on via ssh...
I guess that explains it all and closes up our mystery: the config files are obviously stored at the host level and passed from the container to the host via nmcli and dbus, so when you log in to the host via terminal as root you can access/see them. When you do it via ssh you are logging in as root inside the container (a different kind of root), so the separation layer between host and containers, there for security reasons, prevents access to the host files. There are so many ways to install HA that I'm amazed the tutorial works so well for so many people!
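A quick way to tell which "root" you have is to look for container markers. A small heuristic sketch (not definitive - detection methods vary by runtime):

```shell
#!/bin/sh
# Heuristic check for whether this shell runs inside a container:
# Docker drops /.dockerenv into its containers, and PID 1's cgroup
# path often names the container runtime on containerized systems.
if [ -f /.dockerenv ] || grep -qE 'docker|containerd|lxc' /proc/1/cgroup 2>/dev/null; then
    echo "container"
else
    echo "host"
fi
```

Run it in both your ssh session and the attached console; on this kind of setup you would expect different answers.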
Good catch. It is the ssh container that processes my ssh connection - so that does make sense...
I did use that link you sent to customize my shell a bit more when I ssh in - to make it look more like a normal bash shell. I even installed neofetch and added that to the profile... It looks more like my Arch Linux workstation now!
Looking good!
Just wanted to add a cautionary tale about multi-homed HassOS/Home Assistant from my own experience. I ended up re-designing my network so that Home Assistant runs on a single interface.
I'm not saying multi-homing can't work, but in some situations it may cause other issues that complicate your network or your Home Assistant installation in ways that are difficult for the Home Assistant team to support, and therefore may not be stable. I think a single-homed model works best for HassOS/Home Assistant.
My network originally consisted of
- VLAN 1 - Servers/Management (Internet enabled)
- VLAN 10 - Client devices, laptops, mobile, etc. (Internet enabled)
- VLAN 100 - IoT devices (No Internet access)
- VLAN 101 - IoT and streaming devices (Internet access)
- Home Assistant running on HassOS on a Raspberry Pi 4 with interfaces on VLANs 1, 100 and 101
My Home Assistant configuration manages devices in VLANs 1, 100 and 101 and was accessed via VLAN 1, so it was set up with VLAN sub-interfaces on each of these VLANs.
The first problem is more idealistic than pragmatic, but the concept of bridging my server VLAN with my IoT VLANs on a device (other than my router/firewall) never sat well with me.
The second problem is real, and while there probably are good solutions, the complexity isn't worth it in my opinion. It comes down to mDNS (Multicast DNS).
For my client devices on VLAN 10 to be able to cast or AirPlay to my TV or Chromecast on VLAN 101, I need an mDNS reflector running on my router, since mDNS is a multicast protocol and cannot be directly routed across networks. I also have services running on VLAN 1 which rely on mDNS, so my mDNS reflector (avahi) listens on all of these networks.
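For context, the reflector part is a small switch in avahi's configuration. A minimal sketch of the relevant avahi-daemon.conf sections (the interface names are assumptions - substitute your own VLAN interfaces):

```ini
# Hypothetical /etc/avahi/avahi-daemon.conf fragment
[server]
# Limit avahi to the VLAN interfaces that should exchange mDNS
allow-interfaces=eth0.1,eth0.10,eth0.101

[reflector]
# Repeat mDNS packets between all allowed interfaces
enable-reflector=yes
```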
Home Assistant periodically announces itself on your network using mDNS so that during setup you can access the HA web interface without needing to know its IP address, by using the URL homeassistant.local:8123. This announcement has a safety mechanism: if Home Assistant sees that another device is already advertising the name homeassistant.local, it will add a random number to its own name and advertise that instead, e.g. homeassistant2342.local.
The problem when you have Home Assistant running on a multi-homed host with an mDNS reflector on those networks is that the mDNS announcements will be seen on all interfaces. Home Assistant has no way of knowing whether these reflected announcements are from itself or from another device, so it assumes there's a name collision, announces a new randomised hostname, which gets reflected back, and so on. This creates a loop in which your network can become saturated with mDNS announcements.
@skull29 hit this problem here:
My solution to all of this was not to disable mDNS but to re-think my network architecture.
I created a new VLAN (110) for Home Assistant, then through my firewall enabled access in and out of that VLAN from my IoT, server and client VLANs as needed. This way Home Assistant remains single-homed, mDNS doesn't get confused, and I have precise control over traffic in and out of Home Assistant and my IoT networks.
Can I do this using OPNsense? If yes, can you give me an idea of how you enabled access in and out of the VLANs? Was it through a rule?
Thank you
I think this is a great option, creating a DMZ for HA, if you can solve your broadcasting, routing and firewall issues. I remember I referred to it here:
OPNsense has an mDNS plugin:
https://docs.opnsense.org/manual/how-tos/multicast-dns.html
You need to set it up and create firewall rules to filter which IPs can traverse the network segments/VLANs.
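Putting the two together, the rule set ends up looking roughly like this (a pseudocode sketch, not literal OPNsense syntax - the VLAN names, HA address and ports are assumptions based on the posts above):

```
# Allow clients to reach Home Assistant's web UI and MQTT in the HA VLAN
pass   in on <clients VLAN 10>   proto tcp  to <HA address in VLAN 110>  port 8123, 1883
# Let the mDNS plugin relay multicast (224.0.0.251:5353) between only
# the segments that need discovery (clients <-> IoT/streaming <-> HA)
# Block everything else from the no-internet IoT VLAN
block  in on <IoT VLAN 100>      to any
```

The key design point is that every path in or out of Home Assistant crosses the firewall, so each one is an explicit, auditable rule.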
Thatâs it!