By static I mean assigned by DHCP but never changing (fixed).
Static is a synonym for fixed; the two terms are interchangeable. A static/fixed IP is set in the device itself. Reserved IP addresses are set in the router.
I have explained what I meant by the terminology I used. There is no sense in arguing over whether I'm using the "right" term, because you already know the underlying meaning. Either you understood what I was trying to say or you didn't. If you understood, great! If you still don't understand after it has been explained, then that is your problem now.
There is no need to argue about this. Some integrations do need a static IP address for the device to work properly. On some devices you can set the IP address yourself; on others you can't, so you use the router to assign a static IP for those devices. And that is all there is to it.
Such as ??
Pretty much anything you add manually via IP address to HA.
Be specific. What integrations REQUIRE a static IP?
Android Debug Bridge, Frigate, Home Connect, Midea, MQTT, Samsung Smart TV, SSH, Syncthing, just to name a few that need a static IP.
ESPHome also works much better with a static IP, regardless of mDNS.
Yes, any ESP device will start faster (not work "much better") with a static IP, because the processor doesn't need to negotiate with the DHCP server. Once the connection is established, static or DHCP makes no difference to operation. By default the ESP processor will first try the last-used IP, so if that IP is reserved, it will connect without a DHCP negotiation.
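For reference, skipping DHCP on an ESPHome device is done with the `manual_ip` block in its YAML. The SSID, password, and addresses below are placeholders; use values valid for your own LAN, ideally outside the router's DHCP pool:

```yaml
# ESPHome: hard-code the address so the device never negotiates with DHCP.
# All values here are examples for a typical 192.168.1.x network.
wifi:
  ssid: "MyNetwork"
  password: "MyPassword"
  manual_ip:
    static_ip: 192.168.1.50
    gateway: 192.168.1.1
    subnet: 255.255.255.0
```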
Frigate configuration requires the camera IP, so the cameras should have reserved IP. I don’t know if Frigate can resolve an mDNS address. I’ve never tried it because my cameras have a reserved IP.
An MQTT broker is a server and should have a static IP.
Again, you assert that integrations REQUIRE a static IP. Almost all will work just fine with DHCP.
This is going way off-topic, so drop it.
I don’t think you get the central problem with DHCP. Sometimes, DHCP forgets an IP assignment and gives an existing device a new IP address. And when that happens, if the device was not added to HA using mDNS, it becomes unreachable.
THAT is why I insist on giving every device a known, fixed IP, whether DHCP-leased or not.
The only time I had an issue with DHCP was when two devices had the same MAC address. That is not supposed to be possible, and it took me weeks to figure out.
DHCP doesn’t “forget” an assignment. A DHCP server remembers an IP assignment for the lease time — typically 1 to 24 hours, depending on the server. Once the lease expires, the IP address may be reassigned to another device, but every time the client reconnects, the lease is renewed. A reserved IP is basically a permanent lease.
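As a concrete illustration, on a dnsmasq-based router (OpenWrt, Pi-hole, etc.; the MAC address and IPs below are placeholders) a reservation is just a permanent MAC-to-IP mapping alongside the normal pool:

```
# dnsmasq DHCP configuration (e.g. /etc/dnsmasq.conf) — example values only.
dhcp-range=192.168.1.100,192.168.1.199,24h   # normal pool, 24-hour leases
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50     # this MAC always gets .50
```

Devices with a reservation never lose their address at lease expiry, which is exactly the "permanent lease" behavior described above.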
A static (sometimes called fixed) IP is set in the device itself, hopefully in an address range outside the DHCP pool.
Again, you are using a fuzzy description of “fixed” IP. Of my 50 or so WiFi devices, about a third of them have a reserved IP. That is not a fixed (static) IP.
Now, back to the OP’s question: use DHCP as the default, reserved IPs when necessary, and a static IP only if absolutely necessary. From your description it sounds like you may have two DHCP servers, which will guarantee erratic behavior.
My intuition tells me you don’t have nearly enough networking experience. Otherwise you would not make that claim with a straight face.
Very rarely, a DHCP server can suffer data loss and lose its lease list. Then all hell breaks loose.
To avoid that (admittedly rare) issue, I recommend reserving IP addresses in the router's DHCP server, or configuring them by hand on the device if reservation won't work. Sometimes it's as simple as ticking a box in the router config so that an initially dynamically-assigned address belongs to that device forever.
A bonus of this approach: switching to a new router while preserving addresses and avoiding havoc is easy, because you already have a complete list of all known IP addresses to copy into the new router.
Late to this discussion, but here are my observations.
I attempted to migrate to a new subnet with a different IP range from my original when I was getting started on implementing an OPNsense router. Ultimately I would like to set up the HA instance in its own WiFi network with its own access point. This would allow me to lose the rest of the network and retain all my home automations.
It didn’t go so well. Most of my devices are based on DIY ESPHome boards, and they are extremely picky about the network they reside in. Some migrated, but most needed re-authentication, which was troublesome, and I had to reflash quite a few when they simply disappeared from the network and point-blank refused to reconnect. Those that did get discovered mostly stopped working with their automations. I gave up and reverted to the original IP range. It was a very painful experience altogether.
If I attempt it again I will take a different approach: leave Home Assistant in its original subnet (192.168.1.x) and create a single-SSID WiFi network with the original network name and password. I would then create a whole new subnet for everything else, with a firewall rule to allow my administration machines to talk into the HA network. Even this is fraught with complexity, as all my other fixed devices, such as Kodi boxes and the router itself, would need migrating, and this would likely cause unforeseen problems.
It’s always the unforeseen issues that crop up and take the longest to resolve (the unknown unknowns).
Ultimately I sort of share the observation of one of the other contributors: is the gain really worth it? All my Home Assistant devices are flashed with custom firmware, so the chances of any of them going rogue are just about zero, so what is the issue I am trying to avoid? Really, the only real gain would be that fabled standalone in-house HA that would trundle along no matter what I messed up in the rest of my LAN, and even then I foresee the need to add at least two dedicated access points to give good house coverage, which is quite a bit of expense and wasted watts of energy.
Going to brood on this for a little longer before attempting it again.
When putting Home Assistant in a separate subnet from ESPHome devices, or other devices that rely on device discovery, you have to create a discovery proxy (also called a multicast DNS reflector) between the two networks.
With something like OpenWRT, this is usually accomplished by creating two network interfaces in the router, one with access to each network, and then running the Avahi reflector (enable-reflector=yes in its config file), so that whenever a device broadcasts or queries multicast DNS in one subnet, the broadcasts and requests also reach the other subnet, and vice versa.
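For reference, the reflector option lives in the `[reflector]` section of `/etc/avahi/avahi-daemon.conf`. The interface names below are examples; use whichever interfaces sit on your two subnets:

```ini
# /etc/avahi/avahi-daemon.conf — minimal reflector setup (example interfaces).
[server]
allow-interfaces=br-lan,br-iot

[reflector]
enable-reflector=yes
```

With this in place, avahi-daemon re-announces mDNS traffic seen on one interface onto the other, so `.local` lookups keep working across the boundary.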
This is an artifact of how multicast DNS discovery is implemented. When a device is added via multicast DNS, Home Assistant (or whichever other system uses it) records a hostname ending in .local for that device. Multicast DNS is later used to look up the current IP address associated with that hostname, and this works as long as multicast responses stay on the same network. The minute a device is moved to a different network with no reflector between the two, multicast responses stop arriving, the hostname can no longer be resolved, and the device simply shows up as absent.
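To make the mechanism concrete, here is a minimal sketch (Python standard library only; the hostname is a placeholder) of the query an mDNS resolver multicasts when it needs the current IP for a `.local` name. Because the packet goes to the link-local group 224.0.0.251, it never crosses a router:

```python
import socket
import struct

MDNS_GROUP = "224.0.0.251"  # link-local multicast: routers never forward this
MDNS_PORT = 5353

def build_mdns_query(hostname: str) -> bytes:
    """Build a one-question mDNS query for the A record of e.g. 'esp.local'."""
    # DNS header: id=0, flags=0, QDCOUNT=1, no answer/authority/additional.
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def send_query(hostname: str) -> None:
    """Multicast the query on the local link; any responder there may answer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # RFC 6762 mandates TTL 255; it is the scope of 224.0.0.251 itself that
    # keeps the packet on the local subnet, not the TTL value.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
    sock.sendto(build_mdns_query(hostname), (MDNS_GROUP, MDNS_PORT))
    sock.close()
```

If Home Assistant and the device sit on different subnets, the query above never reaches the device and the answer never comes back, which is exactly why a reflector is needed.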
If you are unwilling to set up a reflector between the two networks, the hardware is unsuitable for one, or there are security concerns, the only other alternative is to add devices manually by IP address. This works for most devices (ESPHome included), but some devices rely entirely on discovery, and there is nothing that can be done to add them manually.
I am only covering multicast DNS discovery in this post, even though there is another discovery protocol, Universal Plug and Play. That one is even worse: there is no such thing as a reflector for it, it requires multicast routing to work across different networks, it needs a daemon (mrouted) running in multiple places, and sometimes even firewall-rule trickery to tamper with the TTL. It really is awful.
It’s my experience that multicast is handled very poorly in OPNsense, and the various proxies are a band-aid for a flawed implementation. It’s a real issue for certain technologies such as SAT>IP, and sometimes the only solution that works is to assign fixed IPs, plug all the devices into the same switch, and let them bypass the router altogether.
Multicast DNS needs a proxy. Packets are sent to the link-local multicast address 224.0.0.251, which routers never forward between subnets even if you add routing rules.