WoL - Not working

Not to dive too far into the Docker vs full VMs

That’s not true. I have host bridging for the containers on my MacBook (for work) using named networks in my docker-compose stacks. Perhaps you are referring to Docker Desktop, which is limited in its execution (by design)?
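For context, a named macvlan network is one way a compose stack can drop containers straight onto the LAN. This is only a minimal sketch of the idea – the interface name, subnet, and addresses are placeholder assumptions, not my actual setup:

```yaml
# Sketch: a named macvlan network that bridges containers onto the host's LAN.
# "eth0", the subnet, and the addresses below are placeholders.
networks:
  lan_bridge:
    driver: macvlan
    driver_opts:
      parent: eth0                 # host NIC the containers ride on
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    networks:
      lan_bridge:
        ipv4_address: 192.168.1.50 # container gets its own LAN address/MAC
```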

The option I’m referencing is quoted here.

The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

This has been a crippling limitation for some time now…

Also not true, as you can have multiple Docker stacks that are bound to individual NICs and interfaces.
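As a rough illustration only (the address and service are placeholders, not from this thread), pinning a published port to a single NIC just means prefixing the binding with that NIC’s address:

```yaml
# Sketch: publish a container port only on the NIC that owns 192.168.1.10.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    ports:
      - "192.168.1.10:8123:8123"   # not reachable via the host's other interfaces
```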

This might be true, but I’m referencing where you’ve bound the networking traffic to the host’s network. At this point, you’re much more constrained and will inherit the same IP address/MAC/etc. as the host – aka no multi-NIC usage with designated traffic without some form of VM mapping.

Also also not true. I have HA Core (Docker) running on “powerful hardware” (Ryzen 7 3700 with 64GB of RAM). The container takes as many resources as it needs. It runs in a separate stack connected to a central swarm across multiple hosts.

Last time I benchmarked Apple’s Virtualization framework on Apple Silicon devices, Docker was notably resource-constrained and didn’t perform nearly as well as full VMs. I might want to revisit/recheck this.

This one is partially true, but also not. Yes, you do lose some management benefits, but at the same time, there’s an ease-of-use factor going on as well. The only benefits that I know of that I lose are things like explicit CPU bindings and perhaps some over-simplified things (like privileged mode execution for containers).

Snapshot states, along with how the interfaces work for management, are powerful. Docker is also a pain-in-the-a** when it comes to maintaining/updating/etc. software. I don’t want to dive too far down this rabbit hole, but I do not have nice things to say to people who call Docker less hands-on – Docker is very hands-on. I can’t express how many features have broken on macOS without any sort of response/updates/communication from Docker – with no ETAs on fixes and skew in Docker version support. These issues rarely happen in a proper VM.

With all that said, it’s been a long time since I’ve used Fusion, but IIRC, you do need to bridge the hypervisor’s internal LAN with the host LAN. Otherwise, VMF just swallows the outbound magic packet and it never gets broadcast.

Yeah, this is where I need to find a solution.

Note, definitely would like to avoid #holywar territory on this front :sweat_smile:

lol agreed. :slight_smile:

True, but if you’re still using Docker Desktop to manage your stack, there are bigger problems to deal with.

Eh, in my instance (I have a LOT of containers spread across multiple hosts), the real hands-on work was building it all out (which I spent way too much time on). After I got all my extends and docker-compose files done, it was simply spin up the swarm and deploy. I haven’t had to actually touch anything aside from adding/removing containers for a couple of years now. All my pulls and monitoring are automated.

I’ve faced the same issue with hypervisors on Mac, with the only 100% reliable solution I’ve found being (laughably) Parallels. I will agree, Mac support for containerization SUCKS and has for many years. But unless it’s for work (where I have a dedicated devops/entops team to fall back on), I don’t run any of my smart home kit on Mac. Everything here is Linux. I even have a couple of older Mac Mini boxes running Ubuntu serving as hosts lol.

Ha! Same. We’re good.

Anyhow, I’m going to spin up a VMF instance and see if I can get WoL to broadcast out. I’m not entirely hopeful, but I’ll give it a shot.

There’s something about having an M1 Ultra Mac Studio with awesome power savings that makes this the most attractive approach – otherwise it would be dedicated Linux boxes. Plus I have a setup with fairly limitless DAS (150+ TB of storage) for consumption over Thunderbolt 4 connections. (I have quite a bit virtualized on this machine communicating locally with each other as well.)

I’m fairly stubborn about getting this up and working, given these machines will be upgraded at least every 4-5 years. Lots of benefits to portable, constantly upgraded hardware that’s at peak performance.

I was debating moving to another VM (like Parallels . . .) just to see what happens. The nuclear approach is to have HassOS make an API call to the host machine :sob:, which would then fire a WoL call. And thank you.

Side note, I’m fairly certain Docker Desktop is the only option outside of a Linux environment. I’m “pretty” sure that’s still the case.

You have no idea how much I hate you right now lmao. I might be a little jealous. :wink: My biggest box is my Unraid server with 64TB of storage, 10Gb networking, and a Ryzen 5800X CPU. He’s the workhorse of the house…

TBH, I’ve had good results with Parallels. I have 1 Windows VM and a couple of Ubuntu VMs running on a MacBook Pro (2021) and they have been rock solid with zero issues (and I do some crazy stuff with my poor VMs). Performance when having all 3 running at the same time has been good, with no noticeable lag on my desktop.

Now that I think about it, that might not be a bad approach. Simply create a REST command (or maybe a REST switch?) in HA and fire that off. Nice, clean, easy… :thinking:
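If it helps, a rest_command sketch along these lines is probably all HA needs – the URL, port, path, payload, and MAC below are assumptions about whatever listener ends up running on the host, not a known-working config:

```yaml
# Sketch (placeholders throughout): HA asks a small listener on the Mac host
# to send the magic packet, so the packet originates on real hardware.
rest_command:
  wake_desktop:
    url: "http://192.168.1.10:8080/wake"    # Mac host running the relay
    method: post
    content_type: "application/json"
    payload: '{"mac": "AA:BB:CC:DD:EE:FF"}' # machine to wake
```

From there it could be wrapped in a template switch or just called from an automation.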

Honestly? No clue. My Docker install is through Brew. I do have the GUI, so it’s entirely possible. But quite a few of our containers utilize host networking and they all work locally. TBH, I’m not a devops engineer, so I’m not sure if they tweaked something to make it work “properly” for our environment.

Yeah. I was really hoping I wouldn’t have to, as it’s just another thing I have to maintain. I can either spin up a small server on my side, or try using something simple like:

Unsure if there are any recommendations for out-of-the-box Homebrew utilities here.
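If it comes down to rolling one, a bare-bones relay doesn’t need much beyond the Python standard library. This is purely a sketch – the MAC, port, and /wake path are placeholder assumptions that would need to match whatever the HA rest_command calls:

```python
# Sketch of a tiny WoL relay run natively on the Mac host, so the VM/container
# only has to hit an HTTP endpoint. MAC, port, and path are placeholders.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET_MAC = "AA:BB:CC:DD:EE:FF"   # machine to wake (placeholder)
LISTEN_PORT = 8080                 # port the HA rest_command points at

def send_magic_packet(mac: str) -> None:
    """Standard WoL frame: 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, ("255.255.255.255", 9))  # broadcast on UDP port 9

class WakeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/wake":
            send_magic_packet(TARGET_MAC)
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", LISTEN_PORT), WakeHandler).serve_forever()
```

Since the packet then leaves from the Mac’s physical NIC, the hypervisor’s NAT/bridging behavior stops mattering.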

The macOS settings really confuse me, so I might not be of much help.
What I noticed is:

  • the 169 address, which is an IPv4 autoconfiguration (link-local) address.
  • the Ethernet bridge seems to have a DHCP client running and also has both an IPv4 and an IPv6 address, which a bridge should not need, since it should be a layer-2 device.
  • the IPv6 address is an fd65 address, which is a unique local address (ULA, from the fd00::/8 range) rather than link-local. IPv6 usually takes precedence over IPv4, so the magic packet might go out on that interface.

Yeah, the address you’re looking at is a pre-reservation address that just ensures the host doesn’t assign anything – or use it – as an interface. The Ethernet port is purely virtual.

And yeah, I have to double-check the IPv6 thing. I never really dove that far into IPv6 assignments, as most of my networking skills predate its popularization.