Which docker image to use for Intel NUC?

Hi there,

Historically I’ve been using the “intel-nuc-homeassistant” image for a couple of years now… It seems, though, that new version updates take a little longer to show up there, so I’ve been asking myself: is there a difference between the official home-assistant image (the one mentioned in the docs) and “intel-nuc-homeassistant”?

Thanks!

Unless you have a really old NUC, it will be quite overpowered for just running HA, so many people install a hypervisor like Proxmox and then run HAOS in a VM on it.
This also makes it possible to run other servers alongside it, like a media server, a NAS, or whatever else you need that cannot be combined with the HA image.

Just use ghcr.io/home-assistant/home-assistant:stable
The device-specific images are not supposed to be used directly.
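For reference, a typical way to run that image, roughly following the official container install docs (the timezone and config path below are placeholders you would adapt to your system):

```shell
# Placeholder timezone and host config path; adjust for your setup.
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  --privileged \
  -e TZ=Europe/Berlin \
  -v /PATH_TO_YOUR_CONFIG:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```

`--network=host` is needed for discovery of devices on your LAN, and the `/config` volume keeps your configuration across image updates.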

Looks like OP is running “Container”, here, specifically to be able to run other stuff than HA without the overhead of HAOS in a VM :wink:

Doesn’t Docker do the same?

Docker is a container service.
It is a form of virtualization, but it is not as strictly separated from the actual hardware and the host OS as a VM is, so you are limited by those factors.
You would not be able to run a Windows server on a Linux Docker system, but you can on a Linux hypervisor like Proxmox, QEMU, or similar.
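A quick way to see that separation difference for yourself (a sketch, assuming Docker is installed and you can pull the `alpine` image): containers don’t boot their own kernel, they share the host’s.

```shell
# Kernel version on the host
uname -r

# Kernel version inside a container: it reports the host's kernel,
# because containers share the host kernel rather than booting their own.
# A VM, by contrast, boots its own kernel (which is why it can run Windows).
docker run --rm alpine uname -r
```

Both commands print the same kernel release, which is exactly why a stock Linux container can’t natively run a Windows kernel.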

I think you will find a number of folks who have successful implementations of both Windows and macOS running inside Docker containers. I’m not saying this is a ‘better’ or more stable route than using a true hardware-based VM for either, just that it is possible.

Search on container vs virtualization to get a description of the difference.

And yes, you can run Windows in Docker, but that is Docker on Windows.

That Windows container is just running a virtualization layer (QEMU) inside Docker.

At best, beyond the “because I can” aspect, that’s just not understanding what containers are meant for.

I did not look into it that much, but running a hypervisor inside a container makes no sense at all.

FYI @koying and @WallyR, regarding the OP’s original goals and this IMHO incorrect statement: ‘You would not be able to run a Windows server on a Linux Docker system, but you can on a Linux hypervisor, like proxmox or qemu or similar.’, the post below is a good explainer of the options, benefits, and tradeoffs of various configurations. The number of virtualization options means you have to do a bit of homework. However, starting from a ‘root’ of Docker containers, or containers in general, might well be a good path IMHO to simplify long-term success.

That article also states that you cannot run Windows on a Linux Docker system and vice versa.
Only Windows on Windows and only Linux on Linux.

I agree, though, that this is a setup where putting a hypervisor inside a container makes sense, but only because the case here is several identical systems running side by side.

Still a far-fetched solution, imo.

The article seems (I only skimmed it) to focus on disk space optimization. My assumption is that you’ll still need to dedicate CPU and RAM to the VMs inside Docker, so it’s not like you’ll save a lot vs. using the hypervisor directly, but you’re guaranteed to have a lot more headaches :wink:

RAM and CPU will be affected too, because the kernel processes of the hypervisor can be shared.
But I agree about the complexity.

My point was that you will still have to dedicate CPU and RAM to the VM, be it in Docker or not, so you don’t save anything on that part.
That article actually describes how to save 5.62 GB of disk space (the Vagrant base) for each VM beyond the first one :wink:

I have tried to wrap my head around that article and it does not make sense to me.
Nearly all hypervisors have the option to use a read-only base image and then run a separate change (copy-on-write) file system for each VM.
I think that is what the article is trying to accomplish, just in strangely complex ways.


I agree :slight_smile:
That article just doesn’t make sense…