No it doesn’t. I run within a VM, not with the HassOS image. I run my own ubuntu VM with no docker at all and installed Home Assistant directly on it as if it was a machine. It wouldn’t be needed if it wasn’t running inside my NAS. If I did not already have a NAS, I would run:
Debian or ubuntu on any machine → Home Assistant Core.
No docker container nonsense, VM or supervisor. Single command line for upgrades. No deletion or config/restore of containers and configuration for upgrades and full OS capable of running a ton of other things.
The VM in the NAS gives me the additional capability of backing up and snapshot the entire VM which contains all my servers. It is a lot simpler and more efficient to manage.
NAS OS → Ubuntu VM → Home Assistant Core. (Not HassOS prebuilt image with or without venv) + a dozen of other home automation servers and services.
As I said, containers have their place and utility; there are situations where they are absolutely needed. For Home Assistant, I would say they are most often not (I have actually yet to run into a situation where one is), and I think they are being abused for the sake of convenience. A level of convenience which could be achieved much more efficiently using various simple OS shell scripts called from the GUI. The elephant was called out to crush the ant. Now we need to feed the elephant… It is very good at crushing ants, alright…
I suspect that most folks who’ve chosen to run in a venv would have no difficulty in either updating their OS, or installing Python manually. Even if you’re not going to use the pyenv installer, installing Python is only a few steps:
# build prerequisites on Debian/Ubuntu first:
sudo apt install build-essential libssl-dev zlib1g-dev libffi-dev libbz2-dev libsqlite3-dev
wget https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz
tar -zxvf Python-3.8.3.tgz
cd Python-3.8.3
./configure --enable-optimizations
make -j4
sudo make altinstall
Of course, there will be many folks who blindly installed using a venv, and they’re in trouble, but that’s why the other install methods exist.
And that’s almost no different from running “Debian → docker → HA Core”. The only real difference is that you literally have two complete OSes running, one inside the other.
Just because YOU don’t like docker for some hard-to-understand reason (even though you think a VM is the bee’s knees) doesn’t mean that it’s the right choice for anyone else.
Running the containers is as simple as a docker-compose or docker run command. The only things that are a bit tricky are the bind mounts. Once you get past those, it’s really dirt simple and very quick.
And you don’t have to worry about trying to figure out any VM stuff. If you find you no longer need something, or it didn’t work as expected, then two clicks and it’s gone.
If you like doing stuff from the command line, then there’s a way to do that just as easily, as nick mentioned.
You’ve said this before and all it shows is that you seem to have no idea how docker works. Remember those bind mounts I was talking about? They exist so that there is no loss of data or any configuration info.
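For reference, a minimal docker-compose sketch of what those bind mounts look like in practice (the host path is an example, and the image tag may differ depending on when you read this):

```yaml
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    network_mode: host          # host networking for discovery (mDNS etc.)
    restart: unless-stopped
    volumes:
      # bind mount: the config lives on the host, so recreating or
      # upgrading the container loses no data or configuration
      - /opt/hass-config:/config
      - /etc/localtime:/etc/localtime:ro
```

Delete and recreate the container as often as you like; everything under /opt/hass-config survives.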
As far as overhead or efficiency is concerned I have a NUC i3 and 16gb ram and I run 25 docker containers (including 3 instances of HA) along with a full Kodi media server and my CPU is running at 11% and 30% memory use. I wouldn’t call that a big drain on the system.
I wouldn’t call it abused but I agree it’s definitely for convenience.
create a new container but point it to a different config folder than your existing one (e.g. I use hass-config for my production HA and hass-test-config for the second)
Start that container and wait for it to write all the configuration files to the config folder you selected above. Do all of your on-boarding stuff.
after it’s all up and running, edit the http: port (server_port in configuration.yaml) in the new instance to use a different port than 8123. I use 8124 and it works fine on my system.
restart the new container and then you should have access to your second instance on whatever port you chose above using the IP of your docker host machine.
if it’s working then restart your production HA container again.
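The steps above can be sketched as commands. Container names, host paths, and the image tag are examples of my own, not anything prescribed:

```shell
# second instance with its own bind-mounted config folder
docker run -d --name hass-test \
  -v /opt/hass-test-config:/config \
  --network host \
  homeassistant/home-assistant:stable

# after onboarding, move it off 8123 by editing
# /opt/hass-test-config/configuration.yaml:
#   http:
#     server_port: 8124
docker restart hass-test
```

The production container never has to change; only the test instance picks a new port.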
are you running core or supervised install on your docker?
If you are running a supervised install, I’m not sure how you can (or even if it’s possible to) do what you want, since the supervisor controls everything.
If you are running HA core in docker then it’s trivial to do it by the procedure above.
Seeing posts from the 0.110 release, I wonder if custom components and custom lovelace cards are not what should be deprecated… nearly all users with issues have custom things…
I think you missed the original intention of the post, which was to alleviate issues with the supervisor which is developed and managed by the core team.
The core team does not develop or manage anything custom. So deprecating this would not alleviate any pressures.
That’s what you aren’t understanding. Home assistant supervised is supported by the core team, which is the reasoning for the post. They don’t want to support it based on their current resources. But as we can see by the [On Hold] status, wants and needs can be very different.
As for the custom side of things, that is 100% up to the user. It’s not home assistants responsibility to manage custom solutions, nor should it be. Even if the main dev team decided to deprecate custom cards, someone in the community would find a way around it.
I’ve changed from a supervised install to VirtualBox. I’m actually quite liking it. Got a couple of VMs running now. Nice to keep it all apart and neat! Only 1 week in, but it’s fun to try something new.
Problem was that I couldn’t update to the new version of Home Assistant today. Wasn’t sure why. In the end, found I needed to increase the size of my VM disk from 8GB to 32GB.
Just thought I would share that with anyone trying out a VM at the moment!
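For anyone hitting the same wall: growing a VirtualBox disk is a single command, run on the host with the VM powered off (the .vdi path below is just an example):

```shell
# grow the virtual disk to 32 GB (--resize takes megabytes)
VBoxManage modifymedium disk "~/VirtualBox VMs/hass/hass.vdi" --resize 32768
```

After that you still need to expand the partition/filesystem inside the guest (e.g. with gparted), since only the virtual disk itself has grown.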
I don’t see why you have to “stop using addons”. I run core with add-ons; I just had to download them and place them accordingly myself. I never got supervised add-ons working and just decided one day to use core. I don’t like my portainer UI littered with a bunch of containers that, it turns out, did nothing for me. That said, I think docker and venv are the best installation methods. Asking people to run an entire OS for a single piece of software is kinda ridiculous; it should go on top of any existing OS.
This completely ignores what the OS was made for, which is a handful of SoC boards and there to run docker and talk to specific hardware for specific boards.
I have dealt with this many times now and found out that the venv was actually very painful to deal with the first time when HA forced a python upgrade. That’s why I removed it and have been upgrading painlessly since. I am one of those who did not fully understand how the venv works and found it to add more pain than benefit.
As @Tinkerer posted above, it actually is pretty straightforward dealing even with multiple versions of python on the same OS. It is just about understanding systemd and changing the script launching the HA service.
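As a sketch, the piece you actually change is the ExecStart line of the systemd unit. The paths and user below are common-example values, not anything from my setup:

```ini
# /etc/systemd/system/home-assistant.service
[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=homeassistant
# point this at whichever venv (and therefore Python build) HA should use
ExecStart=/srv/homeassistant/bin/hass -c /home/homeassistant/.homeassistant

[Install]
WantedBy=multi-user.target
```

After editing, `sudo systemctl daemon-reload && sudo systemctl restart home-assistant` picks up the new interpreter. Switching Python versions is just rebuilding the venv and leaving the unit pointed at the same path.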
To some extent yes and it has its benefit: No dedicated resources and lightweight.
The downside and what I see as the major difference is that you can’t run anything else on it. Which I do. HA is less than 10% of that VM.
The VM stuff is easier to understand and to manage than docker, maybe because it is a full OS. Likewise, if something in my VM has gone wrong, I can do two clicks and delete it, without needing the mini-OS called a container.
I agree with you that the overhead is minimal, but it is non-zero and increases with the number of containers, as each container adds its own overhead. But why have it at all when you can get the same convenience in 99% of cases without it? I use docker too, but only when needed. I just don’t think HA needs it as much as it is using it. It is a design philosophy question more than anything else.

I was posting this in response to all these people who installed the supervisor not knowing what it does under the hood and said they can’t do without it. It only adds a level of convenience which could very easily be replaced and made much more efficient if the devs wanted to do so. Now the deprecation talk is because HA added a manager to maintain the system, the manager requires too much maintenance, so we want to deprecate the manager… The maintenance of the manager has become about the same as, and in some cases harder than, maintaining the system itself. Users may just not see it because someone else is doing it. If deprecation must happen, then it should be replaced with something equivalent…
After a week I finished my setup of HassOS on Proxmox. I had used the docker version only to keep all resources available for CCTV (Zoneminder). Now, after testing, my NUC8i3 / 32GB RAM runs Proxmox with 2 VMs, one with Ubuntu Server and one with HassOS, without any problem, and I’m using it as my definitive setup. The only problem I found is that I had to change lots of my custom scripts (bash/perl/python) that did lots of system-side automations, and I had to create new scripts to manage Proxmox VE from the “main server” VM. So now I can stop, start, and reboot HassOS from the main server even though they are 2 different VMs, and can shut down or reboot the VE host from it as well. I’m very happy, because Proxmox snapshots and the other features are awesome. Of course, I had to buy a new SSD for the space problem.
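A minimal sketch of that kind of cross-VM control, assuming the main-server VM can ssh to the Proxmox host and the HassOS VM has ID 101 (both the hostname and VM ID are assumptions, not from my actual scripts):

```shell
# from the "main server" VM, drive the HassOS VM via the Proxmox host's qm tool
ssh root@proxmox-host "qm stop 101"      # stop HassOS
ssh root@proxmox-host "qm start 101"     # start it again
ssh root@proxmox-host "qm reboot 101"    # reboot it (PVE 6+)

# and the PVE host itself:
ssh root@proxmox-host "shutdown -r now"
```

With ssh keys set up, these one-liners drop straight into the existing bash/perl/python automations.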
I went a very similar path as a test. What’s really strange is that I downgraded my hardware and added virtualization to the mix and yet… somehow… my HA instance appears to be faster. It might be a placebo effect, but I really don’t think so. It’s just got a bit more pep.
Old Setup:
S/W: Generic Linux Install / Ubuntu 18.04 Server foundation
H/W: Nuc 8th Gen i7 / Nvme hdd / 16GB Ram
New Test Setup:
S/W : Proxmox 6.2 / Hassos VM
H/W: Nuc 4th Gen i5 / SSD / 8GB Ram
VM configured with 4GB Ram and 4 virtual procs (host proc is dual core with 4 threads)
I also have a second VM with Ubuntu server 20.04 with 2GB allocated.
Somehow this seems snappier. My camera streams are smoother and load faster, and they seem to put less load on my processor than they did on the NUC 8 i7.
Don’t know how this is possible… but it’s what I’m currently seeing. If this holds up, I plan to migrate proxmox and the VMs over to the NUC 8 as a permanent setup. Very impressed with proxmox so far. The ease and speed of spinning up VMs, and the ability to easily create snapshots and save them on my NAS… heaven sent.
OH and I also have a Nortek zwave/zigbee stick attached with USB passthrough. Worked without a hiccup.