If you ssh into the host and run
do you see the home assistant container ?
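The command itself seems to have been dropped from the post; presumably it was something along the lines of `docker ps`. A minimal sketch (the container names are assumptions based on a typical Supervised install):

```shell
# On the host, list the names of all running containers:
#   docker ps --format '{{.Names}}'
# Hypothetical output on an install where core never started (names assumed):
sample='hassio_supervisor
hassio_dns
hassio_multicast'
# grep -x matches whole lines; report if the core container is absent:
echo "$sample" | grep -x 'homeassistant' || echo 'home assistant container not running'
```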
No, there were several hassio but no home assistant container. I just rebooted the pi and now it’s there and I can connect. Thanks!
@francisp Can I ask you one more question? In this setup (Supervised), where in my Debian filesystem should I find the config? The developer tools say
Path to configuration.yaml: /config
But I don’t have that folder in my root directory, so I’m assuming it must be somewhere else.
I just found your response in another thread: Where is the config file stored for supervised installations?
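For anyone else landing here: `/config` is a path inside the Home Assistant container, and on the host it is a Docker bind mount. A sketch of how to find the host side (the container name `homeassistant` and the `/usr/share/hassio/homeassistant` path are typical for Supervised installs, but verify on your own system):

```shell
# On the host, ask Docker where the container's mounts point:
#   docker inspect homeassistant \
#     --format '{{range .Mounts}}{{.Source}}:{{.Destination}} {{end}}'
# Hypothetical output for a Supervised install (paths assumed):
mounts='/usr/share/hassio/homeassistant:/config /etc/localtime:/etc/localtime'
# Pick out the host directory that backs /config:
for m in $mounts; do
  case $m in *:/config) echo "host side of /config: ${m%:/config}" ;; esac
done
```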
Is it just me, or is there no link in this saga of a thread to the NEW communication which says HA Supervised is fully supported and here to stay? Read the post; it’s about a week old and it closes the loop on this thread. In fact, this thread should be closed.
@balloob and @frenck, thanks for keeping that install method going. It provides the greatest flexibility, and we truly appreciate the extra effort it takes to keep Supervised around. Best news I’ve heard all day… stay safe, folks!
Good point, updated the initial post here. We had already updated the blog post but that didn’t get sync’d over.
Sorry, I was confused. I used a Raspberry Pi 3 B+ with the Raspbian Lite version, and I have many separate services on it, so I used the Hass.io generic installation because I don’t like the overkill of dedicating one Raspberry Pi to only one service. Now I see that this installation method has lost support. Is there any way to install my Hass.io via Docker like the previous installation, or something else?
It is no longer deprecated.
I have been using Hass.io (Supervised) on Docker for over a year now on my x86_64 Ubuntu machine. I am very comfortable with it and it’s working like a charm!
Updating, installing add-ons etc. is super easy. Yes, I can understand that one person is not enough, but supporting generic Linux with Docker is, I think, a pretty valuable feature, especially for people like me running several web services on one machine besides HA. Also, Raspberry Pis have been able to run Docker for quite some time now… I think there are a lot of people out there like me using their HA machine for other stuff as well. Yes, a VM may be an alternative, but as I see it, containers are much more popular these days…
IMHO it may even be worth dropping another installation method instead and putting more effort into this solution(?). Maybe you should start a poll on that matter…
At least I hope this masterpiece comes back to life some time. For now I have to research how to get a VM working…
just adding my two cents…
edit: OK, the deprecation is on hold; sorry, I didn’t get that at first. But I can’t find the installation instructions any more…
The install which you flash to an SD card and run on a Pi uses Docker. The difference is the operating system, not Docker.
That’s nice, but I still think there are many people out there not willing to throw away their already existing installation of Raspbian/Raspberry Pi OS or whatever, with already running services, just to add a new service like HA. A Pi 4 especially can definitely handle a lot more than “just” HA.
So I think a step-by-step installation guide/script is still very helpful to many people. I found the installation guide in the community guide section, and I hope it’ll become more “official” again in the future.
I feel that is exactly what I said. It is getting rid of the Supervisor on one specific type of installation, is it not? What did I misunderstand here?
That’s because I run HA on a NAS which has its own OS, on which I can run VMs but not Home Assistant. What works for you may not work for others. It is actually the only reason why I am running it within a VM. Running it this way gives me the same benefits you get from a container: backup, snapshots etc…
The same logic you mention applies to containers. You seem to assume that a container is not a virtualization layer… it is! It is just a stripped-down VM with a stripped-down OS to minimize resource usage. The only reason a container would be needed, just like a VM, is that the application cannot be run on the host OS. The thing you don’t understand about what I said applies the same way whether it is a VM or a container. We are basically saying the same thing: if you can run it on the host OS, why run it in a VM? Why run any containers at all?
We are saying the same thing here. Again, the only reason I don’t run HA on the host OS is that the host OS is a customized distribution of Linux which cannot run HA, and I would not do it for security reasons anyway. I therefore have the choice of running two dozen nearly identical mini OSes, each with its own application, called containers, or one single VM running one OS running all of these applications. Which one is simpler and more efficient? I am advocating that it is the latter, and by very far.
I did not say you can’t do the same thing with a container…
Why does HA run multiple containers and feel the need to add an additional supervisor? The only reason containers should multiply is that an application cannot run in the same container. If they could, and for HA I am convinced that most can, then there is no reason to have a supervisor to manage all these containers. However, by the time all the dependencies are added, and if you want to run other things in that container, the container has pretty much become a full VM. You should be able to run HA and all of the add-ons within a single container. A VM can do that easily, and I am sure HA could too, but as designed, the nonsense is to put every single integration and add-on within its own container. It adds complication and overhead. That’s all I am saying. I understand a large number of people here love Docker. I tried it, used it for various other applications outside of HA, and I really don’t understand why it is even implemented for HA, since its key benefit doesn’t really apply here: the ability to create multiple environments, separate and distinct from the host and from one another, that can be run with minimal overhead, much less than for a VM. That benefit only matters if these environments are different from the host and from one another, which is not the case for HA.
So one can either run HA on the host OS as an additional application alongside all others, or as its own OS on a dedicated machine, or within a VM/container if the host OS can’t run it for one reason or another. There is just no justification for adding the complexity of spawning a bunch of side components in their individual, identical mini OSes with a supervisor on top to manage them. Maybe there are add-ons and components requiring an environment incompatible with the HA core or with other components, but I have not seen one yet. It seems to be a case of “we do it because we can and it looks cool,” without serious thought about whether it is really needed, and underestimating the resulting complications it comes with. The cost-to-benefit ratio is just not there. Gaining a bit of convenience while adding a whole environment along with all the overhead, maintenance and support… There are much simpler ways to do the same thing.
It’s not. You’re using the word “Supervisor” which refers to an application distributed as a docker container in two installation methods. The first of those two installation methods will get a new name very soon and the second one is called Home Assistant Supervised (not supervisor). The original plan was not to eliminate Supervisor, the application, but Home Assistant Supervised, the installation method.
However, that plan was cancelled and Home Assistant Supervised will continue to be supported (although only if installed on one specific Linux distro: Debian).
Am I the only one who is curious about Pascal and how he is doing? Is he getting better? Is there someone who can take some of his workload off his hands already?
Just search for people who have had issues with dependencies over the years: an add-on/component may want one version, HA another. Docker solves this perfectly. The add-on owners then don’t have to worry about it.
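A toy illustration of that point (the library name and versions are invented): a single shared environment holds exactly one version of a library, so the last install wins, whereas containers let each consumer keep its own pin.

```shell
# Native install: one shared environment, so the last install overwrites.
# install_shared is a stand-in for "pip install" into a shared site-packages:
shared_env=''
install_shared() { shared_env="$1"; }
install_shared 'somelib==1.4'    # version the add-on wants
install_shared 'somelib==2.0'    # version HA core wants
echo "shared host env: $shared_env"    # only 2.0 survives

# Containerized: each image has its own filesystem, so both pins coexist:
addon_env='somelib==1.4'
core_env='somelib==2.0'
echo "add-on container: $addon_env / core container: $core_env"
```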
I keep hearing this argument - that all the add-ons are available as containers so you can just install whatever you want. That may be fine for something like mosquitto, but figuring out how to get all the autocomplete features working with node-red or vs-code, or even finding a docker version of vs-code, was not an easy task. At least it wasn’t when I did it last year.
When I moved from a Pi4 to a NUC and was deciding on an installation method, I was also trying to learn docker-compose so I chose to do it that way. It all works and Watchtower gives me automatic updates, but I have spent a lot of hours making everything work right.
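For reference, a docker-compose sketch along those lines (service names, tags and paths are assumptions, not the poster’s actual setup; `homeassistant/home-assistant` and `containrrr/watchtower` are the publicly published images):

```yaml
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    volumes:
      - ./config:/config              # HA configuration lives here on the host
      - /etc/localtime:/etc/localtime:ro
    network_mode: host                # host networking helps device discovery
    restart: unless-stopped
  watchtower:
    image: containrrr/watchtower      # watches running containers, pulls updates
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```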
It sounds like you don’t understand containers. There are many reasons to use containers rather than running everything natively on the host.
This might be a convenient way to look at it, but it’s not accurate. A container does not contain even a stripped down OS. The host virtualizes the OS and provides that to the container, rather than virtualizing the hardware as a hypervisor does for a VM.
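One quick way to see the difference: a container reuses the host kernel, while a VM boots its own. A sketch (the `docker run` line is left as a comment since it requires Docker on the machine):

```shell
# The host's kernel release:
host_kernel=$(uname -r)
echo "host kernel: $host_kernel"
# A container reports the very same string, because it has no kernel of its own:
#   docker run --rm alpine uname -r
# A VM, by contrast, prints whatever kernel its guest OS booted.
```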
THANK YOU for keeping this install method. Users like me are coming down from the Windows drug after 30-some-odd years (3.0 was my first Windows version, after DOS 2.11). I’m now LOVING Ubuntu 20.04, and I find this install method VERY, VERY helpful.
There are enough complicated and esoteric pieces to learn as it is. Here it is at 4:21AM and my install is finally back up and running after a TOTAL crash moving to 0.112. That went ok, but the new Supervisor install killed everything. Thank God I discovered Timeshift for backups last week. I had to do a full restore and then a complete reinstall of HA supervised on generic linux.
Sure, I could install each and every piece in Docker, but what a TOTAL PAIN in the a$$. The “deprecated” method is much preferred: simple, and it lets me focus on figuring out the depths of HA without having to be an OS jockey, which I’m NOT. Sure, it’s mildly amusing to be learning Linux from the ground up, but give us all a break here. Please!
Ok, off soap box. Thanks for listening.
After reading this post when it was first released in May, I migrated from Supervised on generic Linux over to the VM HassOS image; this was an easy enough migration.
One thing which bugged me was that I had previously been taking full automated VM backups of the supervised generic Linux VM onto my Synology NAS using the following method with Change Block Tracking enabled. https://www.synology.com/en-uk/knowledgebase/DSM/tutorial/Backup/How_to_enable_CBT_manually_for_a_virtual_machine
However, since migrating to HassOS, the image came with an IDE drive pre-configured, not SCSI, so I have been struggling to find any reference on the web for CBT of IDE drives.
Does anybody here know if this can be done, or have any reference docs for this?
I’m presuming that this may need to be converted from an IDE to a SCSI disk, but does the HassOS image support or have the driver for SCSI disks?