Alright, I’ve been playing with HA for a bit now. So far, I’m really liking it, but I’m still running into the “issue” of missing some add-ons. I understand that everything add-ons do can be done manually as well, but it would be nice not to have to remember a specific port to get some functionality, and just be able to have a button in the HA UI.
So, let’s take the text editor (formerly the Configurator, if I’ve understood correctly) as an example. I know it’s not required, and there are many other (even better) ways to handle editing configuration files and the like, but it’s just a simple example I’d like to try.
Goal:
Text editor (/configurator) button on the left side of the HA UI, near the Overview, Map, etc. buttons.
Functioning text editor when I click on that button
I tried:
I looked at this GitHub repo, which promises an editor comparable to what’s normally available as an add-on, but I don’t see any mention about integrating it in the UI. As far as I understand it, it’s just a completely separate piece of software that can be accessed through a browser, but does not in any way integrate with the HA UI.
So just to clarify: Is it possible to create exactly the same experience as provided by add-ons (including the button in the HA UI) by manually installing something, or is it only possible to create a (very) similar experience, without any direct integration into the HA UI?
So, for code/config editing, there is the excellent code-server container (Docker); run that on the same server you are running HA on. From there, use the iFrame panel to host it: iFrame panel - Home Assistant. I do this very thing and have them (HA and code-server) set up in the same docker-compose.yml, mostly because I don’t want to have to expose my HA filesystem as NFS.
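In case it helps, here’s a minimal sketch of that kind of setup; the image tags, paths, and ports are assumptions on my side, so adjust them to your environment:

```yaml
# docker-compose.yml (sketch) - HA and code-server side by side,
# sharing the same config directory so nothing needs to be exported over NFS
version: "3"
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config
    network_mode: host
    restart: unless-stopped

  code-server:
    image: lscr.io/linuxserver/code-server:latest
    volumes:
      - ./ha-config:/config/workspace   # edit the HA config from code-server
    ports:
      - "8443:8443"
    restart: unless-stopped
```

And the iFrame panel entry in configuration.yaml is what gives you the sidebar button (the URL is a placeholder for wherever code-server ends up):

```yaml
panel_iframe:
  editor:
    title: "Editor"
    icon: mdi:code-braces
    url: "http://192.168.1.10:8443"
```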
Core users will never have the same experience that Supervised users do. For some of us, that’s fine (myself, I make HEAVY use of bookmarks), but for others, not so much.
The iframe integration is pretty much exactly what I was looking for, thank you!
Do you maybe also know if it’s possible to reorder / remove the default entries in the sidebar in a way that is persistent? I found this merged PR, but that seems to be a local setting, i.e. stored in the browser, and doesn’t survive logging out/in.
@Timmiej93
I understand your confusion, because you will see different terminology depending on which document you read or how old the forum post is; add to that the confusion or personal interpretation of the forum poster, and the chaos is complete.
This is my take on the different install options, based on the install doc, glossary, and my own confusion:
(Also refer to the Compare Installation Methods table at the bottom of the Install doc)
Home Assistant Operating System
You flash your storage with an image that is a package consisting of:
the OS, “Home Assistant OS” (also referred to as HassOS)
the HA “application”. The app in this case consists of the full suite of HA components, namely Core (Home Assistant), Supervisor, Audio, and other components whose functionality is not quite clear (DNS, CLI, Multicast, Observer). Installing the app in this way is referred to as “Home Assistant Supervised”, or a supervised install.
These components run in Docker containers.
You get a locked down environment where you have limited access to the OS, can’t run your own containers, and are basically restricted to whatever HA has to offer.
Recommended for people who have no knowledge of, or interest in fiddling around with, the OS, and/or have no need to add any functionality that is not covered by the HA ecosystem.
Home Assistant Container
You run the Core container (Home Assistant) in Docker. You only have this one container, not the HA Supervisor or any of the other components mentioned under (1).
You get limited HA functionality, as you can’t load any add-ons. There are also no auto updates, or checks for new versions. You do have full access to the OS, and can manage your containers as you like.
Not quite sure who’d want to do it this way.
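For reference, this install type is roughly a single compose service, something like the sketch below (the image tag and paths are assumptions, not a definitive setup):

```yaml
# docker-compose.yml (sketch) - "Home Assistant Container": just Core, no Supervisor
version: "3"
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config
      - /etc/localtime:/etc/localtime:ro
    network_mode: host
    restart: unless-stopped
```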
Home Assistant Supervised (Yep, the same name is used for how HA itself is installed and also for how it is deployed w.r.t. the underlying OS.)
You install the OS. On a RPi that would be Debian.
Then you install “Home Assistant Supervised” ( the full suite of HA components).
And you get
Full access to the OS.
All features of Home Assistant. Ability to install add-ons, version checks and auto updates of certain components, etc.
Ability to manage your Docker environment and add new containers to your heart’s content.
THIS MAY BE WHAT YOU WANT.
Home Assistant Core
No idea.
When it comes to installing “HA Supervised” on Debian (nr 3 above), you could consider using the IOTstack setup to get you going. It’s basically as simple as flashing Debian to your SD/SSD, getting IOTstack from GitHub, selecting the stack of containers you want to run from its menu, waiting for Docker to be installed and the chosen images to be created, doing some app configuration, and you’re off to the races!
That is indeed exactly what happened. I also noticed that most of the HA documentation assumes that you know certain things. If you don’t know those things, you’re stuck even with the documentation in hand.
I’ve been running my test setup this way, and so far I can’t really complain, but on the other hand, I’m not that deep into the game yet. It’s harder to get functionality normally provided by add-ons, but not impossible as I’ve noticed.
Is it though? I want to run everything related to HA in one (or as few as possible) Docker containers. As I understand it, the supervised version of HA installs Docker, and creates at least one container (to run HA), plus a container for every add-on you install. Since I plan on running HA on an RPi, I’m not sure this is the best way to go about it. I’ve also seen that Docker inside Docker is, although officially supported, questionable?
In theory I agree with you: if the host machine is powerful enough, the supervised install is the best. You get full control over everything, and the most options. Whether it’s the right choice in my situation, I’m not so sure. Your post definitely makes me reconsider it though.
You should do some kind of capacity planning; see how you compare to others who host HA on an RPi. And of course ignore the outliers, guys who scream their heads off because the Pi is so slow. So many things can influence performance, like: are they (still) running on SD cards, or are they using an SSD?
Maybe a good question to start with would be how many integrations you have, or plan to have in the future. “In the future” is relative though, because with Moore’s Law you will upgrade your hardware at some point anyway.
But you must see it in perspective… controlling lights that you switch on when it gets dark and off when you go to bed, or a garage door that triggers twice a day (or maybe more after Corona), pushes up the integration count, but does not exactly count as a heavy load.
Be critical of what you really need, and discard the rest. Have a look here for an analytical approach to include/exclude sensors from the database.
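As a rough illustration, that kind of filtering ends up in configuration.yaml along these lines (the domains and entity names below are made up for the example):

```yaml
# recorder filter (sketch) - keep noisy or uninteresting entities out of the database
recorder:
  exclude:
    domains:
      - automation
      - sun
    entity_globs:
      - sensor.weather_*        # hypothetical chatty sensors
    entities:
      - sensor.date
```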
Below is the list of HA-related containers that are part of the supervised install:
(I removed some columns that are not adding value here)
You will definitely need MQTT, and most likely Node-Red. Let’s just start with these two for now. You will either have to load them as add-ons within HA, or else run them as containers outside HA. Both options are perfectly acceptable (I run them outside). So whether you choose install method 1, 2, or 3, the net effect may be exactly the same - they run in a container each.
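If you go the “outside HA” route, a sketch of those two as plain containers could look something like this (image tags, ports, and volume paths are assumptions, adjust to taste):

```yaml
# docker-compose.yml (sketch) - MQTT broker and Node-RED managed outside HA
version: "3"
services:
  mosquitto:
    image: eclipse-mosquitto:2
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto:/mosquitto/config
    restart: unless-stopped

  nodered:
    image: nodered/node-red:latest
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    restart: unless-stopped
```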
That raises the next question: convenience versus flexibility. Option 1 offers convenience but no flexibility - you can only do it one way (add-on), and can only use what is available. With option 2 you can only run them outside HA, because it does not support add-ons (any other options?). With option 3 you can choose whether you want to run them as an HA add-on or as a container managed outside HA. With 2 and 3 you have the ability to grow, to add additional containers as your needs change. As an example, I now wanted to host my own Mumble (Murmur) server as part of a VoIP intercom setup. I had the freedom to either run it in a container, or install the service directly on the Debian host. With option 1 you’re screwed.
Lots of people. I’ve even heard that many of the developers use this install method.
I’m using it right now for two installs of HA - one is my production system and another I do some testing on.
Definitely not true. I use HACS all the time.
You say that like it’s a bad thing. I think that’s a positive (see below). And besides, there’s nothing to auto-update. The only thing in HAOS or HA Supervised that auto-updates is the Supervisor (unless add-ons auto-update? I don’t use them…).
I personally despise auto-updating anything. If something breaks, then I at least have an idea why, since I just did an update, so it’s a good starting point to troubleshoot.
I get notified immediately as soon as there’s a new version so that’s not an issue either.
Then I update at my leisure with literally two clicks by using Portainer to manage my Docker containers.
Maybe, but definitely not “definitely”.
If you don’t use any WiFi-based devices, then MQTT isn’t needed.
And if you use ESPHome in API mode you don’t need it either.
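For example, a bare-bones ESPHome device config using the native API instead of MQTT might look something like this (the board, names, and credentials are placeholders):

```yaml
# ESPHome node (sketch) - native API, no MQTT broker required
esphome:
  name: example_node
esp8266:
  board: d1_mini
wifi:
  ssid: "MyWiFi"
  password: "not-a-real-password"
api:          # Home Assistant talks to the node directly over the native API
# mqtt:       # only needed if you prefer MQTT over (or alongside) the API
#   broker: 192.168.1.10
```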
Again not likely at all.
I’ve used HA for over 3 years (almost 4), and I dabbled in NR for about 15 minutes and never saw the use for it. I’ve been able to do anything I want in HA using automations, aside from a couple of AppDaemon apps.
Some people love it, some (most?) don’t.
This is the very basic install of HA directly onto the host OS in a Python virtual environment. It’s the exact same functionality as HA Container, except there is no Docker involved in running HA.
It’s what everybody ran before there was hassio (now renamed to HA OS).
That won’t be possible.
You can’t run “docker in HA”. It really doesn’t even make sense to think of it like that.
HA is an application. You can run it in a docker container or directly on the OS. It isn’t OS in itself so you can’t run docker in it.
Docker is an application that allows you to run applications in containers.
When you run HAOS you are installing an OS (hence the name) and that automatically installs docker and then automatically installs HA (the application) in a container along with all of the other containers needed to run HA Supervised as noted above.
So even with HAOS you aren’t running docker in HA. You are running HA in docker which is running on HA OS.
No matter how you install HA, everyone always gets the same underlying HA. The only difference is that if you run HAOS or HA Supervised you get the extra functionality of the Supervisor, which allows you to do some additional stuff (add-ons, snapshots, and one-click updates are the ones that come to mind).
But there are some downsides to the supervised install. The biggest is that it does automatic updates of the Supervisor with no user control at all. And it doesn’t always go well. There have been a few times people have woken up to a dead HA because of a failed supervisor update.
This still stands, pretty much as said. My goal is to have as few Docker containers related to HA as possible. So my (unknowing) ideal would be to have 1 container for HA, in which (for example) HA, an MQTT broker, and something else related to HA run. So everything related to HA is inside 1 container.
This one was more of a general remark, probably related to something like installing HA Supervised inside a docker container, but that idea has already been removed from my head.
As for who’d want to run it this way; I much prefer core over supervised because I don’t need (nor want) auto updating as I can manage everything on my own and there are update checks available for core (using binary_sensor.updater) and watchtower for docker containers.
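To make that concrete, here is a sketch of what those update checks could look like (paths and flags are illustrative; adjust as needed). In configuration.yaml:

```yaml
# configuration.yaml (sketch) - exposes binary_sensor.updater for new-release notifications
updater:
```

And for the containers, Watchtower in monitor-only mode notifies about new images without updating anything on its own:

```yaml
# docker-compose.yml (sketch) - Watchtower only reports, it does not auto-update
version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --monitor-only
```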
Since add-ons are pretty much nothing more than docker containers (with customizations to be able to interact with HassOS/Supervised), they can be replicated with a proper docker-compose.yaml file and some integrations with Lovelace.
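On the Lovelace side, a simple iframe card can at least surface such a container’s web UI on a dashboard, roughly mimicking an add-on’s page (the URL is a placeholder):

```yaml
# Lovelace card (sketch) - embed a self-hosted service's web UI on a dashboard
type: iframe
url: "http://192.168.1.10:1880"   # e.g. a Node-RED container running outside HA
aspect_ratio: 75%
```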
This is known as the “appliance” approach. There are pros and cons to this. Pros are that you have a single place to manage your home automation “system” and a super simple topology. Cons are that you have a single point of failure when something breaks or goes wonky (Yes, I said “when”, because with home automation, it is never an “IF”) and a simple topology will fail when your system grows beyond it.
In my house, I have over 1000 entities (150+ devices) operating on Zigbee (67 devices), Z-Wave (14 devices), and WiFi (97 devices) and I also capture energy and atmospheric data (temperature, humidity, lux, etc). All of this traffic (and trust me when I say, there’s a LOT of traffic) gets captured in various data stores (InfluxDB, MariaDB, Percona and Cassandra). Not to mention my MQTT traffic (of which I actually run a cluster of MQTT servers to reduce and spread the load around). I often tell people that if there is something in my house that can be automated or made “smart”, it probably is. The funny part is that compared to some of the users here, my environment is still considered to be small.
I’m sorry, but you should really work on your wording. You can’t go around saying it can’t be done if it can be done. It may not be the optimal way (or even anywhere close to a good idea), but it can be done, like aibish and code-in-progress mention. Saying it can’t be done only confuses those who have little to no knowledge on the subject, since they see others doing it, but then you say it simply can’t be done, as if that’s a fact.
I get that you want to prevent people from doing something dumb, but saying it can’t be done when it can is not the right way. People need to understand why something is a bad idea to really accept and comprehend it, if the only reason they know is “because some guy online said so”, that’s not gonna stick.
I appreciate the effort, I really do, but just repeating “it can’t be done”, despite others saying and showing it can be done, doesn’t help, it only confuses.
I guess you are right, but I was using “can’t” more in the colloquial sense of “you can, but why would you?”.
I mean technically you can have a windows pc with vmware on it running Debian in a vm running docker in that with a raspbian os inside running home assistant supervised along with mqtt, nginx, node-red, zwavejs, pihole, adguard, etc, etc all in the same container.
If your skills were up to the task you can technically do anything you want.
But why would you?
If you really understood docker well enough to completely build the image containing all of those things in it you likely would understand docker well enough to know that you wouldn’t want to do that.