Is it possible to convert an add-on for HA OS to run as a standalone container for HA in Docker?

Hi,

As HA in Docker doesn't support add-ons, but add-ons are essentially Docker containers, is it possible to convert an add-on to run in a standalone container so it can be used with HA in Docker?

Many add-ons already have standalone container images, so it is relatively simple to spin those up and use them with HA in Docker. However, some more homegrown add-ons don't have standalone builds.

Cheers

Can you give an example of an add-on that can't be run from Docker?

I'm working on an add-on that allows direct WhatsApp messages to be sent, i.e. not via a third party. It's not my code, and I was trying to convert it to run in Docker, so I wondered how feasible it would be. You can see more here - Use on HA Docker install · Issue #8 · giuseppecastaldo/ha-addons · GitHub

TBH, we might have got somewhere. I have a container running - to a point. It starts and then exits, and the logs suggest it’s looking for

url: 'http://supervisor/core/api/services/persistent_notification/create',

and this endpoint doesn't exist on the HA Docker install, but http://HA_Docker_IP_address/api/services/persistent_notification/create does.

Not exactly. There are a few non-obvious things that the Supervisor takes care of for you, which you'll have to handle yourself when talking directly to HA:

  1. That API requires authentication. Add-ons are provided with a Supervisor API token automatically; you'll need to ask Docker users for a long-lived access token.
  2. Users can set the port in their HA config, so you'll need to ask them for it. Or just ask them for the whole base URL, since you can't predict the container name or IP address either.
  3. Many users add their SSL certificate to HA under http:. When they do this, all communication with HA must be SSL encrypted; it no longer accepts plain HTTP traffic. The SSL certificate selected will almost certainly not be valid when talking from one container to another by IP address or Docker container name. Unless it's self-signed, in which case it can't be validated normally anyway. You'll need to ask for a CA cert or provide an option to skip SSL verification.
  4. HA also has a mutual TLS option. If you want to support users using this, then you must have a way for them to provide a CA cert plus a peer cert and keyfile.

To be fair, I'm not even sure if the Supervisor handles #4 right now. I know it didn't for a while; I forget if it was fixed. Either way, just be aware that you'll have to deal with all the HA HTTP options, since the Supervisor isn't doing it for you.
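To give a rough idea of #1 and #2, a standalone container ends up making a call along these lines (HA_TOKEN and HA_BASE_URL are placeholder names for values you'd have to collect from the user; for the SSL cases in #3 you'd add --cacert or an option to skip verification):

# minimal sketch: create a persistent notification directly against HA,
# authenticating with a user-supplied long-lived access token
curl -X POST \
  -H "Authorization: Bearer ${HA_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello from a standalone container"}' \
  "${HA_BASE_URL}/api/services/persistent_notification/create"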

2 Likes

Nice one @CentralCommand, there is some good info here. I saw the need for a long-lived token in the docs, and posted that to the git repo (albeit just as a hardcoded poke at the API).

On the cert front, it wouldn't be too much of a stretch to automate some of this with an ACME client and the Let's Encrypt CA. But still, I think that is for down the road. Getting something functional is step one.

I just wanted to say thank you both. I have managed to modify the add-on files and build a Docker image that can be used to create a standalone container. With a bit of environment variable passing from a docker-compose.yaml file, this seems to work nicely, albeit at a basic level. I have used a simple long-lived token as one of these variables, plus the ability to define the host HA container address. I will leave SSL to someone who actually knows what they are doing :wink: .
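For reference, the docker run equivalent of my compose file is roughly this (the variable names are just the ones I happened to pick):

# pass the long-lived token and the HA address in as environment variables
docker run -d --name whatsapp-ha \
  -e HA_TOKEN="<long-lived-access-token>" \
  -e HA_HOST="http://192.168.1.10:8123" \
  swinster/whatsapp-ha:latest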

One thing I don't know how to do at the moment is build an image for different architectures. I had to modify the Dockerfile to build from the HA base image that targeted my specific architecture (amd64). Still, it's not super important right now.

Nice! Not sure which add-on you tried this with, but if it starts it should be good. One other gotcha to be aware of: a number of add-ons actually don't support docker restart. That's because when you restart an add-on in HA, the Supervisor actually stops the container, deletes it, recreates it and runs the new one. And when you shut down the system, the Supervisor deletes all the add-on containers.

Some add-ons rely on this and move files around in their startup scripts. Those scripts then fail after a docker restart because the file they're looking for is missing. The add-on always expects to start from a container built clean from the image.

Since I expect you'll want to restart your add-on-derived container at some point, I'd recommend testing this. You might need to modify the startup scripts to support it.
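i.e. once it's up, test along these lines (using whatever name you gave your container; mine is just a guess):

docker restart whatsapp-ha   # simulates what a user will eventually do
docker logs -f whatsapp-ha   # watch the startup scripts on the second boot for missing-file errors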

You could just use the HA builder, TBH. It can be run locally and then just builds the per-arch images from the config. Or else pull out what you want from that code to use locally. That's what builds basically all add-on and HA images.
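From memory, running it locally looks roughly like this (flags as I recall them; check the builder repo for the exact set):

# mount the docker socket and the add-on folder, then build every architecture
docker run --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /path/to/your/addon:/data \
  homeassistant/amd64-builder --all --test --target /data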

1 Like

@CentralCommand I believe this is what @swinster is trying to get working.

Your posts are ever helpful; your intimate knowledge and clear explanations should be a lesson to the other devs.

1 Like

Correct @nickrout - indeed, if you want to look, this is the ongoing thread; the last few posts show an image pushed to Docker Hub and a manual build process (see Use on HA Docker install · Issue #8 · giuseppecastaldo/ha-addons · GitHub).

I will have to look into the HA builder. TBH, I’m just fumbling and faceplanting my way through, but it seemed like a good idea at the time :smiley:.

From what I can see, files are only copied into the image at build time, not at run time. I was trying to make it so that the thing could be built as an add-on or as a standalone Docker image without too much effort. There are some gotchas (like if you re-deploy the stack and the whatsapp-ha container starts before the HA container), but honestly, I have modified about 10 lines of code and created about 15 bugs :rofl:.

I don't see anything in particular that looks like a restart issue in the run.sh for that add-on, although there's a lot going on in that repo, so I'm not sure what is for the add-on and what is for a related custom component.

For reference, Grafana is an example of an add-on that likely can't be restarted, because of the mv lines in its run script.

Those lines will fail when the container is restarted, because the file it's trying to move no longer exists in the spot where the image put it.
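The usual fix is to guard the move so it only happens on a container's first start, e.g. (paths made up):

# the file only exists in a container created fresh from the image
if [ -f /etc/nginx/nginx.conf.orig ]; then
    mv /etc/nginx/nginx.conf.orig /etc/nginx/nginx.conf
fi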

This pattern seems to be common with add-ons that also include nginx as a secondary supporting service in the container, but it could happen with anything. Just add a docker restart to your test steps once you get it up and running, but before you call it done and ready.

Thanks Mike.

The whatsapp-ha repo seems relatively small, but it references a bunch of JavaScript/TypeScript here - GitHub - adiwajshing/Baileys: Lightweight full-featured WhatsApp Web + Multi-Device API. Coding is NOT my speciality :wink:

I will try to test out some things over the weekend.

@CentralCommand - I am looking into the HA Docker build process you mentioned, so I am looking at this page (Local add-on testing | Home Assistant Developer Docs), although I am not too sure what still needs to be done. For example, the first section, which relates to local add-on testing, refers to the devcontainer environment, but I think this requires a full HA OS deployment, which I'm not using.

I think, instead, I should be looking at the Local build section (Local add-on testing | Home Assistant Developer Docs); however, I am still struggling to understand the exact requirements.

Never mind, I think it's working :slight_smile: I converted the docker run command into a docker-compose.yaml file (just for ease of editing and dumping into Portainer) and pointed it at the add-on folder, which appears to be chugging away now.

Well, nearly :expressionless:

Most of the images seem to get created. However, there are two issues when I remove the --test argument:

  1. With the --test argument removed, the images fail to be pushed to my Docker Hub repo. The username and API token are specified correctly, and the login to Docker Hub succeeds right at the beginning of the build process (I can see Login Succeeded in the docker logs). However, for all builds I see logs like:
 => => exporting layers                                                   16.6s
 => => writing image sha256:562846cfb5194c610460c21f82129f5d9f2c1f29595c8  0.1s
 => => naming to docker.io/swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4  0.1s
[22:40:13] INFO: Finish build for swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4
[22:40:13] INFO: Create image tag: latest
[22:40:13] INFO: Start upload of swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4 (attempt #1/3)
[22:40:14] WARNING: Upload failed on attempt #1
[22:40:44] INFO: Start upload of swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4 (attempt #2/3)
[22:40:45] WARNING: Upload failed on attempt #2
[22:41:15] INFO: Start upload of swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4 (attempt #3/3)
[22:41:16] FATAL: Upload failed on attempt #3
  2. With the --test argument removed, not all the images are built. For example, I only see four images created (armhf, amd64, i386 and aarch64), even though the logs indicate that five are to be built. Interestingly, when the --test parameter is specified, all five images appear in the local Docker images:
[22:35:43] INFO: Run addon build for: armhf armv7 amd64 i386 aarch64
[22:35:43] INFO: Init cache for swinster/whatsapp-ha/whatsapp-ha-i386:1.2.4 with tag latest and platform linux/386
[22:35:43] INFO: Init cache for swinster/whatsapp-ha/whatsapp-ha-armhf:1.2.4 with tag latest and platform linux/arm/v6
[22:35:43] INFO: Init cache for swinster/whatsapp-ha/whatsapp-ha-amd64:1.2.4 with tag latest and platform linux/amd64
[22:35:43] INFO: Init cache for swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4 with tag latest and platform linux/arm64
[22:35:43] INFO: Init cache for swinster/whatsapp-ha/whatsapp-ha-armv7:1.2.4 with tag latest and platform linux/arm/v7
[22:35:47] INFO: Run build for swinster/whatsapp-ha/whatsapp-ha-i386:1.2.4 with platform linux/386
[22:35:48] INFO: Run build for swinster/whatsapp-ha/whatsapp-ha-armhf:1.2.4 with platform linux/arm/v6
[22:35:48] INFO: Run build for swinster/whatsapp-ha/whatsapp-ha-amd64:1.2.4 with platform linux/amd64
[22:35:50] INFO: Run build for swinster/whatsapp-ha/whatsapp-ha-aarch64:1.2.4 with platform linux/arm64

It is, of course, possible to push the images one by one manually after they have been built, but it would seem sensible to push them automatically from this process.

For info, this is the docker-compose.yaml file I have used:

version: '3.7'

services:
  builder:
    container_name: builder
    image: homeassistant/amd64-builder
    command:
      - --target
      - /data
      - --all
      - --test
      - --image
      - whatsapp-ha-{arch}
      - --docker-hub
      - swinster/whatsapp-ha
      - --docker-user
      - swinster
      - --docker-password
      - dckr_pat_xxxxxxxxxxxxxxxxxxxx
    volumes:
      - /data/ha-addons/whatsapp_addon:/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    privileged: true
    stdin_open: true
    tty: true
    restart: 'no'

Ohhhhh, I think I see what I have done. I have messed up my naming convention on the Docker Hub repositories. Currently, I only have the swinster/whatsapp-ha repo, so I should be pushing something like swinster/whatsapp-ha:tagname, whereas I am actually pushing swinster/whatsapp-ha/whatsapp-ha-aarch64:tagname.

I guess I should either set up one repo per architecture type (so whatsapp-ha-aarch64, whatsapp-ha-amd64, etc.) and then tag each image with the release version and latest, or ignore the architecture type in the image name (which seems to be what you guys do on Docker Hub - albeit you also have hundreds of repos for individual architectures as well).

OK, I managed to get it working with multiple repos in Docker Hub, but that does feel a bit over the top, and it might be nicer to have just one repo with different tags. I was hoping the os/architecture metadata would take care of things by itself, allowing a common image name. However, I can't apply a single tag to multiple local images, even if they are created for different architectures. I will think more about this tomorrow.

I feel I am very close to making something nearly useful (if only from a learning PoV). Unfortunately, this week ha-builder is having some issues building cross-platform images for me. Last week all was good, but there have been changes in the underlying whatsapp-ha repo this week (hence wanting to rebuild the images), so I'm not sure where the issue lies.

I would still love to be able to build and upload into a single Docker Hub repo, using the different platforms to identify the images for each version (as is the case with the homeassistant/home-assistant docker repo), but I’m not sure how to achieve this.
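From what I can tell, docker manifest might be the way to do this, though I haven't tried it yet. Something like the following (assuming the per-arch images have already been pushed to their own repos):

# combine the already-pushed per-arch images into a single multi-arch tag
docker manifest create swinster/whatsapp-ha:1.2.4 \
  swinster/whatsapp-ha-amd64:1.2.4 \
  swinster/whatsapp-ha-aarch64:1.2.4 \
  swinster/whatsapp-ha-armv7:1.2.4
docker manifest push swinster/whatsapp-ha:1.2.4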

1 Like

I have managed it for my add-on recently: Expaso/hassos-addon-timescaledb: A HomeAssistant add-on containing PostgreSQL, psql client and TimeScaleDb (github.com)

Basically, the trick is to avoid communication with the Supervisor API when not running in a managed Home Assistant environment.

I do this by overriding some scripts of the add-on base image from Frenck, and adding extra checks around the places where I communicate with the API.

Please note that the add-on configuration comes from the Supervisor API, which you won't get when running standalone. Your scripts should be built in such a way that they fall back to another form of configuration, like a file on disk mapped into the container.
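For example, at the top of the startup script (the standalone path here is just an example; use whatever you map into the container):

# SUPERVISOR_TOKEN is only set when running as a managed add-on
if [ -n "${SUPERVISOR_TOKEN}" ]; then
    # managed: the Supervisor provides the add-on options in /data/options.json
    OPTIONS_FILE=/data/options.json
else
    # standalone: fall back to a config file mounted into the container
    OPTIONS_FILE=/config/options.json
fi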