While the HASS docs are a start, they are missing so much information about schemas and what the different options actually do that I have no idea how things work, or how I should be testing my add-on locally.
There is a single paragraph on how to install an add-on locally, but it doesn't tell you what you need to do if you change the local add-on files and want to test the new code. Do I need to uninstall the add-on, remove it from the /addons folder, reboot the HASS host and, on a fresh start, place the new add-on files into /addons? Will HASS then re-read config.yaml and render the new config.yaml description and DOCS.md? Or do I need to change the folder name in /addons, because the add-on has its own ‘slug’/‘hostname’ that survives uninstalls and such? How is it all connected? How can I change the add-on slug/hostname? Where is the config for the add-on stored? It isn't in /config/.storage/* or home_assistant_v2.db. How does the add-on system know when there is new code in the local /addons folder that needs to be rebuilt so we can test the new code?
I've been having a NIGHTMARE of a time trying to make my add-on work because of how HASS handles add-ons. How can I get my new code into the add-on? Why are the docs so sparse that you have to guess and try every option and every value to see what it actually does? Why are the schemas for the config.yaml options not anywhere to be found? Why is there no schema for translations/{lang}.yaml? Why is there no explanation of how config.yaml nested dicts/lists render, or how to add translations for nested keys?
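To make the nested dicts/lists question concrete, here is the kind of option I mean (completely hypothetical names and values), together with the per-option translation entry that, as far as I can tell, is all you get:

# config.yaml (hypothetical option names)
options:
  mqtt:
    host: core-mosquitto
    port: 1883
  allowed_devices: []
schema:
  mqtt:
    host: str
    port: port
  allowed_devices:
    - str

# translations/en.yaml: per-option name/description is the only pattern I know of,
# so where do descriptions for the nested host/port keys go?
configuration:
  mqtt:
    name: MQTT connection
    description: Connection details for the broker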
Why is it so hard to develop a simple add-on for HASS? I've followed the dev docs and still have no idea how to get my new changes to show up in the add-on.
Am I missing something here? Is there some magical document somewhere that explains any of this?
I don't care about the name; I care about the backend that handles all of this and how it works. How do I get my add-on to be treated as if it has never been seen, so I can get the new, updated code rebuilt into the image? There are no docs on how adding the repository works. When you remove the repo and then re-add it, does it read config.yaml, DOCS.md and README.md and show the new changes?
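For reference, the only repo-level metadata file I'm aware of is repository.yaml, which seems to be just something like:

# repository.yaml (placeholder values)
name: My Example Add-on Repository
url: https://github.com/user/addon-repo
maintainer: "Your Name <you@example.com>"

So presumably everything else is read from each add-on's own config.yaml, DOCS.md and README.md, but when and how that re-read happens is exactly what isn't documented.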
When rebuilding the image, does the docker command use caching or not? In my Dockerfile, I install a Python lib from GitHub, and even when new code is pushed to that repo and the image is rebuilt, the new code isn't in the newly rebuilt image. So how does it all work, and why does it not work the way any sane person would think it would?
Bumping the version does not pull in new code. If I bump the version, go to the add-on store → check for updates, and do a Ctrl+F5 refresh, I can see the version number change, and DOCS.md and README.md seem to be re-read, but what about config.yaml? And when I re-add the repo and rebuild the image, will it pull the freshly pushed code from the Python lib source repo that pip is trying to install, or does it use a cached layer?
So, no, I didn't name it wrong.
Just imagine if all of this were somewhere in a document that people could read and understand, without having to deep-dive the complete core repo, dissect and reverse-engineer 16 other add-ons, AND THEN STILL have no idea how the hell the add-on build, metadata and storage work.
I'm going to try to detail what I mean. I have the port 22222 SSH server set up so I can access the host of my HASSOS install, and I also have the SSH add-on installed to SSH into the homeassistant container. I have a session open to the port 22222 host and am streaming docker logs -f hassio_supervisor.
Steps
I am developing an add-on (not locally in /addons) by pushing new code to a repo and then using the HASS add-on store to add the repo URL
I can see my add-on in the add-on store
I can read the ‘Documentation’ tab without installing the add-on
Install the add-on (Supervisor builds the image on-device because there is no image key in config.yaml)
The add-on starts and is working (1 TCP socket server, 1 FastAPI server, 1 aiomqtt client and aiohttp sessions, all working)
Go to use the ingress page; it works (FastAPI hosts the ingress page and the API)
Something is wrong with the backend Python FastAPI code. The ingress webpage queries FastAPI to initiate 2FA to a cloud account, submits the resulting OTP to get an access token, and then uses the access token to export your device list from your cloud account.
Here is where the problem lies:
I now fix the code and push it to the repo
Go to the ‘installed add-ons’ page, click the 3-dot menu, and click ‘check for updates’ (which does a git fetch/pull or similar, shown in the port 22222 hassio_supervisor logs, and pulls new code in for the installed add-ons)
My add-on seems to have pulled the new commits in (the only way to know for sure is to bump the version, change DOCS.md or README.md and see if the new data is rendered on my add-on page), so I go to my add-on's page and click the rebuild button to rebuild the image.
Wait for the new image to be built and started, then re-test the ingress page: the new code wasn't pulled in.
???
Dockerfile
In my image, I install one external Python lib from GitHub using the pip install 'git+https://github.com/user/repo#ref' nomenclature. This is the code for the TCP socket server, FastAPI, aiomqtt, etc. So, when I push new code for the backend, it goes to this repo.
Push new code to the backend repo
In the add-on store, check for updates, and then in my add-on, rebuild the image
The add-on repo has not changed at all, only the repo which hosts a lib that is installed in the Dockerfile. What happens when the image is rebuilt?
I don't think the docker command uses a cache, because every time I rebuild the image using the rebuild button, it takes a bit of time.
Since the backend repo is the only one that changed, is HASS using its own cache or something? What's going on that, when the image is rebuilt, it isn't installing the backend Python lib from the newly committed data?
IDK if that makes it any clearer. Basically, I am trying to figure out why new code isn't making its way into a newly built image, even when new code is pushed to the backend repo.
Do I need to change the backend repo and then change some stuff in the add-on repo (like bump the version or edit README.md, DOCS.md, translations/en.yaml, config.yaml, repository.yaml) to bust some sort of cache?
I just recently tried doing the local /addons way, but when I edit config.yaml, README, DOCS, repository.yaml, etc., the changes don't show up in the add-on store, which tells me it isn't pulling newly added data from the local /addons folder. So how do I trigger it to re-read local add-on data?
Are there any docs that outline any of this? I've looked but came up empty-handed.
Sorry, I was very cranky yesterday after that experience. I have some recommendations to make things easier for anyone else wanting to develop an add-on.
I use PyCharm, so the devcontainers setup wasn't an option.
Develop your add-on locally, on the HASS machine, by using the Studio Code Server add-on and placing your add-on folder into the /addons directory.
Make sure the image: key is commented out in your config.yaml file, to force Supervisor to build the add-on Docker image locally (see the minimal sketch below).
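A minimal config.yaml sketch of what I mean (placeholder values; the only part that matters here is that image: stays commented out):

# config.yaml (placeholder values)
name: My Add-on
version: "0.1.0"
slug: my_addon
description: Local development build
arch:
  - amd64
  - aarch64
# image: ghcr.io/user/{arch}-my-addon    # keep this commented out so Supervisor builds the image locally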
In order to pull new changes into your LOCAL add-on:
Navigate to the add-on management page (where the start / stop / restart / rebuild / open web UI buttons are)
For README.md, DOCS.md and CHANGELOG.md: Do a CTRL + F5 refresh on the add-on management page
For logo.png and icon.png: Do a CTRL + F5 refresh on the add-on management page
For Dockerfile: Click the REBUILD button to delete the current image and rebuild a new one from scratch (no docker build caching; whenever you press REBUILD, the Dockerfile is re-read)
For config.yaml (Configuration tab) changes: they won't show until you REBUILD the image (the same goes for translations/, apparmor.txt, run.sh and build.yaml changes)
After the rebuild, your new changes will be in the add-on image/container (config.yaml changes will be reflected: new options / schema / permissions / mappings / etc.; see the sketch below). You may need to CTRL + F5 refresh, though.
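To make that concrete, this is roughly the kind of config.yaml content I mean (hypothetical option names and values): the Configuration tab options/schema plus permission/mapping keys that only show up after a REBUILD:

# config.yaml (hypothetical values)
options:
  log_level: info
  broker_host: core-mosquitto
schema:
  log_level: list(debug|info|warning|error)
  broker_host: str
ports:
  8099/tcp: 8099
map:
  - share:rw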
I added a log output with a string as a sanity check, so when my code runs I can be sure the latest code was pulled in after the rebuild (change the string before each REBUILD):
import logging
logger = logging.getLogger(__name__)

SANITY_CHECK = "new_code_pulled_23"
# <rest of code>
logger.info(f"{SANITY_CHECK = }")
Edit: Also, it is a good idea to configure HASS to open SSH port 22222 on the host. There is an add-on that can do this for you, or you can follow the debugging docs and create a USB drive to load SSH keys into HASS to enable host SSH on port 22222.
When you are installing/building your add-on, all the good debug logs are in the hassio_supervisor container.
Open a terminal and run: ssh root@HASS_IP -p 22222
Once logged into the HASS host, follow the supervisor logs: docker logs -f hassio_supervisor
Now you'll see useful debug logs when installing, rebuilding and ‘checking for updates’.
Edit 2: You can also use the HASS web UI to view the supervisor logs, but I find having the terminal output much better for debugging.