Check out my CI deployment pipeline

@bachya Could you share your Slack setup for HASS and CI/CD pipeline?

Sure thing! Reminder that I use Drone.io, so my example will reflect that.

One of the best parts about Drone is its robust plugin architecture; to that end, I use the Slack plugin. Build notifications operate in their own Drone pipeline (so I can ensure that it doesn’t run until all of the preceding pipelines run):

---
kind: pipeline
name: Build Notification

trigger:
  status:
    - success
    - failure

depends_on:
  - ESPHome
  - Fail2Ban
  - Home Assistant
  - NGINX
  - Shell Scripts

steps:
  - name: Send Notification
    image: plugins/slack
    settings:
      webhook:
        from_secret: slack_webhook
      channel:
        from_secret: slack_channel_name
      template: >
        {{#success build.status}}
          `{{repo.name}}/{{build.branch}}`: Build #{{build.number}} successful
        {{else}}
          `{{repo.name}}/{{build.branch}}`: Build #{{build.number}} failed
        {{/success}}
    when:
      status:
        - failure
        - success

Let’s break this down:

  • trigger: run this pipeline on either the success or the failure of whatever comes before it
  • depends_on: wait until these pipelines are done before executing
  • steps: the actual pipeline steps; there are several sub-parameters:
    • webhook: the webhook for the particular Slack app to send the message to (note that I store that value as a Drone secret so I can publicly share this configuration without exposing the webhook)
    • channel: the channel to send the message to (again a secret)
    • template: the message template to use (check the plugin documentation for all possible tokens)
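If you want richer messages, the plugin's template can also use tokens such as `{{build.link}}` and Slack's `<url|text>` link syntax. A variant of the template above with a clickable link to the build (a sketch based on the plugin docs, not my production config):

```yaml
      template: >
        {{#success build.status}}
          `{{repo.name}}/{{build.branch}}`: <{{build.link}}|Build #{{build.number}}> succeeded
        {{else}}
          `{{repo.name}}/{{build.branch}}`: <{{build.link}}|Build #{{build.number}}> failed
        {{/success}}
```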

Thanks for sharing those CI pipelines. I am still using a Travis CI pipeline, since Home Assistant has an integration that polls Travis CI, which makes it easy to watch for new successful builds. However, I would like to explore other solutions such as GitHub Actions or Drone CI.

What are your best practices for restarting Home Assistant based on the build result? I see the following options:

  • polling (e.g. the Travis CI sensor)
  • webhook (requires public access to Home Assistant)
  • SSH tunnelling (requires public access to SSH)
  • run a Drone CI / GitLab runner locally on the Home Assistant host

I try to avoid public access to my services wherever I can. Are there any other options?
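For the webhook option specifically, a minimal Home Assistant automation can react to a CI webhook and restart. This is only a sketch (the webhook_id is a placeholder I made up), and recent Home Assistant versions let you set local_only: true on webhook triggers, which keeps the endpoint unreachable from outside the local network:

```yaml
automation:
  - alias: "Restart after successful CI build"
    trigger:
      - platform: webhook
        webhook_id: ci-restart   # placeholder; use a long random id
        local_only: true         # reject requests from outside the LAN (newer HA versions)
    action:
      - service: homeassistant.restart
```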

With Drone CI you can run everything locally, so SSH tunnelling is not an issue, as it stays on the internal network.


Good point! I have added this option to my list. I am currently testing a local GitLab runner on my Home Assistant host. I will look into Drone CI next!

I have worked with Travis, GitLab, and now Drone CI. I am really impressed by how easy it was to set up Drone CI with a Docker and an SSH runner.

Deploying the new config with the SSH runner is a lot easier than working with Travis.

Inspired by the pipeline from @bachya I created a basic pipeline for my deployment:

---
kind: pipeline
name: testing

steps:
  - name: test
    image: homeassistant/home-assistant:latest
    pull: always
    commands:
      - mv travis_secrets.yaml secrets.yaml
      - python -m homeassistant --script check_config --config .
  - name: lint
    image: cytopia/yamllint:latest
    commands:
      - yamllint .
    failure: ignore

---
kind: pipeline
type: ssh
name: deploy

trigger:
  branch:
  - master
  event:
    exclude:
    - pull_request

depends_on:
  - testing

server:
  host: 192.168.178.5
  user: pi
  ssh_key:
    from_secret: SSH_KEY

steps:
- name: pull
  commands:
  - cd /media/usbstick/container/homeassistant/config
  - git pull

- name: prepare
  environment:
    TOKEN:
      from_secret: token
    URL:
      from_secret: url
  commands:
    - >
      curl -X POST -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' -d '{"entity_id": "script.prepare_restart"}' https://$URL:8123/api/services/script/turn_on
- name: restart
  commands:
  - cd /opt/configurations/docker-compose/homeassistant
  - docker-compose down
  - docker-compose up -d

See: https://github.com/TribuneX/home_assistant/blob/master/.drone.yml


How did you manage to handle changes made from the UI that affect the committed files?
For example, changes made in the UI to the Lovelace UI change lovelace.yaml behind the scenes, and that file is committed…

And what about custom components, Node-RED, etc. that are also managed from the Home Assistant UI?

Thanks for sharing this. Are you still using this, specifically the ESPHome Drone.io pipeline? For some reason the command does nothing for me. Does this still work for you, or is it outdated?

  - name: "Config Check: Latest"
    image: esphome/esphome:latest
    pull: always
    commands:
      - "for file in $(find /config -type f -name \"*.yaml\" -not \
         -name \"secrets.yaml\"); do esphome \"$file\" config; done"

The command below works, but I can't find a way to exclude the secrets.yaml file.

....
    commands:
      - for file in config/*.yaml ; do esphome "$file" config ; done

My .drone.yaml:

Still works for me. If you run that command directly on your repo (no Drone), what happens?

I have it working now. I had to modify the command to get it to work

for file in $(find config -maxdepth 1 -type f -name "*.yaml" -not -name "secrets.yaml"); do esphome "$file" config; done

I removed the backslashes and also added -maxdepth 1 because my config files are split into subfolders; only the main config files are in the root of the config/ folder.
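To sanity-check which files a loop like that actually picks up, you can run the same find predicate against a throwaway directory, with echo standing in for the real esphome call (the file names here are made up):

```shell
# Sanity-check the find predicate in a throwaway directory.
set -eu
dir=$(mktemp -d)
mkdir "$dir/packages"
touch "$dir/livingroom.yaml" "$dir/kitchen.yaml" "$dir/secrets.yaml" "$dir/packages/wifi.yaml"

names=""
# Same predicate as the working command: top level only, skip secrets.yaml
for file in $(find "$dir" -maxdepth 1 -type f -name "*.yaml" -not -name "secrets.yaml"); do
  names="$names $(basename "$file")"
  echo "would run: esphome $(basename "$file") config"
done
rm -rf "$dir"
```

Only the two top-level config files show up; secrets.yaml and anything in subfolders are skipped.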

https://github.com/theautomation/esphome/tree/main/config


I have also added a deploy step for the ESP devices: when the checks and tests succeed, it compiles and uploads. It would be nicer to upload only the modified files, but I can't find a good way.

  - name: compile and upload
    image: appleboy/drone-ssh
    settings:
      host: 192.168.1.99
      username: <username>
      password:
        from_secret: ssh_password
      port: 22
      script:
        - cd /docker-home-services/esphome/config/
        - for file in $(find -maxdepth 1 -type f -name "*.yaml" -not -name "secrets.yaml"); do docker exec prd-esphome-app esphome "$file" run --no-logs; done
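On uploading only the modified files: one option (a sketch, not from the pipeline above) is to ask git which top-level yaml files the last commit touched and loop over just those. Demonstrated here in a throwaway repo, with echo standing in for the real esphome call:

```shell
# Sketch: restrict the esphome run to files changed in the last commit.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
touch livingroom.yaml kitchen.yaml secrets.yaml
git add -A && git commit -qm "initial"
echo "# tweak" >> kitchen.yaml
git commit -qam "update kitchen"

# yaml files touched by the last commit, minus secrets.yaml
changed=$(git diff --name-only HEAD~1 HEAD -- '*.yaml' | grep -xv "secrets.yaml" || true)
for file in $changed; do
  echo "would run: esphome $file run --no-logs"
done
```

In the real step you would cd into the config directory and swap the echo for the docker exec … esphome … run call. Note this misses devices whose shared includes changed, so a periodic full run is still useful.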