Automations: Testing Framework

Sometimes I’m writing an automation for an event that doesn’t trigger very often. For instance, here in the UK some energy providers will occasionally pay you to cut your energy usage at certain times, maybe 1-3 times per month. I have an automation set up for this scenario. It’s somewhat complex: it has a fair few actions, some of which use templates, so it’s hard to know it will work as intended until the time actually comes for it to trigger.

I would love to see a YAML-based testing framework so that I can write a suite of tests covering all the different scenarios that might occur when the automation eventually runs. Writing complex automations TDD-style would be an absolute dream! Tests also have a very useful side effect: they deepen your understanding of the code itself, including why certain actions are included, in case you’ve forgotten by the next time you come to edit it.

A testing framework would essentially work something like this:

  • “When” and “And If” clauses can be tested this way, which currently can’t be exercised by running the automation manually
  • You can define multiple individual tests, each one self-contained, with no side effects that might taint the other tests
  • Actions would be “mocked”, meaning no action actually takes place when a test runs, but a record of performed actions is kept for assertions (see the condition-test sketch after the example suite)
  • Updating entity states records the new values for assertions but doesn’t affect the live values
  • Tests can be added to the automation YAML (a skeleton follows this list)
  • Every time the automation changes, including its tests, the tests run and you get near-instant feedback on whether your automation will work.
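
To illustrate the last two bullets, here’s a rough skeleton of where the tests might live. Nesting “tests:” at the top level of the automation’s YAML is just my assumption of how co-location could work, and the automation’s own details are elided:

alias: Octoplus saving session discharge
trigger: ...
condition: ...
action: ...
# Tests sit alongside the automation they cover
tests:
  - it: ...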

A simple test suite might look something like this:

tests:
  # Describe the test
  - it: >
      Tells the inverter to start discharging the solar 
      battery until the end of the saving session

    # Set up some vars
    variables:
      start_time: "2024-02-08T18:00:00Z"
      end_time: "2024-02-08T19:00:00Z"

    # Mock the state of some entities
    before: 
      state:
        binary_sensor:
          octopus_energy_a_xyz_octoplus_saving_sessions:
            current_joined_event_start: "{{ start_time }}"
            current_joined_event_end: "{{ end_time }}"

    # Describe the expected outcome
    steps:
      # Triggers the automation (defined in When clause)
      - state:
          binary_sensor:
            octopus_energy_a_xyz_octoplus_saving_sessions: "on"
      # Checks that the desired outcome is reached
      - assert:
          state:
            time:
              inverter_force_discharge_start_time: "{{ start_time }}"
              inverter_force_discharge_end_time: "{{ end_time }}" 

  # Another test
  - it: >
      Disables export diverters until the end of the saving 
      session
    variables:
      start_time: "2024-02-08T18:00:00Z"
      end_time: "2024-02-08T19:00:00Z"
    before: 
      state:
        binary_sensor:
          octopus_energy_a_xyz_octoplus_saving_sessions:
            current_joined_event_start: "{{ start_time }}"
            current_joined_event_end: "{{ end_time }}"
        select:
          myenergi_eddi_operating_mode: Normal
          myenergi_zappi_charge_mode: Eco+
    steps:
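      # Triggers the automation (defined in When clause)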
      - state:
          binary_sensor:
            octopus_energy_a_xyz_octoplus_saving_sessions: "on"
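      # Checks that the diverters are stopped during the session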
      - assert:
          state:
            select:
              myenergi_eddi_operating_mode: Stopped
              myenergi_zappi_charge_mode: Stopped
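      # Advances the mocked clock to the end of the session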
      - global:
          current_time: "{{ as_timestamp(end_time) }}"
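      # Checks that the diverters return to their previous modes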
      - assert:
          state:
            select:
              myenergi_eddi_operating_mode: Normal
              myenergi_zappi_charge_mode: Eco+
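
Conditions could be covered the same way. As a sketch, a third test might verify that the automation does nothing when no saving session has been joined. The “calls” assertion here is hypothetical: it would check the record of mocked actions mentioned above, and the entity behaviour is purely illustrative:

  # A condition test: the trigger fires, but the "And If" clause should fail
  - it: >
      Leaves the inverter and diverters alone when no saving
      session has been joined
    before:
      state:
        binary_sensor:
          octopus_energy_a_xyz_octoplus_saving_sessions:
            current_joined_event_start: null
            current_joined_event_end: null
    steps:
      # Fires the trigger
      - state:
          binary_sensor:
            octopus_energy_a_xyz_octoplus_saving_sessions: "on"
      # Asserts against the record of mocked actions: nothing was performed
      - assert:
          calls: []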

The interface would show you the test results automatically, perhaps via a button at the top next to “Traces”. Clicking it would take you to a screen where you can view the tests and any errors encountered. Apologies for the mockup being sparse; I ran out of time while editing it, and it’s awkward to capture due to the Shadow DOM, but I think you get the gist:

Love this idea. I was just searching to see if something like this existed, and came across this request.

I’m new to HA, but I have a developer/test-automation background (Java). I’ve started setting up a few small automations here and there while building up my catalogue of devices, but I’m currently refraining from building more complex ones for this very reason. The fear of having to maintain them in future, accidentally breaking a condition, and playing whack-a-mole to test and fix everything is all too real.

I like the initial idea of the YAML structure and the simple test-result layout to start with. It would be good to hear from someone on the HA team whether this is something that’s been considered before.