Setup components/platforms in testing

Hi all,

I’ve been trying to figure out how to effectively test my custom component and I’ve been hitting a wall for a while now.

My component implements a “room thermostat” that controls multiple other climate entities based on a room temperature sensor. (I decided to build my own component since generic_thermostat, climategroup, etc. didn’t fit my use case and I wanted to learn something new.)

I have a working proof of concept and am now going to write a more robust implementation, so I am only asking about best practices/tricks for testing.
I have set up my testing using pytest-home-assistant and it works very well; however, I am encountering the following issues:

  • How can I set up multiple entities (e.g. my custom room_thermostat component and generic_thermostats) in the same domain? async_setup_component() apparently can only be called once per domain.
  • Is it possible to preload the climate component and then set up my custom component without reloading the climate component? I’d like to replace service callbacks in the climate domain to track calls, and right now the callbacks are reset on async_setup_component(). (This is an issue, since I want to test service calls during setup.)
  • What is the best practice for tracking service calls? Are there any helpers/mocks?
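For concreteness, the kind of call tracking I have in mind looks roughly like this. It’s a toy sketch with a fake hass object; in real tests I’d expect to use the hass fixture from the pytest plugin, and all the names here are placeholders, not real HA APIs:

```python
import asyncio
from types import SimpleNamespace

def mock_service(hass, domain, service):
    """Register a recording handler so a test can inspect service calls."""
    calls = []

    async def _record(call):
        calls.append(call)

    hass.services[(domain, service)] = _record
    return calls

async def main():
    # Toy stand-in for hass with a bare service registry (assumption:
    # real tests would get a full hass from the pytest fixture).
    hass = SimpleNamespace(services={})
    calls = mock_service(hass, "climate", "set_temperature")

    # The component under test fires the service during setup...
    await hass.services[("climate", "set_temperature")](
        {"entity_id": "climate.bedroom", "temperature": 21.5}
    )

    # ...and the test asserts on the recorded calls afterwards.
    assert len(calls) == 1
    return calls

calls = asyncio.run(main())
```

If Home Assistant’s own test harness (or the pytest plugin) exposes a helper along these lines, e.g. something like async_mock_service in its test common module, that would be exactly what I’m looking for.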

I think most of my issues arise because my component is, on the one hand, derived from ClimateEntity and, on the other hand, has dependencies on other ClimateEntities.

Please let me know if what I am asking is clear enough; I am very new to HA development.
Thanks in advance!


This is an excellent question. I have a similar set of challenges and am also using pytest-home-assistant. Here’s what I’ve been able to do.

There are three types of tests I do: Unit Tests, System/Functional Tests, and Integration Tests.

Unit Tests are focused on testing a Python method in the integration (e.g. set_temperature). The first step is being able to set up the entity by passing config into it. The second is using mocks to intercept calls, either to provide specific return values/exceptions, or to verify in the test that the mock is called with the right parameters. The main purpose is to exercise all the code in the method AND make sure exceptions are well handled. My general workflow is to write these as I write new code.
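A stripped-down sketch of that workflow, using a made-up entity class (none of these names are from the real integration): the happy path verifies the mock was awaited with the right parameters, and the failure path forces the mock to raise so the exception-handling code is exercised.

```python
import asyncio
from unittest.mock import AsyncMock

class RoomThermostat:
    """Hypothetical entity whose set_temperature delegates to a service call."""

    def __init__(self, call_service):
        self._call_service = call_service
        self.last_error = None

    async def async_set_temperature(self, temperature):
        try:
            await self._call_service(
                "climate", "set_temperature", {"temperature": temperature}
            )
        except ConnectionError as err:
            # The exception path is exercised explicitly in a unit test.
            self.last_error = err

async def main():
    # Happy path: verify the mock was called with the right parameters.
    service = AsyncMock()
    stat = RoomThermostat(service)
    await stat.async_set_temperature(21.0)
    service.assert_awaited_once_with(
        "climate", "set_temperature", {"temperature": 21.0}
    )

    # Failure path: the mock raises, and the entity must handle it.
    service.side_effect = ConnectionError("device offline")
    await stat.async_set_temperature(22.0)
    return stat

stat = asyncio.run(main())
```

The same two-sided pattern (assert the call on success, inject an exception for the error branch) covers most of the code paths in a typical entity method.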

System Tests are focused on testing the integration end to end inside a running HA. This testing is not automated. I have some HA config set up and also a simulator: in my case I wrote an external simulator that acts like the Lennox S30. I run this within VS Code with a set of tests I execute manually; being in the debugger also helps. In this testing I use the Lovelace dashboard with the thermostat card to perform the end-to-end checks.
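The simulator idea can be boiled down to a tiny fake device server that the integration (or a test client) talks to over a socket, so nothing needs real hardware. The JSON protocol below is invented purely for the sketch and has nothing to do with the actual S30 protocol:

```python
import asyncio
import json

async def fake_device(reader, writer):
    # Toy protocol: read one JSON line, echo the setpoint back as state.
    req = json.loads(await reader.readline())
    resp = {"temperature": req.get("set_temperature", 20.0)}
    writer.write((json.dumps(resp) + "\n").encode())
    await writer.drain()
    writer.close()

async def main():
    # Port 0 lets the OS pick a free port, so tests never collide.
    server = await asyncio.start_server(fake_device, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # A client standing in for the integration talks to the simulator.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write((json.dumps({"set_temperature": 21.5}) + "\n").encode())
    await writer.drain()
    state = json.loads(await reader.readline())
    writer.close()

    server.close()
    await server.wait_closed()
    return state

state = asyncio.run(main())
```

A real simulator obviously carries much more state, but even this shape is enough to drive an integration end to end from a test or from a running HA instance.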

Lastly, I have an Integration Test, packaged as a docker container, that I use for release testing: deploying the component to make sure it deploys, isn’t missing dependencies, etc. I let it run for a day or two, typically against a real Lennox S30.

As I’ve gotten more diligent with unit tests, the latter two stages tend to find far fewer issues, so while they are manual, it’s not bad.

Examples of unit tests in this repo.

@PeteRage has excellent suggestions. I wrote up a blog post on testing custom integrations that might provide some additional resources: Building a Home Assistant Custom Component Part 2: Unit Testing and Continuous Integration - Automate The Things