My TeamTracker custom component calls an API and then uses the response to set the values of ~50 different attributes on the sensor, all of which vary depending on whether the game is in the future, in the past, or in progress. It handles lots of different sports, and there are a lot of variations that should be regression tested. That works out to something like 10 sports * 3 statuses * 50 attributes = 1500 expected values to validate. Too many to set up by hand.
Right now, I capture the JSON from the API into a file and use the YAML config to tell the sensor to read the file instead of calling the API. I then use an HA automation to export the values of all of the attributes (for all of the test sensors), and I do file compares of the exports against earlier runs to see if anything changed or broke.
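The compare step itself is nothing sophisticated; in Python it would look roughly like this (the export format shown here is just an illustration, my automation's actual output is formatted differently):

```python
import json
import sys
from pathlib import Path


def compare_exports(baseline_path: str, current_path: str) -> int:
    """Count attribute-level differences between two exported dumps.

    Both files are assumed to look like
    {"sensor.test_nfl_pre": {"attribute": value, ...}, ...}
    which is not necessarily the format my automation writes today.
    """
    baseline = json.loads(Path(baseline_path).read_text())
    current = json.loads(Path(current_path).read_text())

    diffs = 0
    for entity_id, attrs in baseline.items():
        for attr, expected in attrs.items():
            actual = current.get(entity_id, {}).get(attr)
            if actual != expected:
                print(f"{entity_id}.{attr}: expected {expected!r}, got {actual!r}")
                diffs += 1
    return diffs


if __name__ == "__main__":
    # usage: python compare_exports.py baseline.json current.json
    sys.exit(1 if compare_exports(sys.argv[1], sys.argv[2]) else 0)
```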
It’s fairly automated but not completely, and I’d like to use pytest if possible, but I can’t find any examples. Right now these test sensors are included in my HA config and are created automatically. I’m a C programmer at my core, so the whole Python test framework is new to me. I was able to get a base pytest in place that basically validates the sensor gets created, but there’s no way I could hand-enter all of my test cases and expected results for every attribute across all of the sport/status combinations.
Ideally, I’d have it read the YAML to set up the sensors, export the values of the attributes once they are set up to establish my baseline, and then have future runs compare their exports against that previously established baseline.
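To make the idea concrete, here's a rough sketch of the kind of test I'm picturing (not working code). It assumes pytest-homeassistant-custom-component for the `hass` fixture; all of the file names and entity ids are made up, and the YAML loading is hand-waved:

```python
"""Rough sketch of a baseline (snapshot-style) regression test."""
import json
from pathlib import Path

import pytest
import yaml
from homeassistant.setup import async_setup_component

BASELINE_DIR = Path(__file__).parent / "baselines"          # made-up location
SENSOR_YAML = Path(__file__).parent / "test_sensors.yaml"    # same YAML I use today

# One entry per test sensor / sport / game-status combination (names invented).
TEST_ENTITIES = [
    "sensor.test_nfl_pre",
    "sensor.test_nfl_in",
    "sensor.test_mlb_post",
    # ... ~30 combinations
]


@pytest.mark.parametrize("entity_id", TEST_ENTITIES)
async def test_attributes_match_baseline(hass, enable_custom_integrations, entity_id):
    """Set up the sensors from YAML and diff every attribute against a baseline file."""
    # enable_custom_integrations may or may not be needed so HA can find the
    # custom component -- I'm not sure of the details yet.
    config = {"sensor": yaml.safe_load(SENSOR_YAML.read_text())}
    assert await async_setup_component(hass, "sensor", config)
    await hass.async_block_till_done()

    state = hass.states.get(entity_id)
    assert state is not None, f"{entity_id} was not created"

    # Round-trip through JSON so datetimes etc. compare as plain strings.
    actual = json.loads(json.dumps(dict(state.attributes), default=str))

    baseline_file = BASELINE_DIR / f"{entity_id}.json"
    if not baseline_file.exists():
        # First run: record the baseline instead of failing.
        BASELINE_DIR.mkdir(exist_ok=True)
        baseline_file.write_text(json.dumps(actual, indent=2, sort_keys=True))
        pytest.skip(f"baseline written for {entity_id}")

    expected = json.loads(baseline_file.read_text())
    assert actual == expected
```

I gather this is basically snapshot testing, and there seem to be pytest plugins (syrupy, for example) that handle the baseline bookkeeping, but I haven't found a Home Assistant example that does it for a pile of sensor attributes.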
If this makes sense, are there any working examples?
If I am thinking about this wrong, I’m open to other suggestions.
Thanks!