Hi,
I’m developing my first integration (https://github.com/home-assistant/home-assistant/pull/26456). Everything is going well, but I can’t pass the CI quality gate because I don’t meet the coverage expectation (https://github.com/home-assistant/home-assistant/pull/26456/checks?check_run_id=234201629).
When we look at the coverage (https://codecov.io/gh/home-assistant/home-assistant/compare/ad9daa922b7826bb40cce144c0d4a348378c7a95...140236d30e2e2ae440bc929f34d7337901c68dee/diff), the uncovered parts deal with error management and logging.
I had a look at a couple of other integrations’ source code, but I didn’t find any (or many) tests that deal with this.
- Is it mandatory to test those errors/log messages?
- If yes, does someone have an example?
In my opinion you should test as many cases as you reasonably can, including any error situations.
Ideally you would create a test case that deliberately produces an error, for example by tweaking input parameters. If that’s not possible in your case, you can always mock certain classes or methods so they raise the desired exception. You will often see something like `with patch("some.class.or.method", side_effect=MyException)`, which raises the exception when the object is instantiated or the method is called. If you search for `side_effect` in the existing HA test cases you should find hundreds of examples.
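Here is a minimal, self-contained sketch of the `side_effect` pattern. It is not taken from any real integration: `fetch_temperature` and the URL are hypothetical stand-ins for whatever your component actually calls.

```python
import urllib.request
from unittest.mock import patch
from urllib.error import URLError


def fetch_temperature():
    """Code under test: return None instead of raising when the call fails."""
    try:
        with urllib.request.urlopen("http://example.com/temperature") as resp:
            return float(resp.read())
    except URLError:
        return None


def test_fetch_temperature_handles_network_error():
    # side_effect makes the mocked urlopen raise instead of returning,
    # which drives execution into the except branch and covers it.
    with patch("urllib.request.urlopen", side_effect=URLError("host unreachable")):
        assert fetch_temperature() is None
```

The same mechanism works with `patch.object` or with a dotted path into your integration’s module, as long as you patch the name where it is looked up at call time.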
I don’t think it’s necessary to test whether the message has been logged, but you should ensure that the system is in an expected state at the end of the test case, for example by checking that an entity is in a defined/expected state after the error occurred.
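As a hedged, self-contained sketch of that idea: `Sensor`, `Client`, `FetchError`, and `STATE_UNAVAILABLE` below are toy stand-ins defined locally, not HA classes. In a real HA test you would assert on something like `hass.states.get("sensor.my_sensor").state` instead.

```python
from unittest.mock import patch

STATE_UNAVAILABLE = "unavailable"  # stand-in for homeassistant.const.STATE_UNAVAILABLE


class FetchError(Exception):
    """Stand-in for whatever exception your client library raises."""


class Client:
    def fetch(self):
        return 21.5


class Sensor:
    """Toy entity: update() stores the value, errors mark it unavailable."""

    def __init__(self, client):
        self._client = client
        self.state = None

    def update(self):
        try:
            self.state = self._client.fetch()
        except FetchError:
            self.state = STATE_UNAVAILABLE


def test_sensor_unavailable_after_error():
    sensor = Sensor(Client())
    with patch.object(Client, "fetch", side_effect=FetchError("boom")):
        sensor.update()
    # The key assertion: after the error the entity ends up in a
    # defined state rather than keeping a stale value or crashing.
    assert sensor.state == STATE_UNAVAILABLE
```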