It would be really useful to have an abstraction layer between the UI (including automations) and the hardware.
For example, at the user-interface level I don’t care whether a sensor is a Z-Wave device, home-grown, a script, or whatever, just whether there’s movement or what the temperature is in a particular room. But when I’m adding a sensor, it’s all about the API.
There would be several advantages, including:
- You could exchange devices (across protocols) without having to alter the user interface
- If a device doesn’t quite behave as expected, it could be corrected in this layer. For example, a cover device whose open/close values are inverted could be fixed at this intermediate level (see the sketch after this list).
- Attributes of a device that are obtained in different ways could be coalesced, e.g. a sensor whose battery level has to be read via a separate command could still expose it as an ordinary attribute.
- You could create the UI before adding the devices…
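As a rough sketch of what that layer might look like (every name here is hypothetical, not an existing API), a thin adapter class could present a normalized device to the UI while hiding the quirks of whatever sits behind it:

```python
# Hypothetical sketch -- all names are illustrative, not an existing API.

class QuirkyZWaveCover:
    """Stand-in for a raw device whose open/close values are inverted
    and whose battery level needs a separate command."""

    def get_position(self):
        return 100  # reports 100 when fully CLOSED (inverted)

    def send_command(self, name):
        if name == "get_battery":
            return 87
        raise ValueError(name)


class Cover:
    """The normalized device the UI and automations talk to."""

    def __init__(self, raw, inverted=False):
        self._raw = raw            # Z-Wave node, script, whatever
        self._inverted = inverted  # device quirks get fixed in this layer

    @property
    def position(self):
        """0 = closed, 100 = open, regardless of the underlying protocol."""
        pos = self._raw.get_position()
        return 100 - pos if self._inverted else pos

    @property
    def battery(self):
        """Coalesced: use an inline attribute if the device has one,
        fall back to a separate command if it doesn't."""
        level = getattr(self._raw, "battery_level", None)
        return level if level is not None else self._raw.send_command("get_battery")


# Swapping the raw device (Z-Wave, Zigbee, a script) only changes this line;
# everything the UI sees through Cover stays the same.
cover = Cover(QuirkyZWaveCover(), inverted=True)
print(cover.position)  # 0 -- correctly reported as closed
print(cover.battery)   # 87
```

Exchanging the underlying device then only touches the adapter’s constructor arguments; the UI and automations keep talking to the same `Cover`.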
You could think of these as ‘virtual’ devices, similar to command-line devices, but the idea could be taken further. For example, the virtual device could be edited via the UI (sketched below).
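What would make that editing possible is keeping the virtual device’s mapping in plain data rather than code. Again a sketch only, with made-up names:

```python
# Sketch only -- a virtual device defined as editable data rather than code,
# so a settings page in the UI could change the mapping at runtime.

virtual_devices = {
    "bedroom_cover": {
        "backend": "zwave.node_12",  # swap the backend, UI untouched
        "inverted": True,            # quirk fix, togglable from the UI
        "battery_command": "get_battery",
    },
}

def edit_virtual_device(name, **changes):
    """What an 'edit device' form in the UI would call."""
    virtual_devices[name].update(changes)

# e.g. the user replaces the Z-Wave cover with a Zigbee one:
edit_virtual_device("bedroom_cover", backend="zigbee.cover_3", inverted=False)
```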