My thought is to have a wake word for each assist device (per area) and be able to just say “turn on the lights”.
For example, “Hey kitchen, turn on the lights” would turn on the lights for the kitchen area only, while entities outside that area could still be controlled by their name or alias.
This is similar to how Google Assistant speakers and hubs work.
I’d prefer all my voice devices in different rooms to have the same wake word, but “know” where they are. So if I say “Hey Jarvis, lights on” in my bedroom, it turns on the bedroom lights (i.e. any entities also bound to the same room), while in my basement the same command would turn on the lights there.
Currently, you either need clunky naming (“Hey Jarvis, garage heater on”, where “garage heater” is the entity name, even though Home Assistant should ideally recognize that the command “heater on” coming from the “garage” assist device should target that entity) or you end up with overlapping entity names.
One way I could see to do this would be to have entity exposure and aliases per pipeline: you could define a pipeline for each room with more general aliases for that room, while retaining specific entity names. Or, likely more complex, do more advanced pattern recognition on the sentences, perhaps appending the location to spoken entity names or something of that nature.
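As a stopgap today, something close to this can be approximated with Home Assistant’s custom sentences feature, hardcoding the target entity per phrase. A sketch, assuming a `switch.garage_heater` entity exists (the intent name `GarageHeaterOn` is made up, and note the limitation: custom sentences are global, not per-device, so this only works while the phrases don’t collide between rooms). In `config/custom_sentences/en/garage.yaml`:

```yaml
# Custom sentence phrases mapped to a hypothetical intent.
# These are matched globally, not scoped to one assist device.
language: "en"
intents:
  GarageHeaterOn:
    data:
      - sentences:
          - "heater on"
          - "turn on the heater"
```

And the matching intent handler in `configuration.yaml`:

```yaml
# Runs when the GarageHeaterOn intent above is matched.
intent_script:
  GarageHeaterOn:
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.garage_heater
    speech:
      text: "Turning on the garage heater"
```

This avoids saying the full entity name, but it scales poorly: each room-specific phrase has to be wired up by hand, which is exactly why per-pipeline exposure or area awareness would be nicer.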
The ideal solution would seem to be allowing different entities to be exposed to different assist pipelines, and then allowing fuzzier entity matching. But a workaround like this is fine for me short-term.