Disable "All Lights" actions?

Is there a configuration or ability to restrict Home Assistant’s ability to trigger every light in the home? Or at least when triggered from a voice assistant?

I would like to use voice to trigger lights more often, or to try more advanced LLM-based assistants, but "all lights" triggers somewhat often. It's like a 3% chance, but it's often enough that I don't want to try anything new, because I don't want to turn on the lights in someone's bedroom when they're trying to sleep, or overcorrect by then shutting off the lights that someone is using.

Essentially, it would be great if there was a safety net where I could just prevent "Turn off the lights" (implying the room I'm in) from being heard as "Turn off all the lights" or something like that, by just making that action impossible.

I don’t have THAT answer, but. . .

Do you have lights assigned to rooms/areas?
Do you use room/area names when issuing verbal lighting commands?
Do your lights (or light groups) have memorable unique names?

I’m asking because those practices have made my family’s verbal command of devices pretty straightforward, and they let me do my frequent learning/tweaking without triggering unwanted actions. (I feed HA stuff to HomeKit and have Siri do our bidding, but it should be the same for HA voice.)

They are assigned to rooms and areas.
I do not use the room names to turn on the light unless I’m not in that room.
They have memorable unique names, but I will never remember the unique memorable name I gave them. It’s just not in my nature.

I get that being more specific would avoid vague commands accidentally sounding like they should affect everything, but what I really need is the ability to be vague (or wrong) without it irritating everyone else in the house. The whole concept behind LLMs is sort of to say “Turn on the … light up whatever I called the place I do work at” and have it work even if I don’t remember whether I named it Workbench or Workshop or Workstation or TinkerTown.

Especially since I never, ever want to turn on every light, and that’s sort of the only problem with being vague at all.

Sounds to me like you need to be tracked within your home so you can filter your commands only to affect the room in which you’re currently located.

Create a voice automation that is triggered by the phrase “Turn on all the lights” but does nothing. Or you can have it turn off the lights in the room instead.
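
Roughly like this, as a sketch (the trigger phrases and the reply are just placeholders to adapt; as far as I know sentence triggers are matched before the built-in intents):

automation:
  - alias: "Catch 'all lights' voice commands"
    trigger:
      # Sentence trigger: fires when Assist hears one of these phrases
      - platform: conversation
        command:
          - "turn on all [the] lights"
          - "turn off all [the] lights"
    action:
      # Answer without touching any lights
      - set_conversation_response: "Nope, not doing every light."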

The voice commands are already processed by a listener that knows what room it’s in. The problem comes up when it mishears me, or when the LLM goes rogue.

Using the Sentences parser, it works for exact sentences, but the trouble is predicting how it might mishear me in the future. It also seems to hear rooms that don’t exist (e.g., “Gym room” vs. “Jim’s Room”, or whatever) and default to all lights again.

It’s a great idea, and I’ll certainly set up catches for the most common “all lights” sentences I can think of. I do think the right solution, though, is to disable light.turn_on when there’s no target: fix the problem at the source, versus playing whack-a-mole with all possible sources.

For the “or when the LLM goes rogue” part: that’s very much a context-engineering problem. You can make it not do that with careful prompting. It does that because it doesn’t know what it was supposed to do.

For the misheard problem, that’s @mchk’s note above: set it up so that if it does trigger, it does nothing.

That’s exactly what I’m getting at. How do you stop the LLM from leaving the toaster on? Well, it has no ability to turn the toaster on, so there’s no risk.

So instead of trying to find every way the LLM or Sentence Parser might accidentally hear a nonsense phrase that’s close enough to fall back to the “default” of every light, just remove the ability to trigger every light so there’s no risk.

I’m in a position where playing around with it is unusually costly since there’s always someone trying to sleep or trying to work at my place, but I do think a ‘hardwired’ guardrail could make a lot of sense for others as well.

Edit: I think I’ll try only exposing some of the lights. It doesn’t stop the non-LLM parser, but that one is less likely to mishear (or misinterpret) what I’m saying.

No, you explain to the LLM what it is and isn’t allowed to do. (Read Friday’s Party. Yes, it’s possible.)

You can tell it that it should never default to all lights and instead do x. If you are prompting accurately this is entirely possible.
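
For example, something along these lines in the agent’s instructions (the wording is illustrative only, and you’ll want to fit it to your own prompt):

Rules for controlling devices:
- Never target all lights, a whole floor, or the whole home in a single light command.
- If no area is given, act only on the area the voice satellite is in.
- If you cannot tell which specific lights are meant, ask for clarification instead of acting.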

It’s pulling from a list of tools, right? I tried asking the LLM what tools it had access to, but it ended up triggering everything, not just the lights. What’s the tool that it should avoid? Like, if you were to specifically ask it to turn on every light, does it run light.turn_on on every entity, or does it run light.turn_on once, with “all” as a target? I’m asking to know how to word the prompt to avoid the problem. “Never run light.turn_on without a target” or “Never run light.turn_on with ALL as a target”, for example.

I’d answer directly but THAT answer WHOLLY depends on your existing prompt and tools.

Have you added any of your own prompts or are you bog standard ‘you are a helpful assistant’?

So first, realistic expectations on an LLM (because a lot of folks think it’s a magic unicorn): it only does what you tell it. The tools you’re referring to tell it what it can do, but give ABSOLUTELY ZERO context about what to do with them. That’s your responsibility.

Some verbosely describe every action. I prefer ground rules.

If you have no prompt, then it’s a lift to build your ruleset. If you have an existing prompt, you may not have it clear enough for the LLM to understand, or you may accidentally override it later. There’s not one answer.

So what’s your current approach?

I have a few, because I’m testing out different scenarios. Like, one model can (potentially) read camera images and tell me if there’s a package sitting in the rain. A different one has tool use. Wish it could do both, but whatever. Anyway, the tool one is mostly prompted to respond with short responses, not to use markup or text emphasis, be truthful, but also be a jerk. It’s just more entertaining than the sycophantic ones like ChatGPT and the like. Then there’s a list of devices in that Jinja2 template format so it knows the status of some things without having to look them up. I haven’t found a way to intercept that prompt, and just typing the same data into the developer template editor says “devices is undefined”, so I don’t really know what’s in it. This is after all the ‘personality’ system prompts.

The current time and date is {{ (as_timestamp(now()) | timestamp_custom("%I:%M %p on %A %B %d, %Y", True, "")) }}

Devices:
{% for device in devices | selectattr('area_id', 'none'): %}
{{ device.entity_id }} '{{ device.name }}' = {{ device.state }}{{ ([""] + device.attributes) | join(";") }}
{% endfor %}
{% for area in devices | rejectattr('area_id', 'none') | groupby('area_name') %}
## Area: {{ area.grouper }}
{% for device in area.list %}
{{ device.entity_id }} '{{ device.name }}' = {{ device.state }};{{ device.attributes | join(";") }}
{% endfor %}
{% endfor %}
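
The closest I can get in the developer template editor is a rough stand-in (lights only, built from the standard states object and area_name(); whether it matches what the integration actually injects is a guess):

{# hypothetical approximation of the 'devices' listing, lights only #}
{% for light in states.light %}
{{ light.entity_id }} '{{ light.name }}' = {{ light.state }} (area: {{ area_name(light.entity_id) or 'none' }})
{% endfor %}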

The tools themselves are coming from somewhere else though, and I don’t think those change, so we could just assume the prompt is a new one “You are a helpful assistant”, and whatever prompt would ‘disable’ the all_lights behavior could just be modified (if it even needs modification) for the larger prompt I plan on using for non-image tasks.

That all said (and I say all that because I am interested in the inner workings of the LLM), we are getting wildly off topic. I may have gone a little too far into my specific situation, which invited creative solutions to my problem in lieu of turning off the thing that’s causing all my problems. It’s hard, because people are truly trying to help, and I appreciate it, but my first stop wasn’t asking for help; it was trying to solve this problem, and I’ve been at it for a while. I’ve boiled it down to: if I could disable light.turn_on for the whole house, I could try this and that, and get a little looser with the LLM, give it fewer guardrails (because a hard-coded guardrail would be in place), without having to constantly tiptoe around and maintain this one issue that really bugs my roommates.

Is there (and I’m asking “the room”, not anyone specifically), like, a way to override device_action.py itself, not just the specific sentence that most recently caused device_action.py to misbehave (so to speak)? I really would like to solve the root of the problem once rather than try to patch every leak. My next step, if it isn’t possible (and it very well may not be, at least not in a way that doesn’t break future upgrades), is just to unhook their specific bedroom lights from the network. I’d like them to be able to say “Turn off my lights” and have it work, but realistically they don’t do that now, so it’s not like they’ll be missing anything.

For that specific ask, no, and even if you could, it’s probably unwise and would have adverse consequences. That’s why everyone is offering alternatives…

No, bad idea, do it differently basically. You’re going to have to do it the hard way. There’s no magic bullet here.

One thing is for certain: there’s no, like, obvious config line that I’m just missing.

homeassistant:
  lights:
    allow_empty_entity_trigger: false
    solve_all_grant's_problems: true

The exposed entities list (Settings > Voice assistants > Expose) is where you can add or remove what devices the LLM can see/use… the less you have there, the better your tool calling will work.

Keep it limited to just the things you will actually use with voice.
And in your case, remove the lights you don’t want to accidentally get triggered.