In short, what I’m trying to achieve is a way to identify if a state/attribute change came from a ZigBee switch, or if it was from an external source such as the dimmer on the wall, or Google Assistant.
What I have now is an MQTT event that listens to my ZigBee switch and then does some automation stuff with it, such as changing the light scenes.
What I want to do though is have some other automations that say, if the lights in the living room change then do X. The key, though, is that these automations shouldn’t run if the light changed because the ZigBee switch was pressed.
Use a debug node to look at the triggering device’s complete message. In that message, you’ll find a context value that might be null.
I use this to determine who or what triggered a Z-Wave entity. If I control something from the UI, it sticks my user id in the context value. If I touch a light switch on the wall, the system doesn’t know who I am, so the value is null.
Use this value as a condition in a switch node to send the message down whatever path you want.
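For example, in a Node-RED function node that check could look roughly like the sketch below. The msg.data.new_state path is an assumption based on the events: state node output, so confirm it with a debug node first, and msg.changeSource is just a made-up property name:

```javascript
// Rough sketch: classify a state change by whether a user id is attached.
// Assumes the new state object (with its context) is at msg.data.new_state;
// confirm the actual path with a debug node in your own flow.
const newState = msg.data && msg.data.new_state;
const userId = newState && newState.context ? newState.context.user_id : undefined;

if (userId == null) {
    // No user attached: the change came from an integration,
    // e.g. the physical switch or dimmer itself.
    msg.changeSource = "device";
} else {
    // A user id is present: HA UI, Google Home, Alexa, etc.
    msg.changeSource = "user";
}
return msg;
```

You can then branch on msg.changeSource with a switch node instead of digging into the context path in every flow.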
The issue with this is I rarely use the UI. I’ve got a ZigBee battery switch that I use in the living room to choose a preset scene I’ve made for Bright, Relax and Movies.
The majority of other interactions will come from either the dimmer itself or probably my Google Home.
This is hardly possible, afaik.
My solution for this (not ideal, but the most reliable I’ve found for my use case):
create a template entity (a template light in your case) which delegates all actions (turn_on, turn_off, etc.) to the original entity.
in addition, on every action, publish an MQTT message to a special topic.
in Node-RED you can listen to that MQTT topic and do the extra stuff you want to do.
So, when you have two switches and you want Switch_1 just to control the light and Switch_2 to do some extra stuff, connect Switch_1 to the original light and Switch_2 to the template light. Or in the case of Google Home, let it control the template light instead of the original light.
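For the Node-RED side of that approach (listening on the special topic), a function node behind an mqtt in node could be as simple as the sketch below. The topic name and payload strings are placeholders, not anything Home Assistant defines:

```javascript
// Sketch of the "extra stuff" branch, fed by an mqtt in node subscribed to a
// made-up topic like home/template_light/living_room/action.
// Assumes the template light publishes a plain string such as "turn_on".
const action = msg.payload ? msg.payload.toString() : "";

if (action === "turn_on") {
    // e.g. kick off your scene logic, start timers, etc.
    msg.extra = "run_turn_on_side_effects";
} else if (action === "turn_off") {
    msg.extra = "run_turn_off_side_effects";
}
return msg;
```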
When you press a Zigbee switch, what information do you see in Node-RED under the old_state and new_state context for the triggered device?
You can use the information in the context to store that data in flow, msg, or global variables, and then use it to build the logic to handle your scenario, no?
Every time my temperature sensor changes state, I get this information. The context shows that no user triggered the state change, which tells me that it was either a manual update (like touching a switch) or it was automated by a non-user…etc.
There’s got to be a way for you to identify the “type” of trigger (e.g. physically pressing the switch) by looking at the msg data for unique information.
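As a rough sketch of that idea in a function node (the flow-context key last_changes and the msg.data.new_state path are my assumptions, not anything fixed):

```javascript
// Remember who (or what) last changed this entity, keyed by entity id.
// Assumes msg.topic carries the entity id and the context lives on msg.data.new_state.
const entityId = msg.topic;
const newState = msg.data && msg.data.new_state;
const userId = newState && newState.context ? newState.context.user_id : null;

const lastChanges = flow.get("last_changes") || {};
lastChanges[entityId] = {
    userId: userId,      // null means no user triggered the change
    at: Date.now()
};
flow.set("last_changes", lastChanges);
return msg;
```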
So this is sort of what I do. It works pretty well overall but there are some issues of note.
First thing to note is that while this technique does allow you to mostly differentiate between someone pressing the button on the physical device and HA/GHome turning on the light, that’s all it can do. Basically you either get null or not null. If the user_id is null then you know that the integration initiated this state change and thus it probably came from the physical device (more on this in a sec). If the user_id is filled in then it definitely didn’t come from the physical device. But there’s no real way to tell whether it came from someone using the HA UI, Google Home, Alexa, etc., if that’s what you need to do. Everything uses the same user IDs.
Second thing is that dealing with restarts is a battle. Restoring the state of the entity after a restart counts as a state change, and user_id is set to null for that one too. This is why I said “probably” above, as you have to contend with these restore-after-restart events. I try to filter those out by removing events where old_state is null, but I haven’t really found a perfect solution here; this just works “well enough” for me.
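A minimal version of that filter, assuming the old state arrives at msg.data.old_state in the event message, would be something like:

```javascript
// Drop state-restore events that fire right after a Home Assistant restart.
// Assumption: on restore, old_state is missing/null in the event data.
const oldState = msg.data && msg.data.old_state;
if (!oldState) {
    return null;  // swallow the message so nothing downstream fires
}
return msg;
```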
Third thing is on Google Home. Although it isn’t easy to differentiate a Google Home-driven state change from an HA-driven one, there is a good way to tell when someone interacted with a device from Google Home. Any time someone interacts with HA from Google Home, an event is fired with the type google_assistant_command. You can drop in an events: all node to listen for this event and parse its content to see what device they interacted with; you’ll find it in msg.payload.entity_id. I assume there’s probably something similar for Alexa if you care, but I don’t use Alexa so I don’t know what that is.
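For example, a function node wired after that events: all node could stash the touched entity for later use; the flow-context key names here are made up:

```javascript
// Record the last entity someone poked from Google Home.
// Per the post above, the entity id lives at msg.payload.entity_id.
const entityId = msg.payload && msg.payload.entity_id;
if (entityId) {
    flow.set("last_google_home_entity", entityId);
    flow.set("last_google_home_at", Date.now());
}
return msg;
```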
Final thing I’ll note is just from some personal experience here. I’m going to throw out a guess and say you’re trying to do something similar to what I did, which is differentiate between when automations turned something on and when human beings did, because when a human being does something you want it to override the automation. If so, one of the easiest hacks I found is with the actual numbers. For instance, every UI that displays light brightness to people displays it in percentages, but the actual state of lights is captured in brightness, which is a number from 0 to 255. That means a human being can only ever land on roughly 100 of those 255 values (100% = 255, 99% = 252; 253 and 254 are impossible from a percentage-based UI). So if you have your automation always use a number that the UI will never pick, you can differentiate between the two. And it’s not like any human being is going to notice the difference between brightness 254 and 255.
It looks like you’re doing more than just lights, so this trick won’t always work (switches, for instance, only have two states and UIs definitely present both). But for lights in particular it’s a quick and easy way to separate the source as human or not-human.
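A sketch of that check in a function node, assuming your automations always use 254 as the marker value and the light’s attributes show up at msg.data.new_state.attributes:

```javascript
// If brightness is exactly 254, assume an automation set it, since a
// percentage-based UI can only produce 252 (99%) or 255 (100%) near the top.
const newState = msg.data && msg.data.new_state;
const attrs = newState ? newState.attributes : null;
const brightness = attrs ? attrs.brightness : null;

msg.setByAutomation = (brightness === 254);
return msg;
```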
One of the challenges I was having was Z-Wave light automation getting in the way of manual override. There are a few things I haven’t ironed out that I’d like to have in my setup.
Automated lights. (done.)
Override the automations by pressing a switch. (done.)
Re-enable the automation? Can’t figure out the logic on this such that it works with our habits. HA can’t read my mind, unfortunately.
Recording and comparing the programmed automations against human adjustments, so the system can machine-learn and adapt the automation automatically based on our response to the initial base automation. If the light turns on at X time at X intensity, and we repeatedly reduce the intensity manually, the automation would adopt the new setting based on the likelihood that it would happen again.
Any manual override gets 20 minutes minimum (whereas normally my presence-based stuff gets 5 minutes before lack of detected activity turns them back off)
After 20 minutes, resume looking for activity in the room. The override persists as long as activity is detected.
After 2 hours, turn off the override even if there’s still activity. This is long enough that conditions have probably changed enough to warrant recalculation.
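A rough sketch of that bookkeeping in a function node, assuming something elsewhere in the flow stamps override_started_at into flow context when a manual change is detected and keeps a motion_active flag up to date (both names are hypothetical):

```javascript
// Decide whether a manual override is still in force.
const OVERRIDE_MIN_MS = 20 * 60 * 1000;       // every override holds at least 20 minutes
const OVERRIDE_MAX_MS = 2 * 60 * 60 * 1000;   // hard cap: 2 hours, then recalculate

const startedAt = flow.get("override_started_at") || 0;
const motionActive = flow.get("motion_active") === true;
const age = Date.now() - startedAt;

if (age < OVERRIDE_MIN_MS) {
    msg.overrideActive = true;                // inside the guaranteed window
} else if (age < OVERRIDE_MAX_MS && motionActive) {
    msg.overrideActive = true;                // activity keeps it alive, up to 2 hours
} else {
    msg.overrideActive = false;               // expired; automations take over again
}
return msg;
```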
Not going to pretend it’s perfect, but it does seem to work pretty well in my house. Or at least frustration with the lights has largely abated and it gets it right enough of the time that it can be considered helpful. I still write a mental bug report every time I hear “Ok Google, turn on the lights”, but it occurs at an acceptable rate now.
Well that sounds cool but probably not for me. I mean, Google Nest has invested a fortune in automating my thermostat that way, and after a year of battling with it we finally just turned off all those smarts and set the schedule manually. The number of manual adjustments has dropped dramatically since then and user happiness has increased accordingly.
If someone else is willing to invest the dev time in something like that, I’d try it out. But I’ll probably stick with manual adjustments in my automations otherwise, as it seems like a lot of work with unclear benefits to me.