User Context for Each Device and UI Component (Lovelace), Tri-State Buttons (in the Context of Presence and AI)

My comment is an observation and highlights what I feel is missing in HA (Home Assistant) in the context of AI.

Problem Description:
Users need control. Both radars and AI work great 90% of the time; however, there are issues with priorities and settings:

  • User-set brightness should take precedence
  • There should be an easily accessible option for setting, for example, brightness and color for automatic light activation
  • There is no simple control option (tri-state button: auto/on/off)
  • To use two instances of the Lovelace component referring to the same devices, you need to perform gymnastics with virtual devices, e.g., MQTT
  • This causes a significant increase in complexity and time required for management

Situation Description:

Case 1 - AI:
After the release of Llama 3, I completely outsourced control of heating and lighting to the model. It works like this:

  • The model receives data, for example, from a weather station
  • It receives descriptions of each room
  • It receives user preferences
  • It receives information about the room (presence, CO2, temperature)
  • The model creates a description of the current situation
  • Based on this data, the model decides what temperature to set, what brightness and color to use, or whether to turn things off entirely (a sketch of this pipeline follows below)
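
A minimal, standalone sketch of that pipeline, assuming the model runs locally behind an Ollama-style endpoint; the entity values, prompt format, and expected JSON reply are illustrative assumptions, not my exact implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local Llama 3 endpoint

def build_prompt(weather: dict, room: dict, preferences: dict, sensors: dict) -> str:
    """Compose the situation description the model reasons about."""
    return (
        "You control heating and lighting for one room.\n"
        f"Weather station: {json.dumps(weather)}\n"
        f"Room description: {json.dumps(room)}\n"
        f"User preferences: {json.dumps(preferences)}\n"
        f"Room sensors (presence/CO2/temperature): {json.dumps(sensors)}\n"
        'Reply only with JSON like {"temperature": 21.0, "brightness": 120, '
        '"color_temp": 2700, "lights_on": true}.'
    )

def decide(weather: dict, room: dict, preferences: dict, sensors: dict) -> dict:
    """Send the prompt to the model and parse the decision it returns."""
    payload = json.dumps({
        "model": "llama3",
        "prompt": build_prompt(weather, room, preferences, sensors),
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"]
    return json.loads(answer)  # assumes the model honours the JSON-only instruction

if __name__ == "__main__":
    print(decide(
        weather={"outside_temp": 4.2, "cloud_cover": 0.8},
        room={"name": "kitchen", "description": "south-facing, open plan"},
        preferences={"evening": "dim and warm light, 21 C"},
        sensors={"presence": True, "co2": 750, "temperature": 20.1},
    ))
```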

Case 2 - Presence:
It works like this:

  • A series of radars covers the entire house, creating a shared Cartesian space
  • Detecting presence in, for example, the kitchen turns on the light
  • Users are frustrated because the light settings are fixed, there are 4 different options, and the light is controlled from various sources (Lovelace, wall switches, etc.); a sketch of the hard-coded version follows below
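
To make the frustration concrete, here is a stripped-down, standalone sketch of such a radar automation (hypothetical helper functions and entity names, not actual Home Assistant code): brightness and color are hard-coded, so anything a user sets from Lovelace or a wall switch is overwritten on the next trigger:

```python
HARDCODED = {"brightness": 255, "color_temp": 4000}  # the settings everyone is stuck with

def on_presence_changed(zone: str, present: bool) -> None:
    """Called whenever the merged radar space reports a zone change."""
    if zone != "kitchen":
        return
    if present:
        # Ignores whatever the user set last time; this is the annoying 10%.
        turn_on_light("light.kitchen", **HARDCODED)
    else:
        turn_off_light("light.kitchen")

def turn_on_light(entity_id: str, brightness: int, color_temp: int) -> None:
    print(f"{entity_id} -> on, brightness={brightness}, color_temp={color_temp}")

def turn_off_light(entity_id: str) -> None:
    print(f"{entity_id} -> off")

if __name__ == "__main__":
    on_presence_changed("kitchen", present=True)
```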

In both cases it works great 90% of the time, but in the remaining 10% it is annoying because of the lack of user control. It is also impossible to train the AI, because there is no context for the data: is a change temporary, or is it something I prefer in a specific situation?

Why We Need It:

  • If we want to leverage AI
  • To easily override settings for, for example, automatic lighting
  • To not annoy our wives

What We Require:

  • Tri-state button auto/on/off
  • User context assigned to each input instance (Lovelace component/physical device/entity).

Such context should be sent along with the state, so that we not only know which user turned what on, but also what it means for the system.

It may look like this:

{
  state: …,
  data: …,
  context: {
    // user-defined, for example:
    prio: 1,
    target: 'kitchen_radar_trigger'
  }
}
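
A minimal, standalone sketch of what the system could do with such a context once it arrives with the user action (plain Python, with hypothetical helpers, field names, and entity names): route the change to the radar trigger's stored settings instead of actuating the lamp directly:

```python
from dataclasses import dataclass, field

# "What to do when the radar fires" settings, keyed by trigger name.
radar_profiles: dict = {}

@dataclass
class UserAction:
    state: str                                    # e.g. "on"
    data: dict                                    # e.g. {"brightness": 80, "color_temp": 2700}
    context: dict = field(default_factory=dict)   # e.g. {"prio": 1, "target": "kitchen_radar_trigger"}

def handle_action(action: UserAction) -> None:
    """Route a user action based on the context attached to it."""
    target = action.context.get("target")
    if target:
        # Action came from the "shadow" card: store the settings, don't actuate.
        radar_profiles[target] = {"state": action.state, **action.data}
        print(f"stored profile for {target}: {radar_profiles[target]}")
    else:
        # Ordinary card/switch: act on the lamp immediately.
        apply_to_light("light.kitchen", action.state, action.data)

def apply_to_light(entity_id: str, state: str, data: dict) -> None:
    print(f"{entity_id} -> {state} {data}")

# The normal card controls the lamp directly...
handle_action(UserAction("on", {"brightness": 255}))
# ...while the same card with a context only updates what the radar will use later.
handle_action(UserAction("on", {"brightness": 80, "color_temp": 2700},
                         {"prio": 1, "target": "kitchen_radar_trigger"}))
```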

An example of how I think it should be manageable: on the left is a normal Lovelace control; on the right is exactly the same device, except that its settings do not trigger the lamp; instead, they are stored and used for the light when the radar detects presence.

Yes, there is:

If you can work out how to give control of your house to an LLM, you should be able to work out how to override it. Probably all you are missing in your automations is context. See: How to use context

Yes, I can even create 3 different buttons, or even write a custom component.
But if we think about the future, we literally need a tri-state button as a built-in component.
There have been requests for it for years.
But try explaining to my wife, for example, what she has to do instead of just adding an entity to a dashboard.
Good luck.

Same with context.

However, now that we can easily integrate AI and other things that were not possible before, it is really needed.

About context: you don't get what I am talking about. Not automation context, but context that is sent with the user action and that can indicate which component was used (check my screenshot: in both cases it is the same lamp, but two different use cases).

Who said anything about buttons?

Use an input select for more than binary options, not buttons. You don’t change the selection, nor does your partner. Automations do.

And you don't have to explain context to your partner. If she uses the physical light switch, then context in an automation is used to put your input select into “Manual/Override” mode. She does not need to know anything except that she has override control via the physical switch.
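
A minimal, standalone sketch of that pattern (plain Python with hypothetical helpers, not actual automation YAML), assuming the commonly used interpretation of context: a change triggered by an automation carries a parent_id, a change made from the dashboard carries a user_id, and a change coming from a physical switch carries neither:

```python
def classify_change(context: dict) -> str:
    """Guess where a light state change came from, based on its context."""
    if context.get("parent_id"):
        return "automation"
    if context.get("user_id"):
        return "dashboard"
    return "physical_switch"

def on_light_changed(context: dict) -> None:
    """Flip the mode selector to override when a human changed the light."""
    if classify_change(context) in ("dashboard", "physical_switch"):
        set_select("input_select.kitchen_light_mode", "Manual/Override")

def set_select(entity_id: str, option: str) -> None:
    print(f"{entity_id} -> {option}")

# A change with no user_id and no parent_id is treated as the wall switch:
on_light_changed({"id": "abc123", "parent_id": None, "user_id": None})
```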

In short, everything you have asked for is available now. You just need to implement it.

It seems to me that what you are really asking is for the developers to fix your flawed LLM automation implementation for you.

no.

For example, this is how my physical buttons work:

  • 1 click: on/off
  • 2 clicks: toggle full brightness
  • hold for 2 secs: toggle automatic mode
  • hold for more than 2 secs: change brightness

Each button covers 4 devices (a sketch of the mapping follows below).
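
Roughly, the dispatch looks like this (a standalone sketch with hypothetical helpers and entity names; the real thing lives in an automation/Node-RED flow):

```python
DEVICES = ["light.kitchen_1", "light.kitchen_2", "light.kitchen_3", "light.kitchen_4"]

def handle_button_event(event: str) -> None:
    """Map one physical-button gesture to an action on all 4 lamps."""
    actions = {
        "single":    lambda: [toggle(d) for d in DEVICES],               # 1 click: on/off
        "double":    lambda: [set_brightness(d, 255) for d in DEVICES],  # 2 clicks: full brightness
        "hold_2s":   lambda: toggle_auto_mode(),                         # hold 2 s: auto mode
        "hold_long": lambda: [step_brightness(d) for d in DEVICES],      # longer hold: dim/brighten
    }
    actions.get(event, lambda: None)()

def toggle(entity_id: str) -> None:
    print(f"toggle {entity_id}")

def set_brightness(entity_id: str, value: int) -> None:
    print(f"{entity_id} brightness={value}")

def step_brightness(entity_id: str) -> None:
    print(f"{entity_id} brightness ramp")

def toggle_auto_mode() -> None:
    print("input_boolean.kitchen_auto -> toggle")

handle_button_event("hold_2s")
```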

If you want to override the settings for a radar-triggered light, you need to go into the automation/Node-RED flow and change it there, or create an additional set of controls to mimic the lamp.
It is crazy when you have more lamps, like garden, road, entry, basement, 1st floor, 2nd floor, each multiplied by the number of lamps.

I know it is all achievable, because I am doing it. But it is crazy how much work it takes and how many issues appear later.

Instead, a toggle with 3 states, something like a virtual device (a shadow copy of the real one with additional context), and additional information that you can pass through the Lovelace component would make everything crazy simple (a sketch of the shadow-device idea follows below).
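
A standalone sketch of the shadow-device idea (hypothetical names; this is the proposal, not an existing Home Assistant feature): the virtual copy stores the user's settings and the tri-state mode, and only in "auto" does the radar drive the real lamp with them:

```python
from dataclasses import dataclass

@dataclass
class ShadowLamp:
    """Virtual copy of a real lamp: settings are stored here, not applied immediately."""
    real_entity: str
    mode: str = "auto"       # the tri-state: "auto" | "on" | "off"
    brightness: int = 255
    color_temp: int = 4000

    def on_radar_presence(self, present: bool) -> None:
        """Only 'auto' lets the radar drive the real lamp with the stored settings."""
        if self.mode == "on":
            self.apply(True)
        elif self.mode == "off":
            self.apply(False)
        else:
            self.apply(present)

    def apply(self, on: bool) -> None:
        if on:
            print(f"{self.real_entity} -> on, brightness={self.brightness}, "
                  f"color_temp={self.color_temp}")
        else:
            print(f"{self.real_entity} -> off")

kitchen = ShadowLamp("light.kitchen")
kitchen.brightness = 80           # user tweaks the shadow card; the lamp does not react
kitchen.on_radar_presence(True)   # radar fires later and uses the stored settings
kitchen.mode = "off"              # tri-state override: radar is ignored from now on
kitchen.on_radar_presence(True)
```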

I don't want to be rude, but what I am understanding here is "no, because no".
The same as with polygon-shaped areas.

No. It's "no" because you are not listening. For the third and last time, there is already a tri-state (or more) control:

You can have whatever options you want, e.g. On - Off - Auto.

For the last time: you don't even try to read and understand what I am writing about.
Let's stop our conversation and see whether other users have similar needs.

Home Assistant mostly uses Material Design for its controls. Do you see a control here that would fit your needs?

If you do, it would be a lot more likely to get actioned.

There have been "pet-project" controls (e.g. the thermostat card with a circular slider), but they are few and far between and would require a frontend developer to want to implement one and continue to support it.

I could write it; I have written multiple things before:

  • polygon-shaped areas
  • a WYSIWYG editor for creating/editing MQTT devices on the frontend
  • a grid layout for Lovelace
  • a shopping list with Overpass turbo integration
  • etc.

You know what the problem was for years?
The lack of will to integrate it, and because of that, later on, even with the polygon zones, I ran out of optimism and willpower to write and finish anything.

About the tri-state control, this is how it could look; it is the first example I found on Google:

But user-action context is much more important IMO; it will allow users to do crazy things.

edit: the FE (frontend) is a piece of cake; the problem is BE (backend) support: