I know Home Assistant doesn’t call itself “smart” or “AI based”, but it has a lot of sensors, history, and automations through which we tell it what we want from it.
HA knows when I come home, knows I will turn on the lights and the music player, and knows which lights I prefer in the evening. Yes, I have automated all of this, but it would be great if HA could predict my wishes by analyzing my behavior and suggest this or that to me based on my habits.
We have the first step - https://data.home-assistant.io/ - but we need to move on!
Perhaps this is not easy to implement, but WTH?
This is a very wanted feature from the creator, @balloob. He’ll make it happen, eventually. But AI is hard. It’s not something you can just flip on like a switch. It has to be set up correctly and implemented correctly, and there are a lot of components to work with.
I’m going to go out on a limb and say that, while I wouldn’t call it easy, given the amount of work that has gone into open source machine learning (the project I’m most familiar with is Leela Zero), building Home Assistant itself is the harder part - and HA’s foundation must be finished and stable first. Then, to make Home Assistant smart, I think “all” you’d need is to feed your history into a machine learning engine that generates, as its output, a “network” that HA can use for automatic automations. That network could be re-generated automatically every so often as new history becomes available.
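As a very rough sketch of what that history-to-model loop could look like - assuming history has been exported to a CSV, and using pandas plus scikit-learn; the file name, entity id, and features are all just placeholders, not anything HA provides:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def build_model(history_csv="history.csv", target="light.living_room"):
    # Exported state history: one row per state change.
    df = pd.read_csv(history_csv, parse_dates=["timestamp"])

    # Simple features: hour of day and day of week at each state change.
    df["hour"] = df["timestamp"].dt.hour
    df["weekday"] = df["timestamp"].dt.dayofweek

    # Label: did this state change switch the target light on?
    target_rows = df[df["entity_id"] == target]
    X = target_rows[["hour", "weekday"]]
    y = (target_rows["state"] == "on").astype(int)

    model = DecisionTreeClassifier(max_depth=4)
    model.fit(X, y)
    return model

# "Re-generated every so often": rerun build_model() on a schedule
# (e.g. nightly) so the model tracks new history as it accumulates.
```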
Or I might be completely off base.
Sometimes it’s not straightforward to get the intended outcome though, e.g.:
- Infanticide: In a survival simulation, one AI species evolved to subsist on a diet of its own children.
- Space War: Algorithms exploited flaws in the rules of the galactic videogame Elite Dangerous to invent powerful new weapons.
- Body Hacking: A four-legged virtual robot was challenged to walk smoothly by balancing a ball on its back. Instead, it trapped the ball in a leg joint, then lurched along as before.
- Goldilocks Electronics: Software evolved circuits to interpret electrical signals, but the design only worked at the temperature of the lab where the study took place.
- Optical Illusion: Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball.
Proactive software that is wrong gets turned off. If anything, we will end up with suggested automations.
The link to our data portal is good. It actually contains a link to a Jupyter notebook that has some sample algorithms to find commonly-interacted-with entities per part of the day (morning, noon, etc.). I originally wrote those to be part of Home Assistant but couldn’t come up with the right experience. There are also a lot of other things to work on too.
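For anyone curious, that kind of per-daypart analysis can be sketched in a few lines of pandas. This is just an illustration of the idea, not the notebook’s actual code, and the CSV layout is assumed:

```python
import pandas as pd

def top_entities_per_daypart(history_csv="history.csv", top_n=5):
    df = pd.read_csv(history_csv, parse_dates=["timestamp"])

    # Bucket each state change into a coarse part of the day.
    bins = [0, 6, 12, 18, 24]
    labels = ["night", "morning", "afternoon", "evening"]
    df["daypart"] = pd.cut(df["timestamp"].dt.hour, bins=bins,
                           labels=labels, right=False)

    # Count interactions per (daypart, entity) and keep the most common ones.
    counts = df.groupby(["daypart", "entity_id"], observed=True).size()
    return counts.groupby(level="daypart", observed=True).nlargest(top_n)
```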
Would it be possible to do something like Google Assistant has started doing - posting a notification that suggests an action based on past behaviour? E.g. I get “Activate Goodnight House” because I “commonly request this at this time of night”.
We could do with it being more intelligent than Google Assistant though, because “Turn on the light” is often suggested to me, but when I press it, Google has no context as to the room I am in, so it just turns on every light it can find instead of the one that I commonly ask to be turned on at a similar time each night.
A big problem is that any existing automation will just be self-reinforcing, even if it’s not really doing what you want.
If you create an automation that turns on the lights at 8 every morning, the AI will learn that this is very important, because it happens every day - even if what you really want is for them to turn on 10 minutes after you get out of bed (and that happens to be around 8 in the morning, so it was the best you were able to create manually).
And you already have that automation, so the AI does not really help anyway…
So for the AI to learn something useful, you would need to turn off the existing automations and have the house learn from your manual actions for a long time before it actually gets any useful data to work with.
This reminds me a bit of this XKCD comic…
This is actually not impossible, but it isn’t without its complexities and assumptions along the way.
First off, to the comments above, there are MANY considerations around the depth at which you would want reinforcement learning to occur. i.e. is every weekday the same? Realistically no, so do you compare every Monday? Or if you’re a shift worker, maybe the 3rd Monday of each month. These parameters would vary for everyone, so the amount of time to learn would be quite long (to @balloob’s point, if the AI gets it wrong the degree of acceptable failure is low, and so people switch it off - especially if your lights come on at 6am when you wanted to sleep in!).
Then there is a discussion around triggers. Does it make sense to use time-based triggers? Sometimes yes, in the basic sense, i.e. close the blinds when it gets dark. However, time-based automations are not really what makes a home smart. There was a company I saw at CES who had begun exploring this (I will try and find the name) - but they looked at the events occurring in the house, not just the time they happened. i.e. my car arrives and the garage door opens, then the door unlocks. That is a routine that is time-independent, and if it occurs often enough, a good possible automation to add.
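As a purely hypothetical sketch of that “events, not times” approach: count how often one event follows another within a short window, and treat frequent pairs as candidate automations (the window, threshold, and event names here are arbitrary):

```python
from collections import Counter
from datetime import timedelta

def frequent_followups(events, window=timedelta(minutes=2), min_count=10):
    """events: list of (timestamp, event_name) tuples, sorted by timestamp."""
    pairs = Counter()
    for i, (t_a, a) in enumerate(events):
        for t_b, b in events[i + 1:]:
            if t_b - t_a > window:
                break
            if b != a:
                pairs[(a, b)] += 1
    # Pairs seen often enough become automation suggestions:
    # "when <a> happens, do <b>?"
    return [pair for pair, count in pairs.items() if count >= min_count]
```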
This presents its own challenge though, as some of us are further along than others, and the sensors we own/implement are more extensive than others. i.e. some folks don’t own presence sensors for their car, or even a garage come to that. Therefore the data points that are possible to gather will vary greatly from instance to instance.
The other complexity is the data gathering and the AI model itself. Assuming the HA team would offer 2 (maybe 3) options, like other examples:
- Nabu Casa’s AI service
- Instructions how to use your own cloud service (AWS/GCP/Azure)
- If you have the GPU / compute an offline docker version
I imagine there would be 3 steps to building this:
- The AI model being used will need to be tuned to understand the HA environment and thus (in the early days) require a lot of data (and volunteers) to identify and train patterns - essentially an unfiltered stream of the history for each device. This model training requires a fair amount of compute and isn’t hugely expensive if run centrally - BUT - that then requires vast amounts of data about your home sent to the central system/cloud. So, to preserve the security focus of HA, this would need to be anonymized and standardized (which comes with many complexities in itself - names of devices would need to follow a standard format, the rooms/areas would need to be configured, and lights/switches or input_booleans etc. would need to be used consistently).
- Once a basic model has been built, it can then be deployed into one of the above 3 options. At this point everyone’s system becomes different (and if the model is improved, it would need redeploying, and additive options become available). I imagine only the Nabu Casa AI service would be continually learning.
- Lastly, the implementation would need to be figured out. To some of the points above, would the AI offer ‘suggestions’, i.e. “HA noticed every Monday when you arrive home the garage door opens and the door unlocks, would you like HA to automate this?”
The result could be to spin up an automation entry (yaml?) to that effect. That way the AI is simply identifying regular routines that ‘could’ be automated.
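As a purely illustrative sketch of spinning up such an entry, the detected routine could be rendered as an automation dict and dumped to YAML for the user to review before enabling. The entity ids are invented; this uses PyYAML and mirrors the standard automation schema:

```python
import yaml

def suggest_automation(trigger_entity, action_entity, alias):
    # Build a standard automation entry from a detected routine.
    automation = {
        "alias": alias,
        "trigger": [{"platform": "state", "entity_id": trigger_entity, "to": "on"}],
        "action": [{"service": "lock.unlock", "target": {"entity_id": action_entity}}],
        "mode": "single",
    }
    # Render as YAML so the suggestion can be reviewed (and edited) by the user.
    return yaml.safe_dump(automation, sort_keys=False)

print(suggest_automation("cover.garage_door", "lock.front_door",
                         "Unlock front door when the garage opens"))
```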
Does the system ask you, or notice on its own, if that routine turns out to be unsuccessful over time?
Early on I can imagine basic routines as mentioned, but eventually you could get as granular as “when you switch on the TV at 7am you watch Cartoon Network” or “when the occupancy sensor in the living room has been occupied for longer than 5 minutes you turn the TV volume up”, and so on.
I don’t know if this belongs here or in its own WTH, but it’s related to the idea of Home Assistant using AI to identify patterns of behaviour. I’ve thought about the possibility of Home Assistant working out when you tend to turn certain lights on and off. Then, when you enable an “away mode”, it could turn those lights on and off at those times with some random variation to make it look like someone’s at home.
It would be like the HA equivalent of the away lighting functionality in Alexa’s Guard Mode.
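A rough sketch of how that replay could work, assuming the typical on/off times have already been learned from history - the schedule below is hard-coded purely for illustration:

```python
import random
from datetime import timedelta

# Typical times (minutes after midnight) per light, as if learned from history.
learned_schedule = {
    "light.living_room": [(18 * 60 + 30, 23 * 60)],   # ~18:30 to 23:00
    "light.bedroom": [(22 * 60, 23 * 60 + 15)],        # ~22:00 to 23:15
}

def away_mode_plan(jitter_minutes=20):
    """Build tonight's fake-occupancy plan with random +/- jitter on each event."""
    def jitter(minute):
        return minute + random.randint(-jitter_minutes, jitter_minutes)

    plan = []
    for entity, blocks in learned_schedule.items():
        for on_minute, off_minute in blocks:
            plan.append((entity, "on", timedelta(minutes=jitter(on_minute))))
            plan.append((entity, "off", timedelta(minutes=jitter(off_minute))))
    # Return the events in the order they should fire tonight.
    return sorted(plan, key=lambda item: item[2])
```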
That’s an interesting application of AI.
Essentially replicating common lighting behavior to simulate someone being home.
This also would have a low quality bar (given no one is home).
So for instance - here is the sort of thing the AI could learn:
The user has lighting and motion sensors. The AI could monitor how long it takes on average for motion to be detected again after a light has been turned on. From this it could suggest a reasonable duration that the light should be left on for when motion is detected, which would save the life of the bulb (i.e. not switching on and off constantly) but also save energy by not having the light on for hours for no reason.
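For example, the suggestion could come from something as simple as taking a high percentile of the observed gaps. This is just a sketch; the 90th-percentile choice is an assumption, not an HA feature:

```python
import statistics

def suggest_timeout(gaps_seconds):
    """gaps_seconds: observed delays between 'light turned on' and the next motion event."""
    if len(gaps_seconds) < 2:
        return None
    # Pick a duration that covers ~90% of the observed gaps: long enough that the
    # light rarely cuts out on someone, short enough to avoid hours of waste.
    deciles = statistics.quantiles(gaps_seconds, n=10)
    return deciles[-1]

print(suggest_timeout([30, 45, 60, 90, 120, 40, 55, 300, 75, 80]))
```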
I agree with @balloob that proactive, AI-controlled background actions should not be the goal. Hence, AI-based assistance would be appreciated, and it also follows the vision of not being forced to adapt to the technology.
I’ve been using Home Assistant for around one year. The pain I experience is that Home Assistant only does exactly what I tell it. Devices have to be manually configured, dashboards have to be designed, and automation routines have to be engineered (exception: auto-discovery of network devices). But shouldn’t Home Assistant assist the user in every aspect? The goal, imho, should be to reduce user interaction with Home Assistant as much as possible.
An easy start could be to reduce user interaction with the dashboard. A single dashboard has a limited set of actions, like buttons and controls. The user usually performs the same action upon the same event, like “clean the kitchen after breakfast”. As the number of sensors increases, the number of events also increases. With a simple correlation, an associated event can be identified, and then a suggested action can be pushed to the user proactively instead of requiring them to open the dashboard UI, search for the action, and perform it.
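A minimal sketch of that correlation, assuming you have the event and action history as timestamped lists - the names and the 10-minute window are invented for illustration:

```python
from collections import Counter
from datetime import timedelta

def best_cue_for_action(events, actions, window=timedelta(minutes=10)):
    """events/actions: sorted lists of (timestamp, name) tuples."""
    cues = Counter()
    for t_action, action in actions:
        # Most recent event inside the window before this action, if any.
        preceding = [name for t, name in events if t_action - window <= t < t_action]
        if preceding:
            cues[(preceding[-1], action)] += 1
    # Candidate "after event X, suggest action Y" pairs, most frequent first.
    return cues.most_common(3)
```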