FOREWORD
My name is Daniel and I'm from northern Italy. I'm a programmer both by passion and by trade, and I've always been fascinated by home automation and by controlling my small house.
After using HomeAssistant for many years (in my opinion the best platform ever), I realized that, as interesting as it is, it always lacked something to make the home truly intelligent (not only smart).
I made some experiments using AppDaemon and shared my experiences on this forum, but in all honesty I was not satisfied: they never gave me a real feeling of autonomy, only autonomy disguised behind some prediction.
With this post, perhaps a bit long but hopefully interesting, I want to share with you what I learned from my reading and analysis, from my failed experiments, and from watching the experiments of others too.
As I am Italian, I apologize for any mistakes in my English.
If you find this interesting and would like to contribute, thank you, you are welcome!
It could make for an interesting discussion!
WHAT IS ARTIFICIAL INTELLIGENCE, REALLY?
No, it's not a stupid question; in fact, it's a very complicated one.
Many might answer « it's that thing that does other things by itself », but it is not so.
From what I have personally been able to read and understand, the concept is much more evolved and extended: currently there is no system with real artificial intelligence.
Smart speakers, autonomous robots, smart systems, and thousands of other fantastic technologies called "smart" are actually only programmed to do certain things in certain situations; if those situations change, they fall into the so-called fallback, or simply do not know what to do.
In simple words: they don't know why they perform certain actions. They perform them because they are programmed to do so under certain conditions, without asking themselves any questions or considering the possible effects.
In fact, a truly intelligent system should be able not only to evaluate the current conditions, but also to reason about possible actions that could lead to a desired outcome, following a cause->effect logic, as well as to think about whether there are alternative solutions and to choose the best one for the current situation, which may be constantly changing.
Probably only the human brain is capable of such complex reasoning without something, or someone, telling it what to do in which situation.
OK, BUT THERE ARE SYSTEMS THAT CAN PREDICT ACTIONS
Exactly.
Currently, all smart technologies are based on mathematical calculations and algorithms that involve initial training on input data or situations, and on rules for how that information should be processed to perform an output action. Often this calculation is purely mathematical, based on statistics.
Example with HomeAssistant entities:

Temperature:   18°C  19°C  20°C  21°C  22°C  20°C  19°C  18°C
Heating:       yes   yes   no    no    no    no    yes   yes
Windows open:  no    no    no    no    yes   yes   no    no
With these data and a statistical calculation, or one based on a Bayesian equation, it is simple to deduce that when the temperature is below 20°C we must turn on the heating, and when it is above 21°C we must open the windows, and so on. But we never ask why we turn on the heating below 20°C, or why we open the windows above 21°C; we are satisfied with a statistical prediction.
If the user opened the windows when it was 18°C outside because he wanted to air the house, the system would fall back or ignore the situation: this is called an exception.
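To make the limitation concrete, here is a minimal sketch (hypothetical code, not anything from HomeAssistant itself) of the purely statistical approach: the system predicts by majority vote over past observations of similar temperatures. It works within the training data, but faced with a situation it has never seen, it can only fall back.

```python
# Observations from the table above: (temperature, heating_on, windows_open)
observations = [
    (18, True,  False),
    (19, True,  False),
    (20, False, False),
    (21, False, False),
    (22, False, True),
    (20, False, True),
    (19, True,  False),
    (18, True,  False),
]

def predict_heating(temp):
    """Predict the heating state by majority vote among past samples
    near this temperature -- pure statistics, no reasoning about why."""
    votes = [heating for t, heating, _ in observations if abs(t - temp) <= 1]
    if not votes:
        return None  # fallback: a situation never seen before
    return sum(votes) > len(votes) / 2

print(predict_heating(18))  # True  -> "turn the heating on"
print(predict_heating(21))  # False -> "leave it off"
print(predict_heating(30))  # None  -> fallback: no similar sample exists
```

Nothing in this code knows *why* the heating goes on below 20°C; it only repeats what it has seen, which is exactly the point made above.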
BUT THERE ARE MACHINE LEARNING SYSTEMS!
Yes!
But what they do is guess a possible output action based on initial training input data, providing a solution based on the distribution of the training data.
So they can tell an APPLE from a CUP (if we talk about visual ML, for example) simply because the APPLE has a different shape and color from a CUP. But if you try to make them recognize a BALL, the answer could come out half APPLE and half CUP: you have to be the one to tell them that it is a BALL!
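A tiny sketch of this limitation, using a nearest-centroid classifier over invented feature values (the features and numbers here are purely illustrative): a model trained only on APPLE and CUP is forced to squeeze every new object into one of those two classes.

```python
# Invented feature vectors for illustration: (roundness, has_handle)
training = {
    "apple": [(0.90, 0.0), (0.85, 0.0)],
    "cup":   [(0.30, 1.0), (0.40, 1.0)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(features):
    """Nearest-centroid classification: every input MUST become
    an 'apple' or a 'cup', because those are the only known classes."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist(features, centroids[label]))

ball = (0.95, 0.0)     # round, no handle -- an object the model never saw
print(classify(ball))  # "apple": the model has no way to answer "ball"
```

The model cannot say "I don't know what this is"; it can only report the closest thing it was trained on.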
OK. SO WHAT DO YOU SUGGEST?
What I'm trying to think about, to experiment with, and to share with you is a complete reversal of this approach: moving away as far as possible from a system of prediction and toward a system of reasoning. My goal is no longer to turn on the heating when it is cold, but to make the system reason about why and, if the heating is controlled by the computer, about the possible consequences that such an action would bring (example: it makes no sense to turn on the heating with the window open).
In my (crazy) mind, still thinking of HomeAssistant, the idea is an initial phase of observing user actions and sensor values (training), then relating those elements to understand the cause->effect logic, so that it can later be applied to reach a desired situation.
To achieve this, I need to create a data model of actions (user) and reactions (sensors) and relate these two models, so that the system understands what happens when certain situations occur, and can apply that knowledge when it needs to arrive at certain conditions.
At the moment I have only a vague idea of how to do it, but I think the most important thing for now is to talk with you, evaluate whether my thesis is correct, and maybe arrive TOGETHER at a possible solution.
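As a starting point for discussion, here is one very rough sketch (all names and structure invented by me, nothing definitive) of what such an action/reaction model could look like: record which user action is followed by which sensor change, then query those cause->effect pairs in reverse to find candidate actions for a desired goal.

```python
from collections import defaultdict

class CauseEffectModel:
    """Records observed action -> reaction pairs and queries them in
    reverse: 'which actions have been seen to cause this effect?'"""

    def __init__(self):
        # effect -> {action: number of times it preceded that effect}
        self.effects = defaultdict(lambda: defaultdict(int))

    def observe(self, action, effect):
        """Training phase: a user action was followed by a sensor change."""
        self.effects[effect][action] += 1

    def actions_for(self, desired_effect):
        """Reasoning phase: candidate actions for a goal, ranked by how
        often each one was observed to lead to that effect."""
        candidates = self.effects.get(desired_effect, {})
        return sorted(candidates, key=candidates.get, reverse=True)

model = CauseEffectModel()
model.observe("turn_on_heating", "temperature_rises")
model.observe("close_windows",   "temperature_rises")
model.observe("turn_on_heating", "temperature_rises")
model.observe("open_windows",    "temperature_drops")

# Goal: the user wants the temperature to rise.
print(model.actions_for("temperature_rises"))
# ['turn_on_heating', 'close_windows'] -- candidate causes, best first
```

This is only a counting model, of course; the hard part, which I would like to discuss, is how to evaluate the *consequences* of each candidate action before choosing one.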
I want to reach this goal with your help.
Maybe we can do it together.
Thank you!