I would like to dip my toes into the world of AI, but I haven’t seen any discussion about how safe it is to incorporate into Home Assistant. Heck, the latest news reports that some air fryers are now gathering your data and maybe even recording your conversations.
So what are your thoughts on safety? Any configurations safer than others?
AI is only as ‘smart’ as the programmer feeding it 1s and 0s. AI is nothing more than a whole bunch of IF statements that may or may not lead to the correct conclusion.
If you trust a toaster to tell your fortune, then yeah AI is all good. If you have more than a brain cell or 2 you know AI is bad.
perfect, we tested the ancient chinese secret but don’t feel like we’ve added a lot of information to the post … and just saying “no” without any backing also doesn’t give me much to go on. Would love to hear some opinions on what makes you think it is or isn’t safe.
Also, isn’t there a Nabu Casa version of AI? Isn’t this a little safer? I would think that from a company that prides itself on keeping things as local as possible, this model would be somewhat safe. Am I wrong or missing something?
honestly this is like asking “is the internet safe”.
it would be easier for you to say why you believe it possibly is not safe and we can address those points directly. @forsquirel basically gave as good an answer as any. beyond that you really need to say what’s on your mind.
sending data to a third party is never safer, so Nabu is not “safer” than a local model
Saying Nabu Casa is safe(r) is just goading the Black Hats to target it more.
I have an AI running on my local GPU. It still has models with hundreds of thousands of lines of code, probably written by an AI, so that’s not safe either.
I currently use the AI to make jokes, funny replies, and the like. I have not given it access to my entities. I have a new Voice PE and a Satellite1 Voice Assistant on the way so this will change very soon.
However, The Google has access to most of my entities, so there you go.
Nothing is safe, you can’t live in a bubble.
Take chances.
All the talk around “AI” is mostly hype, fear, and misinformation. We are nowhere near AGI, which would be your Skynet scenario (hello, this is John Connor). What we have now are just very convincing (although not always correct) copywriters, “artwork” generators, and the like.
Having said that, there are some very compelling use cases around home automation. And I think Nabu Casa are absolutely correct when they say they are in the best position to leverage this technology, as they follow a local first, user in control, interoperability kind of philosophy (as opposed to the other players, who are pretty much all doing the aforementioned “surveillance capitalism”).
I’ve recently started playing with AI, specifically running an Ollama model on my GPU. If you think AI is just a bunch of if/then statements then you have no business making comments on this topic. AI is still an emerging technology.
Is it safe? Very few people are qualified to answer that, as the neural nets behind these models are still largely a black box, even for the people training them.
I personally would not give it access to the smart plug that my respirator is plugged into. Not because I think it is nefarious, but because I don’t trust it to be 100% perfect.
Do not let this discourage you from playing with it though. If you don’t want your data in the hands of big brother, then run a model locally and block its access to the internet. If you’re worried about AI doing something you didn’t ask it to, then don’t give it access to anything critical.
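As one way to do the “block its internet access” part (a sketch only, assuming you run Ollama in Docker Compose; the service names, image tag, and volume path are placeholders for illustration), you can put the model on an internal-only Docker network. Containers on that network can talk to each other, but none of them can reach the internet:

```yaml
# Sketch only — names and paths are placeholders, adapt to your setup.
services:
  ollama:
    image: ollama/ollama
    volumes:
      # Pull your models once while on a normal network, then the
      # container can run fully offline from this local model store.
      - ./models:/root/.ollama
    networks:
      - llm_net

networks:
  llm_net:
    internal: true   # no route to the outside world
```

Note that whatever calls the model (Home Assistant, for example) must also join `llm_net` to reach it; a container attached only to an internal network has no internet access at all.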
Exactly. We do not have general AI; all we have are predictive text models (LLMs), and while they may have issues with data surveillance, that is in no way exclusive to them. It is a far bigger issue.
Please stop calling it AI, it is nothing of the sort.
What do you gain by connecting an air fryer to the internet?
For it to listen would require a microphone, which seems an unnecessary expense … unless you are giving it voice commands: “air fryer, cook at 200 degrees for 10 minutes”. Even then, don’t you still have to put the food in and take it out? I just don’t see any real benefit.
I don’t consciously use AI myself, but I consider it like most things … AI is a tool with pros and cons; can be used for good or bad; excels at some things but also has limitations. I would suggest that you do some more research, and have a play with it.
There certainly seems to be some very active interest, particularly with regard to HA Voice Assistant … and I believe a fair bit of confusion about the multiple pathways through HA Voice Assist.
there are options which use Nabu Casa cloud to anonymise calls to external AI
there are other options which will run locally on a Raspberry Pi (but these are limited and considered slow)
and there are options which run on a local PC (preferably with a powerful graphics card) which can give results not much below the external AIs - but at higher cost than I can afford
I get the clear impression that results are heavily influenced by how many dollars you spend on the hardware to run it.
AI is not the problem.
The problem is that, for voice AI to work, it needs to be online and listening all the time.
The recordings need to be analysed for commands, and this is the caveat.
It is not the AI itself that is really the problem, but the fact that the data can be used for so much more, and if it is a cloud service you have no way of controlling that.
Local AI is safe, but AI is extremely resource heavy, so many people cannot run a local AI that performs acceptably.
Saying AI is safe (or not) is far too broad a statement. If we are talking about LLMs and their creators:
An LLM is a model of natural language that can produce language based on what it learned from the vast amounts of text it was fed. It does not think, it does not know true from false, it does not reason. It predicts what words are most likely to follow given the conversation. In a sense it is just a very sophisticated parrot. Hence it can also produce falsehoods. So one part of the question is: how far do you trust a language model to control your home, if it can make mistakes and does not care that it does? The same can be said for voice assistants, by the way, though they are programmed in a more structured, limited way.
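The “predict the most likely next word” idea can be sketched with a toy bigram model. This is purely illustrative: the vocabulary, counts, and function names below are made up, and a real LLM is a neural network over tokens, not a lookup table — but the prediction loop is the same in spirit:

```python
# Toy "language model": for each word, the words observed to follow it,
# with how often they did. (Made-up data for illustration only.)
bigrams = {
    "turn": {"on": 5, "off": 3},
    "on": {"the": 6, "my": 2},
    "the": {"lights": 4, "heating": 2, "oven": 1},
}

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def complete(prompt, max_words=3):
    """Repeatedly append the most likely next word — no understanding involved."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(complete("turn"))  # → "turn on the lights"
```

The model has no idea what lights are; it only knows which words tend to follow which, which is why fluent output and correct output are two different things.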
The makers of LLMs and voice assistants do not give you all this for free, and they also need real human input to improve the model. Hence they will use whatever you put in however they see fit, even more so if you do not pay. So the other part of the question is: do you trust the companies behind it with the data you put in?