Hey, everyone. Turned off all my Alexas when they announced they were sending all voice recordings to Amazon no matter what - and I don’t trust Amazon at all.
Just set up my first HAVPE and it’s wonderful. Have two more on the way.
My question is about LLMs. With HA Voice being about privacy first, and Alexa and Google assistants being significant privacy concerns, how much of a privacy issue is setting up an LLM in the pipeline? Does that put me right back in the open like with Alexa?
I'm currently testing with Anthropic Claude Sonnet 4.5. It works wonderfully, but if an LLM assistant in the pipeline leaves me just as exposed as Alexa did, I should probably reconsider, since privacy is my primary concern. I'm not too concerned with HA Cloud being involved, as I trust them far more than Amazon/Google, and I'm not quite ready to buy hardware to do everything in-house just yet.
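For context, my mental model of what each voice command does is roughly this (a minimal sketch with the anthropic Python SDK, not the actual HA integration code; the model ID, key, and prompts are stand-ins):

```python
# Rough sketch of what a cloud conversation agent does per request.
# Everything here -- the spoken command plus whatever device context
# the integration puts into the prompt -- leaves my network and
# lands on Anthropic's servers. Model ID and key are stand-ins.
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")

response = client.messages.create(
    model="claude-sonnet-4-5",  # stand-in model ID
    max_tokens=256,
    system="You control a smart home. Exposed devices: lights, locks, thermostat...",
    messages=[{"role": "user", "content": "Turn off the living room lights."}],
)
print(response.content[0].text)
```

So the real question is what happens to that payload once it lands.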
So, keep LLM in the pipeline? Pull it out and just deal with simple/pre-determined phrases for home integration and nothing more?
Morning. Great question. And it has a very VERY simple answer.
You have three choices.
Free cloud LLM.
If you do not pay, you are the product. END OF LINE.
(Sorry, saw Tron this weekend… It's not Superman, but it's not bad either. I dislike Leto, but he actually did a good job…)
EVERYTHING you pump through will be collected and used for training. Bet on it. Now think about what’s in there and look at your question.
I will NEVER use a free cloud LLM to drive my install for this reason.
Paid cloud LLM. Same as above, but with a contractual guarantee that your data will NOT be collected or used for training, as long as you never flip the bit in your API dashboard to share data with the provider. It's about as close to private as you can get without building your own. Some folks are comfortable here, some not. Either way it costs real money: my gpt4x-powered Friday builds typically burned $20 USD per month.
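To spell that out: the request a paid API gets is the same bytes a free chatbot would get. The privacy difference is contractual, not technical. Rough sketch (openai Python SDK; model ID and prompts are illustrative):

```python
# A paid-API request carries exactly the same data a free chat would.
# The training opt-out is a policy promise plus a toggle in the
# provider dashboard -- nothing in this code enforces it.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # paid, per-token billing

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model ID
    messages=[
        {"role": "system", "content": "You are Friday. You control my house."},
        {"role": "user", "content": "Is the garage door closed?"},
    ],
)
print(resp.choices[0].message.content)
```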
Or
Build local (or private cloud with the same parts, functionally identical).
Ultimate privacy… ultimate cost. Expect $500-1500 USD to start. Competent GPU/NPU and plenty of VRAM REQUIRED.
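Quick back-of-envelope for sizing that VRAM, using the usual rule of thumb (weights take about params x bits-per-weight / 8 bytes, plus roughly 20% overhead for context and cache; ballpark only):

```python
# Back-of-envelope VRAM estimate for running a local model. Ballpark only.
def vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits / 8
    return weight_bytes * overhead / 1e9

for b in (8, 14, 70):
    print(f"{b:>3}B @ 4-bit: ~{vram_gb(b):.1f} GB VRAM")
#   8B @ 4-bit: ~4.8 GB VRAM
#  14B @ 4-bit: ~8.4 GB VRAM
#  70B @ 4-bit: ~42.0 GB VRAM
```

Which is why most starter builds land in the 8-14B class.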
Ok, cool - I'm already a paying API customer for other purposes, so now I'm off to make sure I'm not allowing sharing. I do want to eventually host my own - I'm looking to build a 1-2U server for Unraid, which I currently run on an old NUC. Once I have something sufficiently powerful, I should be able to run Ollama on it and get decent results. I think.
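Noting this for future me: once Ollama is up, I gather a LAN-only smoke test looks something like this (host and model name are placeholders for whatever I end up running):

```python
# Quick smoke test against a local Ollama server (default port 11434).
# Nothing in this request ever leaves the LAN. Host/model are placeholders.
import json
import urllib.request

req = urllib.request.Request(
    "http://nuc.local:11434/api/generate",  # placeholder Ollama host
    data=json.dumps({
        "model": "llama3.1:8b",             # placeholder model
        "prompt": "Say 'pipeline OK' and nothing else.",
        "stream": False,                    # one JSON blob back
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If that answers, pointing Home Assistant's Ollama integration at the same URL should be the easy part.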