I have set up Assist with my Claude LLM (https://www.anthropic.com/) successfully and it has been working fine from the beginning. I used the Anthropic integration. Amazing, and I am really surprised how easy and smooth it went.
Now I am sitting here having long discussions with it (or her? him?)
Me: I simply want to switch on a light in the living room.
It: 1. Switches on the light. 2. Starts a discussion about the context: there is misleading information because an “Area” and a “Switch” have similar names, and from the context the probability of me referring to the switch (instead of the area) is higher. There is also a device in the living room that is unavailable; it tried to reach that device, but it is not responding, so it apologizes for not being precise and for not reaching the unavailable device. And if I meant the area instead of the lamp, it apologizes and kindly asks me to be more precise and to send a different request. And at the end it asks me if everything is OK and if there is something else it can do for me.
… 3 minutes …
Oh nooooooooooooooooo! Sh … up pls
I told it to be more “straight, precise and short”. The answer shows it understood: it offers to simply say “Done” instead of Grimm’s fairy tales. Yeah, that’s what I want. But next time this wish seems “forgotten”.
I would think that this “context” or “setting” is not found in HA, but rather in the LLM or in the integration. Or maybe it is a matter of how to handle it?
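My current guess for where that could live: the integration’s Configure dialog in HA (Settings → Devices & Services → Anthropic → Configure) should have a prompt / “Instructions” field (whatever it is called in the current version) that gets sent with every conversation, so standing rules placed there would survive across sessions. Something along these lines, just my own sketch and not an official template:

“Be brief. After performing an action, answer only with ‘Done’. Do not explain unavailable devices or naming ambiguities unless I explicitly ask.”

That would at least keep the “straight, precise and short” rule in one place instead of me repeating it in every chat.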
Any ideas? Your 50 cents on it?
Because this way… if I want to get exactly THIS, I would rather invite my “daughter” instead of a bot chewing my ears off.
My very first and last experience with Claude was exactly like yours. I asked a question and it started to talk to itself. After a short while I was informed that I was out of credits, without it even finishing this very first task. Funny but pathetic.
You have to be way more specific in your requests. If you don’t tell these things EXACTLY what you want (on EVERY REQUEST), they will just keep making shit up.
Thanks for your reply. Who are “they” and what are “these things”? Please be “way more specific”.
I assume you mean the AI?
So why do I have to change my normal (human) behaviour? Why don’t “they” adapt to humans instead?
I had these discussions at university in the early 90s, when the first websites had so many misleading attributes. And other students told me that the “web” would always be an expert thing. A grandma would never have to use it, and hence we, the IT professionals, would have to learn how to use it.
Really? How about now, in 2025?
I say if there is to be “user acceptance”, they have to learn. But this thing resets every session. So it should learn and remember, OR it should start with a setting matching my needs (which I will NOT implement in JSON!!!)
If this does not change fast, people will start doing with it what new features or software are commonly met with in German: “GGG”
What happens in real life when you ask someone who is a know-it-all a simple question…
You get a long answer full of trivia you couldn’t care less about.
The AI “thinks” it knows everything and indeed has a giant list of facts to choose from, drawn from all sources, both questionable and valid.
Ask VERY specific questions to get specific answers, and don’t forget to tell it to limit its response length, not to reply with obscenity or porn, and to skip the emojis. Otherwise it has open season to respond in any manner it decides you might be looking for.
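For example, something like this (just a sketch, use your own entity names): “Turn on the living room ceiling light. Answer with ‘Done’ only, no explanation, no emojis.” The more of the answer you pin down up front, the less room it has to ramble.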
And don’t ask Grok to do naughty things, because it just might…