My understanding is that it uses a collection of documents from a couple of years ago – and we all know how fast HA changes!
I’m still hopeful this will become useful in the medium to long term. But I trust our HA team, and if they think this is best, then it’s best.
After all, we’re all tech nerds, and we’d all want this to become a thing.
But if we see people using it in a flawed way instead of learning, I support their decision.
I just want to ask for sensibility. It’s a brand-new tool, and we should revisit it in the future. I see lots of possibilities in all this. In its current shape, well, let it be.
– I wasn’t even aware we had such a big stance on this when it came to HA, but I’ve tried ChatGPT myself and I can confirm, not a SINGLE response it gave me was right. I even tried to slowly teach it to give the proper response, but it kept getting worse.
At the moment, no, it SUCKS with YAML.
If you want to help others, then post your own knowledge rather than copy-and-pasting AI output. But if you want help, an AI trained on Home Assistant data would likely be in the top 10 percent of those offering knowledge.
ChatGPT is just a general-purpose model that likely had no supervised learning or reinforcement learning based on Home Assistant, and it still gets a lot correct, even if at times it hallucinates.
Provide Home Assistant data for supervised learning and reinforcement learning, and the title of the OP would likely be very different: “Need help? Talk to our AI”…
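For anyone curious, the supervised step is not exotic. Here is a rough sketch of what fine-tuning a small causal language model on solved forum threads could look like, using the Hugging Face Trainer API; the base model choice and the ha_forum_solutions.jsonl file are made-up placeholders for illustration, not an actual project:

```python
# A minimal sketch of supervised fine-tuning a causal LM on
# Home Assistant forum Q&A pairs, via the Hugging Face Trainer API.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # stand-in; any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical JSONL file of {"text": "Q: ... A: ..."} records
# scraped from solved forum threads.
dataset = load_dataset("json", data_files="ha_forum_solutions.jsonl")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ha-assistant-lm",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False makes the collator build next-token-prediction labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The reinforcement step would sit on top of this, using something like the forum’s accepted-solution signal as the reward.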
As an open source community I’m not sure we would be ready for that.
But a bus company might like to have an AI to answer all of its users’ concerns and complaints.
It’s exactly what an open-source community could do, as it only takes supervised and reinforcement data, which is little more than peer-reviewed community forum solutions.
A bus company would have to employ people and pay for the supervised data, the reinforcement data, and the implementation.
The forum here lacks a good representation of peer review, but I presume Stack Exchange-like forums are rubbing their hands with glee at what that data can provide in terms of reinforcement learning.
This is where we are going, unfortunately, as a few will wield AI tools that absorb a knowledge domain’s source material and learn, grow, and get better from the peer review you feed them.
The biggest barrier was the huge cost of the massive GPU server farms needed for training, but with what Meta AI has just done with the open-source GitHub - facebookresearch/llama: Inference code for LLaMA models, and with Stanford CRFM, they are training models for a few hundred dollars.
So, ready or not, that doesn’t change the fact that they are likely to end up anywhere and everywhere there is a specific knowledge domain, large or small.
M$ must be cursing, as they just invested $10 billion in something that will likely be open source and trainable on the cheap; at least there is one bonus to what will likely be many, many jobs gone.
You can have a look at in-application AI help with M$ Copilot: Microsoft’s new Copilot will change Office documents forever - The Verge
With projects such as this, much of the supervised training is documentation, and the reinforcement learning is forum peer review.
Likely we will see more of a fusion of the three.
ChatGPT was trained on data that is several years old… so any updates to an application that are newer than that will not be reflected in the result.
even now a small amount of additional training is available. You can copy the text to him, and he will understand the text and can retell it. You just need to have a diff documentation, as maybe it will work with new version.
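To make that concrete, here is a minimal sketch of “copying the text to it”: pasting a chunk of up-to-date documentation into the prompt so the model can answer from material newer than its training cutoff. It assumes the standard OpenAI chat completions endpoint and an API key in the environment; the file name and question are invented for illustration:

```python
# Feed updated documentation into the prompt ("in-context") so the
# model can answer about versions newer than its training data.
import os
import requests

with open("ha_docs_excerpt.md") as f:  # hypothetical updated docs excerpt
    docs = f.read()

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "Answer using only the documentation provided."},
            {"role": "user",
             "content": f"Documentation:\n{docs}\n\n"
                        "Question: How do I configure this integration?"},
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```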
sorry to nitpick this but ChatGPT (or any other AI) is not a him/he. It’s an it.
Once people start mislabeling the AI with gendered pronouns as if it were quasi-human, that leads down a bad road in the future when it possibly becomes ubiquitous.
It risks the AI getting “rights” that humans now have. If you don’t think that’s possible then look at the way corporations are regarded in the US. And those don’t have any type of “intelligence” or “consciousness” that an AI might seem to have.
It’s likely inevitable anyway but the longer we can put that off the better.
Highly debatable. ChatGPT doesn’t understand anything. It knows how language works and mimics real people in how they use language. It can even extrapolate that behavior, using contextual information and language rules to generate new, eloquent text. What it cannot do is evaluate its own words for truth. It is in fact very naive, accepting pretty much anything it is told. It is very sophisticated text manipulation, which tricks people into thinking it understands.
nick2525 is not a native English speaker.
Many languages, like French, German, and Russian, have nouns divided into two or three genders.
And if you are talking about a noun in these languages, the pronouns follow the gender of what you are talking about.
So it is normal to use he or she as a pronoun in those languages, and you often hear non-English speakers using he or she about a thing. It is not correct English, no discussion about that. But it is not because they think that Oxygen, October, vinegar, or chat robots are male living things. They are masculine words (in German, in these examples), so you use “er” (he) in German. Nick seems to be Russian, and Russian, like German, has the three genders.
I think Nick’s English is really good in general, and I personally see these little mistakes as charming.
Oh, ok. thanks for the clarification.
No offense meant.
One time (off the beaten track), I met a couple with not-“perfect” English skills… after a while, and a lot of confusion, misunderstandings etc. (my head was about to explode!), I realized that His name was “Wat” and Her name was “You”.
W(h)at, You? (No, Him, You), ehhh, w(h)at? (Yes, Me!) You? (No, Him)… I was dying when I realized “my confusion”.
But who’s on first?
I’m the big question mark; I was totally confused for at least 30 minutes before I dared to straighten it out.
“Wat” sounded like “What”, he called his wife Him (he), and her name was You… imagine a conversation before I got that straightened out (it was actually Her who eventually figured out that everything was a total mess in my head).
Actually, you are not up to date with how things are evolving; it is even taking the OpenAI devs by surprise.
It may hurt the human ego that we are nothing more than chemical-based LLMs, but it could be true.
Irrespective of that, if an LLM is trained on a knowledge domain, the output it gives can match the top 10 percent of human input and is extremely useful.
A Home Assistant language model could be a great addition to Home Assistant and provide users with instant help. But, MSM clickbait aside, you need to read up on some of the more technical docs on ChatGPT and update your concept of what understanding is: if the output is mostly correct, then it is very human-like, as nothing is infallible or free of what could be considered self-opinion of one’s own environment.
We are entering a strange period of AI that is currently accelerating tech at unprecedented levels, purely by applying attention training and feedback to transformer LLM models.
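For reference, the attention operation these models are built around is the scaled dot-product attention from the original transformer paper (“Attention Is All You Need”):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V
```

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension; scaling, stacking, and training tricks aside, that single operation is doing the heavy lifting.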
I was going to give the same answer
We do gender animals; that doesn’t make them human-like. Even plants can have a gender.
I guess that you fear things like giving an AI CEO responsibilities.
The law today has moral and natural persons, and that’s not going to change soon. An AI may get some kind of moral-person status, but those are always under the responsibility of real physical people.
If an AI does something wrong, the person in charge of that AI will need to take responsibility.
It took a very long time for animals to be considered conscious, so we are not about to believe that an AI has some kind of consciousness.
The only issue there is that sometimes there really is a slippery slope to worry about.
you’ve heard of PETA, right?
Yes, but I also don’t refer to my plant as “he” or “she”. It’s still an it.
And again, both of those examples are obviously living things that have a gender by their very nature and by the way we have defined gender.
AI is not alive and therefore it does not have an inherent gender. There is no reason to assign it one.
There is no reasonable comparison to be made between AI and even a plant let alone an animal. And the fact that you did try to compare the two only lends weight to my point.
So who is the responsible party if an autonomous car kills someone? The “driver” (quotes because, by definition, there isn’t one), for failing to exercise control of the vehicle? The owner, for buying it in the first place and enabling it to be put on the road? Or Tesla, because they designed it to operate the way it did? All three? None, because it was “just an accident”? How do you place blame if it was because a transistor misfired and caused an unexpected event?
Really? I’d venture a guess that there are many people who might think that right now. People believe all sorts of crazy, unreasonable things.
Borrowing from Arthur C. Clarke’s “Third Law”:
“Any sufficiently advanced technology is indistinguishable from magic”.
can reasonably be modified slightly to:
“Any sufficiently advanced AI technology is indistinguishable from human”.
and TBH, that is entirely the point of AI - to be indistinguishable from human.
See also “Turing Test”.
You’ve misused the “slippery slope” fallacy here because there really is a potential logical pathway from one end of the slope to the other.
It’s something to be on the lookout for and precedent isn’t on my side. Gendering your AI is the first step in that direction.
That this could actually happen is not an unreasonable concern.
Good response.
Just want to add that it actually does exist to some extent in English (though this example isn’t unique to English): my brother-in-law was in the Navy, and ships are considered female. They’d refer to a ship as “she”.
Not just in the Navy; ships in the merchant marine are also “she”. Even small recreational boats are.