Yes, but it’s the one that’s freely available to everybody. Not everyone can spin up their own. Possible, probable and feasible aren’t all the same thing.
This is a cop-out argument. A model, to a great extent, is its data. This isn’t nearly as trivial as you are suggesting. Do you understand the vast amount of data ChatGPT is trained on? The feedback loop to improve it through monitoring means you need to take certain risks initially. Depending on the task or purpose of a model, this might not be a good idea.
Sure, but someone needs to invest time and effort into this, because ChatGPT, currently a state-of-the-art model, isn’t the answer. You can train very specific models for specific tasks, e.g. to identify problematic posts, but you can do this with more basic machine learning. You don’t need a full-blown AI for that.
Not at all. This is an international community. You are not the only one whose English skills are less than ideal. Some people use translation software. Some people just do the best they can with whatever level of English they know. As long as we can figure out what they mean, it’s all good.
In your case, you’ve made an extra effort to be sure that your posts are easy to understand. That’s great! I don’t need to know (or care) why or how you’ve done this. Frankly I wish some native English speakers would do the same.
I think GPT-5 may cause this rule to be disregarded, based on what I just watched. Access to GPT-5 will only be available to developers starting in December of this year. Also, the only free version available at the moment is ChatGPT-3.
> None of us want to do that. Use AI to your heart’s content. Don’t post its results as answers here.
>
> That’s it.
>
> Making more arguments for AI here isn’t going to change our opinions. The only thing that will change our opinions is when AI answers questions correctly more often than incorrectly without us intervening.
>
> We are moderators, we aren’t here to train an AI.
Word. (Or, a “Drop the mic” moment.)
Actually, I for one wouldn’t mind having a ChatGPT plug-in for my HA server. I saw an interesting (and often humorous) conversation over on Reddit that discussed some things that can be done with it, and I’ve thought of some things myself. But I’m not comfortable trying to create something like that, not just yet, anyway.
Also, I’m not inclined to have a voice interface for my home automation. I’d rather have things happen in response to my entering the space, or where I initiate something and the automation completes it and monitors it, etc.
I’m just not into yelling at the computer to turn the lights on or lock the door.
But eventually, I’ll figure out a way to communicate with an AI via my home automation server.
Home Assistant released a new module, Assist (Assist - Talking to Home Assistant - Home Assistant), as part of what is supposedly the Year of the Voice.
It was a really good idea to create an inference-based skill for Home Assistant, but for me the choice of implementation doesn’t just skip LLMs (Large Language Models) such as ChatGPT and open-source equivalents; it even skips the smaller, older tech of NLP language models, in favour of a fairly ancient method of string-based fuzzy search, and the difference in effect between the two is huge.
You have to be very exact in your sentences, and if a phrase isn’t defined it will deduce nothing.
You could very much train an LLM to produce YAML, or even use an NLP model, and I’m confused why you would develop something new on what I consider obsolete tech compared with NLP, never mind GPT-like language models.
So sadly Assist turned out a bit of a flop for me; I’m guessing the mantra was to get it all working on a Raspberry Pi.
Problem is, after you manage to get one, a Raspberry Pi 4 is very lacking when it comes to ML: the models have been squeezed down and will run on a Pi for research, but far too slowly for any real use.
I think that is a mistake, as it excludes newer SBCs such as RK3588-type boards and micro PCs, up to what is probably the ultimate in home voice assistants, the Mac Mini.
Due to the diversification of voice commands, a single powerful unit should be able to process multiple rooms and race-to-idle with the latest and greatest in voice and LLMs. The voice and skill routing will likely be external, since inference-based skills (just send them the text) should be interoperable with everything, even if the gap in user experience is roughly what a Mac M1 user might think of a Pi.
There has also been a lot of open-source work applying plugins to LLMs, so maybe GPT-created YAML could be sent directly.
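The plumbing for that idea could be quite thin: send the user’s text to an LLM with a prompt asking for an automation in YAML, then sanity-check the reply before handing it on. A hedged sketch, where `call_llm`, the prompt, and the key check are all made up for illustration (not any real plugin API), with the model call stubbed out:

```python
# Hypothetical pipeline: user text -> LLM -> YAML automation -> basic sanity check.

PROMPT = (
    "Write a Home Assistant automation in YAML for this request. "
    "Reply with YAML only, no prose.\n\nRequest: {text}"
)

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would query a local or hosted model here.
    return (
        "alias: Lights on at sunset\n"
        "trigger:\n"
        "  - platform: sun\n"
        "    event: sunset\n"
        "action:\n"
        "  - service: light.turn_on\n"
        "    target:\n"
        "      entity_id: light.living_room\n"
    )

def yaml_from_request(text: str) -> str:
    reply = call_llm(PROMPT.format(text=text))
    # Crude sanity check: an automation needs at least a trigger and an action.
    for key in ("trigger:", "action:"):
        if key not in reply:
            raise ValueError(f"LLM reply is missing '{key}'")
    return reply

print(yaml_from_request("turn the living room lights on at sunset"))
```

In practice you would parse and validate the YAML properly rather than grepping for keys, but the point is that the LLM’s output could be checked and then sent direct.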
It’s really interesting, as even the forum could have an LLM that spiders the forum and provides online help. LLMs are super exciting because of how good they can be.
Fair enough, I think that puts you both OT though (as well as half the rest of the thread).
Don’t get me wrong, LLMs have huge potential for how we control our homes, but this thread is about something specific (a forum rule). We could do with a more general discussion about LLM potential in another thread.
If you want to kickstart a discussion, I will probably add a few possible repos I’m aware of, though I haven’t actually done any dev on them.
Supposedly you can custom-train LLaMA/Alpaca for a few hundred dollars in server rent, rather than the millions ChatGPT cost.
I sort of went quiet on Assist, but maybe different versions can be made for different platform destinations (Assist vs Assist-lite?)
I get that copying and pasting from current general LLMs is pretty pointless, but I still think it would be so cool, and likely eventually a no-brainer, for AI to be in the forum and the product rather than just pasted in, that’s all.
Just for talking and playing with a GPT-like model, whisper.cpp/examples/talk-llama at master · ggerganov/whisper.cpp · GitHub is a great example; as I said in my reply, that part is likely to be separate, or at least should be.
This topic somehow managed to keep me entertained for hours at work.
Guys, this is about posting AI responses on forums as your own. It’s not an open discussion around AIs, add-ons, entities, smart assistants, Bard, or even the Year of the Voice thing.
Y’all are mixing everything and it really does feel like most haven’t read the main post of the thread.
It’s not really about activity, but subscriptions. I get a notification every time there’s a post here. So, if it’s not related to the topic, it’s spam (spam => not of interest in relation to the topic).