It’s frustrating how well Google hides this tremendously useful feature. The dropdown just next to it (‘All results’) is also super useful; switch it to ‘Verbatim’ if Google starts hallucinating results that have nothing to do with your query again.
Yeah, I don’t get it either. Why isn’t it on the main page?
Luckily, search engines other than Google do exist. Some even work purely as a privacy-protecting “proxy” and show un-bubbled Google results, Startpage for example.
Including the possibility to choose a time period directly within the results (no cascaded/hidden menus)
DuckDuckGo also doesn’t hide the time-period filter, but its results are often not up to what Google typically delivers.
YouTube gives you the time option when you search, but not for the recommendations it lists after you’ve selected one video.
Anyway, back on topic, I’ve finally had a chance to play with ChatGPT a bit, and frankly I see no reason to disagree with the mod’s decision.
Not that it isn’t good at some things, just none of the things I’ve tried. I’ve tried code problems in various different languages, random questions on topics I’ve wondered about, creating text for various purposes. I’ve even asked it to create a limerick. It did none of these things well. It responded with lots of incomplete and incorrect information, even grammatical errors.
I guess the only grey area for me is if you used it to help you create some code, and you reviewed that code to make sure it worked 100% and wasn’t using deprecated or inefficient constructs, then I personally wouldn’t care if you posted it, as long as you explained the provenance. But I don’t blame the mods for not wanting to walk that tightrope. A simple “don’t” is a reasonable approach.
You’re describing my #4. You used ChatGPT for research, tested and validated what it gave you and then wanted to post about the solution you found.
You can use anything you want for research. HA docs, a YouTube video, ChatGPT, a Twitter poll, asking a Ouija board, etc. If you tested and validated the solution before posting then you’re being a helpful member of the community. If you didn’t then you aren’t.
Although in ChatGPT’s case I would suggest writing the reply in your own words, even if you tested it. That’s kind of a gray area since you did test it before posting, but still, don’t make the mods’ lives harder trying to figure out who’s on the right side of the line. Just use your own words.
and the remake
Could you please update the blog post and the part in the ‘how to help us help you’ to better reflect this? It still says ANY use of AI to answer a question is not allowed. Using it for research is still use. Especially if that research consists of you tweaking questions to ChatGPT so that it finally delivers you a working piece of code, which could be posted word for word and still be a valid answer to the question. The way the rule is phrased now is unnecessarily broad, and I can understand it triggers people when phrased like this.
I think I should explain my core issue, and I apologize, because it is diverging from the main topic. Mods decided to ban ChatGPT advice/code/answers. I have no problem with that. My point, sarcastic though it may be, was that people who are posting ChatGPT answers are (I hope) providing those answers in good faith. I see constant and frequent complaints from newcomers about the “shrugging off”/“just Google it” response, which I believe is provided in bad faith.
My point was “hey, while you’re doing some street sweeping, why not knock out those gate-keepers over there on the corner”. I’m not new to Home Assistant, I have been around a while, but my background is not technology based, and so I have felt around in the dark for years with much help from this community and tangential ones. So I feel bad when I do see newcomers getting rebuffed or pushed away because they didn’t know to refine their Google search or to do this or that.
We live in a crazy world, in crazy times, with crazy people. There are a lot of bad actors and bad states. This place shouldn’t be one of them. I wasn’t going on about AI, and I apologize for confusing the issue.
Rational, and a breath of fresh air. I wish more platforms, orgs, corps, people et al would pause for a minute before diving head-first into the great unknown as they did with cryptocrap. The numbers of shills peddling non-solutions across seemingly every industry with the latest buzzword absolutely beggars belief.
If you knew enough about the code required to recognize a valid piece of HA code from the AI then you likely don’t need the AI in the first place to write your code.
The issue is that there are LOTS of users who have no idea if what the AI spits out at them is valid and working. They would likely first assume it was until it was proven not to work. Then they don’t even know enough to be able to tweak the question to the AI to refine the code until it works.
If you can do that then you don’t need the AI, and it seems like a waste of time pulling teeth to finally get a valid response out of it.
It is not so much wrong answers, we can all give them, and there are plenty of wrong answers on the forum.
It is the confident and apparently knowledgeable assertion of a wrong answer that is the problem.
Whether or not AI is necessary for anyone to write code is not so relevant to me. I don’t need home automation. I survived just fine for 45 years without it. And to automate my home I don’t need to write code. But I like to tinker with it, even though it costs me more time than if I just went for a commercial out of the box solution.
To me, it is about making rules for a community, writing down these rules in an obviously visible place, and then writing somewhere else, in a less visible place, that the rule is actually different. If you make rules, make sure that they are as clear as possible and easy to find for everyone.
I doubt there is anyone here who would disagree with that.
if that is your only and ultimate point then I completely agree with you.
If I misunderstood any of your prior posts then I apologize.
But for any others who seem to think that ChatGPT is some technical panacea, but only if you could eventually figure out how to finesse the question to get the right answer, then my points above still stand.
I doubt there is anyone here who would disagree with that.
I’d be surprised about that too. But nobody with editing rights has taken action yet.
I’ve never even tried to get an AI tool to make any code for me. But some in this thread have claimed they managed to get working code out of it. Others have claimed this is impossible. I am completely neutral on that matter.
I do think the clarifications that CentralCommand wrote down are how the rule should be, and that IF they are indeed a good reflection of the policy, someone should replace the simplified blanket statement in the original post and the ‘how to’ with the much clearer and more precise version.
Submit a PR.
Don’t believe everything posted in this community…
not wanting to stir everything back up, but that claim is simply false, and probably not backed up by any research at all. In fact, it should be banned as such, following the new guidelines.
Haven’t tested the new algorithms that were implemented the other day, but those could supposedly falsify the claim even further.
I have to say, ever since I discovered ChatGPT I use it all the time (for work — I work in IT) and for help with HA template syntax, for example.
Yes, the answer is sometimes wrong, but in my experience 90% of the time it’s spot on or very close to the correct answer. And even if the answer is not correct, it is often quite easy to figure out and fix the mistake.
The most difficult bit is to ask the correct question.
At the end of the day this is just another tool for people to use.
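To make the template-syntax point concrete, here is a sketch of the kind of small fix being described. The entity name is made up for illustration; the point is the pattern. ChatGPT frequently suggests direct state access like `{{ states.sensor.living_room_temp.state | float }}`, which works but throws an error when the entity is unavailable. The Home Assistant docs recommend the `states()` helper plus a default on the `float` filter instead:

```yaml
# Hypothetical entity name, for illustration only.
template:
  - sensor:
      - name: "Living Room Temp Rounded"
        unit_of_measurement: "°C"
        # states() returns 'unknown' instead of raising if the entity is missing,
        # and float(0) supplies a fallback so the template never errors out.
        state: "{{ states('sensor.living_room_temp') | float(0) | round(1) }}"
```

This is exactly the kind of “close but not quite right” answer that is easy to correct if you already know the docs, and invisible if you don’t.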
The blog post is here. As with basically everything HA, it’s on GitHub. If the wording bothers you, feel free to submit a suggestion (same with all the others on here).
I guess my two cents is I’m very confused why everyone seems to be interpreting this as something completely unenforceable. The rules of the community apply to what you do on this site only. There are no rules for what you do outside this community (other than the laws of wherever you live). Because a) that wouldn’t make any sense and b) there’s no way for the admins/mods of this community to enforce rules about things done outside the community even if they wanted to.
So the rule could only possibly be about what you can and cannot post here (in this case, a word-for-word copy and paste from ChatGPT). If you want to use ChatGPT (or whatever else) on your own time for research or whatever go for it. There are no rules about that here (nor could there be since that’s off-forum behavior).
I have submitted a PR.
One of the issues I’ve seen with ChatGPT is that it presents a response to you as if it’s gospel and 100% correct.
The underlying engine likely knows that the answer it’s giving you might be inaccurate, but it still gives you a response as if it knows it’s correct.