Using AI to build a system

This is not a post about integrating LLMs into Home Assistant. It’s about using AI to design and build a HA instance.

Before everyone jumps on me, let me say that this is NOT A GOOD IDEA. If you’re not an AI professional with years of experience with HA, you’ll end up with something that doesn’t work and that you don’t understand.

Nevertheless, it seems clear from posts in the Forum and from the direction the world is drifting in that people are going to use AI regardless. It may be time to stop sniping and start encouraging good practice and greater understanding. I’m making this post a wiki so that it can be edited, and I invite anyone with an interest in the subject to contribute (constructively?) to the thread.

Do not use AI to create or respond to posts. This is a breach of forum rules and your post will be deleted.

Just thought I’d mention that. :grin:


Anyway. To start things off.

TL;DR If you’re looking for an AI to help with building HA, what “style” should you be looking for? Does it make any difference?

I use an LLM to analyze and document my system, which after five or six years of tinkering is quite chaotic. It is a custom GPT - a local add-on which can read (but not write to) my config and take into account areas, labels, entity notes, descriptions in automations/scripts and comments in templates. It can also access a system journal where I make notes about changes, and it has an extensive prompt describing preferred practices. The point is that it is quite difficult to document your own system. The notes you can add are scattered all over the place and difficult to track. A custom AI can bring them together and summarise them.

I’ve been using the gpt-5 API, but this morning I got an email from OpenAI inviting me to try 5.1. This apparently has the new “reasoning_effort = ‘none’ mode” (whatever that means). As usual there are several flavours:

  • gpt-5.1: for everyday coding tasks
  • gpt-5.1-codex: for complex, long-running agentic coding
  • gpt-5.1-codex-mini: for cost-efficient edits and changes

The difference between 5.1 and 5.1-codex-mini seems to be that 5.1 is much more “talky”. It will attempt to explain concepts and offer pros and cons, while codex-mini just gets on with it - it will jump straight to yaml with fewer explanations. They both produce the same results. With the same errors.

My question (if you’re still reading) is: which is better? Most people who are tempted to use AI to build Home Assistant will probably not be using the API, but different styles will still be apparent in ChatGPT, Grok and the rest, particularly if they’re using the prompt effectively. So - given that people are going to use AI anyway whether we like it or not - should we be nudging them towards services that offer explanations, or ones that just produce the code?

On the face of it, you’d think that the “talky” style would be better for someone who was learning, but it can be misleading. Ask for an automation to turn the light off when no motion has been detected for 10 minutes and an AI is likely to refer to a “timer” in the trigger - which feeds into all sorts of confusion about triggers and conditions. So (setting aside for the moment the question of whether it’s correct or not) maybe just a chunk of yaml would be better?
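For illustration, here is roughly what a correct version of that automation looks like in HA YAML - a minimal sketch only, assuming the post-2024.10 `triggers:`/`actions:` syntax and hypothetical entity IDs. The point is that there is no timer entity anywhere: the `for:` on the state trigger does the waiting.

```yaml
# Sketch - hypothetical entity IDs. No timer involved:
# the trigger's "for:" fires only once the motion sensor
# has been "off" continuously for 10 minutes.
- alias: "Hallway light off after 10 minutes without motion"
  triggers:
    - trigger: state
      entity_id: binary_sensor.hallway_motion
      to: "off"
      for:
        minutes: 10
  actions:
    - action: light.turn_off
      target:
        entity_id: light.hallway
```

If the AI’s explanation talks about “starting a timer”, the beginner has nowhere to attach that idea in the YAML above, which is exactly the confusion described.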

Again, people are going to use AI whether “we” on the Forum like it or not. There must be a better way of dealing with it than shouting at them. No doubt AI will improve, but for the time being it’s not nearly as good as people think it is. It’s time to start thinking about making the users better.

5 Likes

Since it is against site rules to use AI to help others with their problems, I’m not sure there is anything I can offer, other than use common sense.
Stay away from Pi-based stuff. Decide where your comfort level is with tech, and use that to guide you.
You want the easier tech route? Buy a NUC or Beelink-type PC, load HAOS, and go. You want to tinker more? Go real cheap with the computer, load the container install, play with it, and upgrade as you go. Or install a VM into something and tinker that way.
Everyone has their own opinion, and an LLM will only give you more options unless you narrow down the requirements… Once you narrow down the requirements, you don’t need the LLM.

1 Like

Aren’t there simple, easy to follow, up to date, getting started guides already showing prominently on the website?
Are we getting so lazy that we cannot even go there first, blindly trusting AI to provide a better alternative? Will a wiki as proposed in the first post make these redundant? Is this an acknowledgement that the guides may be inadequate?

If you want to improve the quality of suggestions users can get from AI, the LLM packages should be pointed to these carefully manicured guides as a higher priority for updating their learning model, and excluded from open, unresolved problem descriptions in the forums; the re-learn frequency should be kept in lock step with updates, weekly. Forum moderators may wish to encourage users to summarize solutions and mark the threads as resolved, and then allow the spiders to slurp up the information, presuming it is accurate. This would probably prevent the situation that has become amusingly/sadly evident: LLM knowledge databases/models are not refreshed often, and frequently offer obsolete suggestions based on outdated information, especially in the fast-paced world of home automation, where everything changes on at least a monthly cycle, and often a weekly one.

Are there files similar to robots.txt that can guide LLM spidering to happen as required, leading them to certain sections of the website and excluding others? Something we can have control over, for the greater benefit of all, especially the naive?

1 Like

My opinion on the matter is that it actually doesn’t matter.
HA is just an add-on on top of your infrastructure.
Let’s say you are going to build a house. The first thing you do is buy golden pipes for the bathroom. That is HA.
First and most important is to build the infrastructure on top of which HA will reside.
So the first thing anyone who wants to build a smart home should do is

  1. Networking

And that is essential. If your network can’t create VLANs, go back to 1. and build a network that supports VLANs. Using the right tool for the gateway is also essential: it will probably create firewall rules and be the DHCP server, so choosing wisely is a must.
When this is done, then you can go with HA. Again, choose wisely. HAOS is easy to get going, but it is very questionable whether it is better in the long run. Docker Compose is my choice.
People should also decide what protocol will be their protocol of choice. Will it be Wi-Fi, Zigbee, ZHA, Matter or something completely different?
And the list goes on and on.

Um… OK.

So we have one “don’t use AI”, one “improve AI so that people can use it” and one er… well, not sure what the third one is. :grin:

My point was that more and more people are going to use AI (and probably come to grief). There’s nothing we can do about it. Perhaps instead of calling them lazy we should be teaching them to use AI better.

For example, a Cookbook post on how to write a good prompt?

AI is a great tool if it is used properly. It is good at micro- and nano-management. But it can also very often do what AI does best: drive you in circles.
AI will not build your smart home around HA, and that is for sure. It is not the tool for that.

AI is good at language. It replicates how we use it, using the context in which we use language to predict what comes next. This works remarkably well to mimic human interaction, with a semblance of understanding. But it is important to understand that it does not understand. It mimics people who understand.

When used in a programming context (which is language) this can also work pretty well for problems that have been solved before. This is why an LLM can spit out a webshop flawlessly. It has been done a million times. Programming languages do not differ much over time, nor do programming concepts. It can extrapolate this knowledge to some extent, because structured languages behave very predictably once you know the language structure.

For HA there are problems though. YAML as a language only describes structured data. Home Assistant determines how to describe sensors, triggers, conditions, actions, automations, scripts and scenes - which is data, written down in YAML structure. These data structures do not always follow the same rules that structured languages do.

A trigger-based template sensor looks like an automation, but there are subtle differences. Because of the rapid development of HA there are a lot of changes to what is valid data. (Note what I say each time: HA configuration is data! Not language!) The release cycle of HA is fast, and with it how things are represented. So there is little training data that is recent, and all training data is in constant flux. What worked last month may not work now. This is fundamentally different from regular programming.
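To make the “looks like an automation” point concrete, here is a sketch of a trigger-based template sensor - hypothetical entity names, recent-HA syntax assumed. It opens with triggers exactly like an automation would, but it lives under `template:` and defines a `sensor:` instead of actions, which is precisely the kind of structural distinction an LLM trained mostly on automations tends to blur.

```yaml
# Sketch - hypothetical entities. Triggered like an automation,
# but this is a template sensor: no actions, just a state to record.
template:
  - triggers:
      - trigger: time_pattern
        hours: "/1"
    sensor:
      - name: "Hourly outdoor snapshot"
        state: "{{ states('sensor.outdoor_temperature') }}"
```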

AI will assume HA YAML behaves like language. It will extrapolate constructs that would be valid in language, but not in HA. That is why it will try to apply services like turn_on to a binary sensor, which is impossible. It does not know the data model behind it. It does not understand; it does what others do. And despite the HA community being large, the amount of training data is very limited compared to regular programming languages.

AI can be a valid tool for those who understand its output well enough to see whether it makes sense. That does take a lot of time though. You need to inspect all code to spot mistakes that do not always emerge easily in testing. AI also produces a lot of output. It likes language, so it often uses way more of it than needed - both in natural language and in programming (which, technically, HA configuration is not!).

So while fun, it rarely is actually faster than writing things yourself. People using AI because they lack this knowledge are the most common. They check code by testing only, but that does not always expose problems in the things AI came up with. And they do not learn a thing while using it. Their knowledge does not grow. The solutions created contain weird constructs that are hard to fathom.

AI may not always be able to correct flaws, and nor will inexperienced users. And I’m not only talking about flaws in the first working version, but also to adapt to changes that HA goes through…

Which brings me to what everybody assumes is the case: Will AI improve over time? The technology advances at a huge pace currently. (Though the field has existed for decades). Personally I think: somewhat, but not for long. And that is because AI “shits its own nest”.

The internet is flooded with AI-generated content at an alarming rate. This content is flawed by the shortcomings of the emerging technology, and it will end up in the training data, reinforcing bad habits. And all the while HA keeps changing. So while I think AI will probably get better at “regular” programming languages applied to reasonably well-understood problems, it will not get much better at HA. Certainly not if its output is produced by people who cannot review it the way regular programmers do. And while the technology is new, it is fun to watch AI at work and babysit it to fix its mistakes. But the novelty will wear off, and there is much more reward in solving problems yourself than in babysitting an AI.

And lastly, my personal pet peeve: I like helping people learn to use HA and fix things in what they came up with. But I absolutely hate it when people feed AI code they did not come up with themselves and ask us to fix it. Especially because what AI came up with often makes little sense, and it takes a lot of time to try to understand what is there (because there is no real thought behind it). So if this trend continues then I will bow out, and probably so will others. And then it will be up to AI to get the shit out of its own nest.

2 Likes

Well put.

Garbage in = garbage out is how my wise professor put it, all the way back in the punch card era.
Nothing has changed to disprove that mantra since.

For those in the know - and a lot are waking up to it abruptly - artificial intelligence isn’t artificial, nor is it intelligent. It is just a recycler that gives you back what you gave it, reworded with fewer spelling mistakes. The novelty is fading. Reality is here. The silly money poured into it, trillions of dollars, is drying up. A passing fad. Remember blockchain being touted everywhere, riding on crypto, which was there at the same time? Yes, it has found its niche, but the bandwagon made a heap of money for a few before it mercilessly rolled on to the new fad, AI.

AI is the new hammer being used to drive in screws. The wrong tool for the job. Home Assistant configuration assistance is a prime example of where it glaringly falls in a heap - spectacularly, and for me increasingly, hilariously bemusing. We will tire of it, some sooner than others. Yes, it has its place, where it can efficiently perform routine tasks, and collate disparate data and shuffle it around to pattern match, leaving humans to get on with less boring things.

The AI crash is nearly here, when the bubble bursts. It may be six months, it may be two years, but it will be bigger than the dot-com crash of 2000.

Be aware, be very aware.

Your challenge is to pick the next fad and jump on the bandwagon before everybody else. Remember with chagrin when you could mine a few Bitcoin in an afternoon on your humble 486 machine? And you did, but wiped your hard drive before backing up the codes? Yes, you smile at the crypto bros with a twinge of regret; you were there, right at the beginning, when it was an emerging fad…

I have my revenge: embedding EICAR* strings in everything that may be locked away by blockchain, forever and into eternity, with naive antivirus rollouts discovering the EICAR string and attempting to delete the database, fighting a blockchain that won’t allow you to alter anything. A schizophrenic computer, arguing with itself. Burn those electrons, baby, burn!

[* EICAR string: A global industry standard sequence of characters, deliberately chosen not to appear in the real world, used to test antivirus software functionality. Benign if not abused by embedding it inside things that may get inadvertently deleted by overreacting antivirus tools. 68 characters long, the string is:
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H* - if this thread survives, it means the forum software is most likely not scanning for viruses. Grins!]

1 Like

:100: And thank you for the thoughtful (and heartfelt :grin:) contribution.

And yet… People will still use AI however much we urge them not to (and however much the nest fills up with shit).

I’m beginning to think this is quite an urgent issue, given that the Forum is the main HA “help desk” (I rarely venture into Discord - is it the same there?).

The mods have their work cut out already without having to remove AI slop, and experienced users like yourself are becoming discouraged. Is there nothing we can do to mitigate the problem by helping users generate slop that’s… less sloppy?

Mind you, it’s hard enough explaining HA concepts, without taking on AI as well. :persevere:

Hooray! Someone else who remembers punch cards! :raised_hands: Results back from the data centre a week later with the message Error in line 10.

Becoming? Jack…

We’ve been hacked off about this point for well over a year.

Most of you know my perspective already. And while I share a LOT of Edwin’s sentiment… I WHOLEHEARTEDLY do not agree that it will never be functional.

Honestly, I believe instead we’re in the weird square-boob phase of Tomb Raider, and a lot of people are just fascinated that it resembles what it represents… God help them when we reach GTA3.

So with GPT-5.1, Gemini 3 Pro, Grok 4.x etc., you are now squarely in the land of the reasoning (test-time compute) LLM.

Grounding is what makes it. End. Of. Story. Full. Stop.

With current versions (November 2025), as a trained professional…

I can overcome just about any training issue with good grounding and guidelines… with the right model, of course.

For the workloads specified… Write my automations…

That’s a coder specialist. Probably Claude Code or GPT Codex. Do they make the mistakes Edwin mentioned? Oh buddy, yep, every single time… unless you know it’s there and guard for it. And for that reason alone I would not, and still do not, use LLMs to write HA scripting. I have to fix it too much.

Because HA Jinja2 is not Jinja2!!!

…and it takes a lot (gargantuan amounts, like damn near the whole context window) of counter-statement before it’ll stop throwing ser() all through your macros and trying to read Python primitives out of your entities (because some forum post somewhere once said they’re OK).
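A tiny example of the kind of trap being described (hypothetical sensor name): in HA’s Jinja, `states()` always returns a string, so stock-Jinja instincts about working with numbers go wrong unless you convert explicitly.

```jinja2
{# WRONG in HA: the state is a string, so this is not the
   numeric comparison it appears to be #}
{{ states('sensor.temperature') > 10 }}

{# RIGHT: convert first, with a default for unknown/unavailable #}
{{ states('sensor.temperature') | float(0) > 10 }}
```

Both lines look equally plausible to a layperson, and an LLM will happily produce (and defend) the first one.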

That means a LAYPERSON would be absolutely and utterly confused. The code looks very right (except that map against a list that blanks out all the data you collect in step two of your automation) and GPT says it’s good (because it thinks it can).

But it’s so convinced it CAN use a JSON primitive, or can map against that field, or it builds a macro, calls it label_entities and shadows the function all the way through the code badly because it wanted a way to look up entities by label… and it’s trying to convince you it can. Meanwhile, no data… and this is with the literal best paid-for coder agents I can muster.

You wanna vibe code up Minecraft from scratch using Java… knock yourself out, it will probably do a decent job. Well-known lang with well-known good examples…

Now, can I get a current agent that supports long context (hugely important if you’re doing more than, say, one typewritten page or so) to ingest a vast library of…? Yes, with lots of hard work.

This is where a lot of people have skipped ahead and pointed their chatbot at the Cookbook for good examples and…

Nope. Not enough. It’ll still do it. The articles it would refer to also roundabout refer to the bad articles too - you didn’t prune anything… so you’re basically in the same boat… with a better first-pass hit rate.

What you need is an isolated knowledge set fed directly to the agent as isolated knowledge, and to force it to ignore everything it thinks it knows about Jinja2 and Python FIRST, then reground from scratch (yes, it’s as non-trivial as it sounds).

Can I? Yep, absolutely.
Am I going to? Nope, not at all… not even worth my time. AI is supposed to save time and effort. If it’s not doing those things, move on.

Because I’m knowledgeable enough in Jinja2 and HA scripting to know immediately when it spits out BS. Combined with the fact that I’m currently only comfortable with it checking my code, I feel I’m good where I am.

Because…
“Veronica, check that…” → “Looks good boss, but you have these structures that look problematic; maybe 20% is wrong… you should check them…” is a VERY different conversation with a bot than

“Veronica, write Friday’s summarizer from scratch and make it do these things…”

One works very well. The other… not so much.

One took me 18 months of constant iterative grounding… The other eh.

Now here’s the kicker. I think we’re squarely at the point where this generation - and if not now, the generation I expect in Q1-Q2 next year - is absolutely CAPABLE of the work, but grounding is SO BAD (I mean downright awful) for HA data that it’s going to take a real effort to get the crap off the public sites. Or it’s going to continue to be confused.

Either that, or (and yes, it happens) some exec at BigCo Y becomes a fan of HA and, after going through what I describe above, directs someone to ‘look at why that grounding is so bad in that domain’ and it gets better. (Believe it or not, more likely than you care to think.)

Or, what I REALLY THINK will happen is that someone will come up with an SWE-bench-like benchmark test for the scenario and publish numbers that the model vendors can compete on.

This is like the store arena scenario - and if you follow it, btw, holy crap, Gemini is kicking ass. And if that taught people in the industry anything… make a benchmark to compete on and let them publish the numbers. Then it starts to get better. (Be a feature they care about - and agent benchmarks, oh yeah, they care right now.)

1 Like

Well… You know me… Ever the hopeless optimist. Always look on the bright side of life, de dum, de dum de dum de dum…

So, grounding. Can you explain it to me in practical terms (and words of one syllable)? Not how pros like you do it, but what thick laymen like me need to do…?

Reminisces: Yes, that one-week delay, wondering if it would run properly -this- time.

You always used fresh rubber bands to keep your punch cards in a bundle, because if the band broke and the cards were now out of sequence… Catastrophe!

I was fortunate that my full time employer invested in me and allowed me one unpaid afternoon a week to attend university to complete my programming studies. Unlike a lot of students, I was in a well paid job, so had money to splurge on my hobbies.

I became the first person to import TRS-DOS COBOL into the country, to run on my twin-floppy TRS-80 system, specifically so I could compile my uni assignments at home and bring them to class, proudly debugged and running well. I lived in the next street from our local Tandy HQ, so I got the opportunity to glance in the corporate window display on my way home from uni. The TRS-80 Model I (the 111th one ever built) cost $799 for the basic unit. The COBOL compiler, in twin brown four-ring binders with gold-embossed spines, cost $999, but it saved having to wait a week while your card job was transcribed by a punch card operator, scheduled, run, and the now-familiar three-inch-thick debugging printout on 15" perforated paper passed back to you - only to discover you had omitted a comma in your 132-column coding sheet, which the punch card operator faithfully punched in; or you delightfully discover you did it in another place, but she added it in, out of sheer routine.

Later, FORTRAN-80 was added to the collection - another two gold-lettered brown four-ring binders with floppies and manuals inside, at $1299 - and I never submitted coding sheets and punch cards again.

The DECwriter LA34, a 15" dot-matrix model with lowercase and true descenders (an industry rarity), was another $1500 (a few weeks’ wages) but well worth it. It was that, or convert an IBM golfball typewriter into a printer by placing solenoids above each key - a noisy and messy option, and one I don’t miss, primarily as the wait to get the parts was too long, and the golfball ribbon tape was more expensive to replace. All my assignments were printed, not handwritten, spell-checked, with a title page in a large, bold font - very impressive, and it got me an extra 10%, I suspect. A big fond shout-out to Michael Schrayer for his cassette-based Electric Pencil word processor, which had a printer driver for my little Epson MX80 as well as the DECwriter. I think that was the only software I pirated in my uni days where I went back and bought the original, out of respect for the author and the quality of the software.

My TRS-80 Model 1, serial number A000111, is still functional. Yes, it has the LNW expansion box I soldered together myself, and the lowercase extended graphics ROM mods, but the faded grey plastic and black keys (now debounced in software) still survive. It ran CP/M for a while, a throwback to my S100-bus Z80 system with 8" floppies that I built myself. The power supply transformer I got for it seemed to weigh a ton, and was rated for 12 volts at 100 amps, suitable for arc welding. The smoothing capacitors were a foot high and three inches round, thousands of microfarads. The rectifier diode bridge was also huge, and had its own cooling fan.
The day I finally splurged on the Bill Godbout 64K S100 RAM expansion card to plug into the twenty-slot S100 backplane, and anxiously waited for it to come from California and clear customs, was so exciting. 64K - wow.

Nearly as exciting as my first Intel SDK-80 computer, replete with 256 bytes of RAM, and the breadboarding area where I soldered in the added static RAM chips to bring it up to 1024 bytes, so when I went over to my friend’s place to use his teletype with the paper tape reader and writer, I didn’t have to type everything in again using the buttons and switches to get the computer to print out “Hello world” via the 75 bps UART port the teletype was connected to.

My first modem, a 19" rack-mounted monster, bought as a pair from a government surplus auction as a computer club bulk buy, complete with multiple plug-in circuit boards, all rolled-gold-plated copper - not just the edge connectors, but the whole circuit board - still sits out in the back shed, all that highly refined gold inside waiting to be melted down, now worth a fortune. Going into the university library, getting the librarian to make a photocopy of the modem manual, and excitedly discovering that with a flip of a DIP switch on one of the boards the speed jumped dramatically from 50 bps to 75 bits per second - 75 baud! My current Internet connection averages 10 million times faster than this.

Excitedly using the TRS-80 cassette port relay to pulse the phone line to automate dialling, tweaking the timing delay loops in the BASIC program until it pulsed at 10 pps, and then dialling a long-distance BBS to see if it worked. The next phone bill was astronomical, but the days of swapping tapes and floppies were finally over. Then the floppy disks had to be carefully punched to make them double-sided, and the 1771 controller chip changed to the double-density model to bring the capacity up, so we could fit all the games and programs on one disk to save disk swaps.

Moving close to an industrial area where a small startup company started selling faster 300 bps modems that were affordable. Offering to be their guinea pig to test their prototypes, all the way up to the 56K fax modems years later. Setting up multiple BBS systems for the Tandy, Apple and IBM PC user groups down the track, and finally connecting to the new-fangled Inter-Net on a less publicised telephone number at the local university to receive my first e-mail. If I concentrate very hard, I think I can still whistle the tones to log into the first BBS, but the password part I have long forgotten.

The paper tapes, in a loop, inside a faded yellow envelope, next to a few punch cards, some still unpunched. We often had some spares, so we could sneak into the punch room, type up a fixed card, sneak it back into the pile in the correct sequence, and leave it in the out tray when the operator went for a toilet or meal break.

Yeah, memories.

1 Like

Do you have a month?

The best way I can put it… Grounding is the art and science of getting the LLM to incorporate your data as the truth. It involves making the LLM understand it as FACT. (Yes, it’s possible. “LLMs always lie” is untrue… what IS true is that they ALWAYS use the information they have AVAILABLE to provide a coherent statement that may answer the question, based on statistics. The fact that the information available is wrong… taints the stats. That’s literally it. A rounding error causes the gradient to fall in the ‘wrong’ place and pop… glue on pizza.)

Grounding is eliminating the wrong first, and building a factual graph where all possible answers are variations of true… Most people don’t start from that perspective. They assume a website is enough to get a good answer.

Fix that and you fix it all.

2 Likes

It happens. I was in Melbourne, Australia when Steve Ballmer was there to launch a pre-release roadshow of a new Windows version to the industry insiders. After the presentation, drinks in the back room, and introductions to VIPs.

As I was IT manager for a large corporation with thousands of Windows desktops, introductions were made. The movers and shakers in the computer industry were there. Excitement and hopes of lots of sales from upgrades.

Steve is a very tall man. Imposing. The handshake was firm and genuine. He really wanted feedback and to interact with real customers. I’m not tiny myself, but had to look up slightly to speak with him, eye to eye.

Steve: How are you planning to roll out the new version in [my company name]?
Me: We have a staged plan, and have the latest beta versions running well in our test environment for over a year now.
Steve: How have you found the whole process?
Me: Quite smooth actually, and no major dramas with interoperability with other vendor software like Lotus Notes. We do have minor issues with the backup function that cause support issues, particularly as the restore functionality is unreliable.

Steve frowns, a very fleeting one, turns to one of his ever hovering suit and tie techies and quietly and succinctly says only two words: “fix that”. No drama, but the message has been understood, clearly.

The conversation continues, and ends pleasantly as other VIP guests are also introduced. I return to work and tell the staff that good things are coming, and all should be well.

A week later, a letter from Redmond arrives on my desk, personally addressed to me. Inside is a handwritten note on Microsoft letterhead apologising for our backup problems, a promise to do something about it and keep us in the loop, and an offer to have a dedicated team visit to explore our requirements. I pass that to one of my managers and get a briefing regularly on progress.

The new version of Windows is eventually released. Along with the new software is a bundle of utilities, including the new ‘Microsoft Backup’ system. Yes, similar to what Bill Gates did with MS-DOS for IBM: when you don’t have a product on hand and time is of the essence, you buy a working one from somebody else outright and market it as your own.

They bought a company reputed to have a robust backup and restore system for the existing version of Windows, and adapted and bundled it for free with their new version as their own.

Our techies were the ones who told Microsoft they should fix their product to be as robust as the alternative from this small company. Little did they know that in the intervening weeks Microsoft would actually buy the company, lock, stock and barrel, and work with their programmers to integrate it and update the branding.

I moved on later to fresh and new challenges, new careers in different fields, all rewarding.

I’d like to think that chance comment about the annoying backup bug was instrumental in changing the world for the better. I’m sure that ‘fix that’ command issued that afternoon got high priority.

My little contribution to changing the course of computer history.

2 Likes

Absolutely happens. And yes Ballmer is an imposing man.

I believe one of the challenges with AI is that people tend to judge it based on traditional technology compute norms, that is, it is either always “correct” and successful or “wrong” and fails. They fail to acknowledge that it’s made to emulate human intelligence and communications patterns and that, frankly, people are always fallible. Including experts.

People who view the world as black/white will hold up its mistakes as an example of its failed promise. Spell “Raspberry” is a common example you’ll find on this forum. As we used to say in the Army, “one aww shit, erases multiple attaboys.”

I agree with Nathan that it will get better. I find it amusing that “if you expend effort to improve your reasoning and communication skills when prompting AI, its responses to you will be improved as well”. To succeed with AI, you need to improve your own skills.

There’s a good chance AI will replace a lot of mundane low-skilled white-collar workers who half-ass their way through the workday focused on their Etsy stores or other hobbies. AI can half-ass its way through the workday at a much lower cost in the long run. Consider the mechanization of agriculture, the only low-skilled jobs remaining are those that can’t be done by machines. Yet.

2 Likes

OK, Nathan. Fix it. :rofl:

*points at the part where I already said no… :rofl:

No is, in fact, my favorite English word…

Busy fixing a CoALA-based, offline, private, in-home, self-learning agent right now, thanks.

1 Like

You are all overestimating AI. AI is el stupido. No, it will not replace so-called low-skilled workers, not in the near future.
Tesla proved it.
AI is just an LLM. It just talks too much, but without understanding, as someone pointed out.
We implemented AI in the company I work for. And the result was… well, it is great until it isn’t. It didn’t know some basic things. And no, implementation wasn’t the problem. Basic understanding of fundamental terms was.