Not so much a question but more a comment on my recent experience.
I used to be a coder, back in the day. Like really, really back in the day. I used to code in Assembler, and I drifted out of coding around the time that Object Oriented languages were becoming mainstream.
When I started tinkering with Home Assistant it was quite a curve, as YAML/Jinja “does things differently” from the way that I was taught. Anyone who is used to the older languages can probably relate, but I digress.
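To give a sense of what I mean by “differently”: instead of writing procedural logic, in Home Assistant you declare triggers, conditions, and actions in YAML, with Jinja expressions embedded in strings where you need runtime logic. A hypothetical example (the entity names here are made up for illustration):

```yaml
# Declarative automation: structure is YAML, runtime logic is inline Jinja
automation:
  - alias: "Hallway light at night"
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_motion
        to: "on"
    condition:
      # A Jinja template evaluated when the trigger fires --
      # no loops or procedures, just an inline expression
      - condition: template
        value_template: "{{ now().hour >= 22 or now().hour < 6 }}"
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway
```

Coming from Assembler, where you spell out every step, this declare-it-and-let-the-engine-run-it model takes some getting used to.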
A little while back I saw a demo of someone using an LLM to write basic code, and I fed it one of my scripts that I was having difficulty with. It found the problem and fixed my code almost instantly (it was a rookie mistake), and I’ve been using it on and off ever since. Mostly for times when I need to do something but am missing something simple in the syntax of how to do it.
I’ve been using it as a cross between a debugging engine and a wiki search, and steadily putting in more natural language asking it how to do something and less code asking for how to fix it.
The other day I realized that I’d just completed a really significant capability upgrade on my HA setup that does things that HA wasn’t designed to do, and I’ve got no idea how about 60% of it actually functions, because I’ve been becoming more and more reliant on the AI to do the code. I’d mostly stopped reading the code or the explanations, and was just copying and pasting it in place, and … it just did what I needed it to do. So I didn’t bother reading how it did it.
On one hand, I now have a system that’s significantly more complex and functional than anything I could have written myself, built in a fraction of the time it would have taken me to write a much more basic version. It runs, and runs very smoothly.
It’s well coded, not sloppy or patched together, and it uses capabilities that I probably wouldn’t have realized existed without some serious background reading. I’ve seen some horror stories with other languages or environments, and heard about AI churning out badly optimized code for low level systems, but for a high level environment like YAML, I’m getting really good outputs.
On the other hand, I think that I’ve become a worse coder for it, because I’ve forgotten a whole bunch of things that I knew, and I’ve effectively just been telling someone else what I want and getting them to do it for me. Like I was commissioning software rather than writing it.
I’m not sure how to feel about this.
I’m absolutely against the dumbing down of systems. I started out as a tinkerer and then became a professional programmer, and without that early tinkering to see how things functioned and what they could do, I’d probably have gotten bored and done something else. But having an AI do the heavy lifting just makes things so much more accessible, which I guess is just another way of saying easier.
I’m both excited about the future possibilities, and terrified that people who might otherwise have become skilled coders will become dependent on tools like this and just copy and paste rather than figuring things out.
There will always be tinkerers and coders and makers, but with tools like AI there will probably be fewer than ever.
These are my thoughts, what are yours?
