That is fine if all you want is a very smart tool.
But that is kind of the point: it is only good as a constrained tool. If AI were a sniper, the prompter would still be the one aiming, spotting the target, checking the wind, and every now and then somehow managing to trip over its own shoelaces and ruin the shot.
So yes, that trait is desirable for a tool. It is just not the same thing as independent thought.
I’m sure you’ve probably played with some image/video generation models, so you know what I’m talking about here.
I was bored and had a Spotpear ball lying around. Google Gemini turned it into an oscilloscope within 10 minutes. If you can prompt it well and give it the info it needs (like the pinouts), it can code quickly without compiling 50 times! It also added gain control, etc. Oh, and three modes: Standard EQ, Radial EQ, and the Oscilloscope. The standard EQ looks just like an ’80s stack EQ meter.
I installed automatic1111 and found that it was most useful to me when I constrained it and limited its creativity. I had an idea in my head and wanted it to generate that for me.
An AGI might notice that there was a gap in the market for novels aimed at racists who wanted their ideas reinforced.
A human editor or publisher would know why this is a bad idea for society, and an LLM would be following prompts and constraints that wouldn’t lead it down this path.
The current generation of AI is still a route-following algorithm at heart, just a really sophisticated one. It is good at automating tasks and following paths, such as predictive speech that mimics natural language, or artificial languages such as code.
AGI and consciousness are arguably separate things from this, in the same way that organic life is different from a life simulation.
I used it to create a soil analysis program for my hydroponic system. It would have taken me weeks to hand-code the sensor analysis routines, half of that would have been unpicking typos and syntax problems, and it would have been very boring to do.
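The commenter doesn’t share their code, but the kind of sensor-analysis routine they describe is simple to sketch. Here is a minimal, hypothetical example: the sensor names, ranges, and thresholds below are illustrative placeholders, not taken from the original post.

```python
# Hypothetical sketch of a hydroponic sensor-analysis routine.
# All threshold values are illustrative, not from the original post.

def analyze_reading(ph: float, ec_ms_cm: float, water_temp_c: float) -> list[str]:
    """Return a list of warnings for one set of sensor readings."""
    warnings = []
    if not 5.5 <= ph <= 6.5:  # common hydroponic pH window (assumed)
        warnings.append(f"pH {ph:.1f} outside 5.5-6.5 range")
    if not 1.2 <= ec_ms_cm <= 2.4:  # electrical conductivity = nutrient strength
        warnings.append(f"EC {ec_ms_cm:.1f} mS/cm outside 1.2-2.4 range")
    if not 18.0 <= water_temp_c <= 24.0:
        warnings.append(f"water temp {water_temp_c:.1f} C outside 18-24 range")
    return warnings

print(analyze_reading(ph=6.0, ec_ms_cm=1.8, water_temp_c=21.0))  # []
print(analyze_reading(ph=7.2, ec_ms_cm=0.9, water_temp_c=21.0))
```

Trivial logic, but multiplied across many sensors and edge cases it is exactly the sort of tedious, typo-prone boilerplate that an LLM can churn out quickly.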
Hypothetically, maybe, but probably not the way you framed it.
The bigger point is that AGI is usually meant as broad, flexible intelligence across many domains, or a system that can learn new skills efficiently instead of just following narrow patterns. That is very different from today’s LLMs, which are still mainly prompt-driven pattern predictors.
So if something ever got close to AGI, the concern would not just be “it might write bad novels.” The real shift is that you are no longer talking about a tool that stays neatly on rails. You are talking about a much more adaptable system with broader initiative and a very different risk profile.
For instance, Iron Man’s Jarvis is an accurate depiction of AGI rather than today’s AI, and the risk becomes prominent when you start to put AI and AGI into robotics…
Let’s say ChatGPT is in a robot; a user can find exploits to make it do funny but dangerous things, like in this funny YouTube demonstration.
Now the danger of AGI in a robot is that it can quickly turn into the story of Ex Machina.
Maybe at some hypothetical time in the future, but right now - and for the foreseeable future - those rails will be a really long extension cord. The battery pack needed to power the Terminator for more than about 15 minutes is still unachievable, and if an AGI is giving us trouble we can simply cut the power to its data center, because the idea of an AI being “software” that exists on routers or telephones is also fiction.
There will also almost certainly be “rails” built into any AGI, just like there are in LLMs.
I struggled to find a statistical analysis AI that would give me straight numbers on some topics, because the human programmers had thought of some of the uncomfortable questions that people might ask and programmed them out. Or they added in bias to include or exclude certain information by default, even if the user wanted otherwise.
I use coding tools/chatbots all the time, for everything. I ask myself, “Is it bad to use chat to walk me through how to set something up?”
The thing is, I used to use Google the same way. Try to install Program X. Fails halfway through. Google the error message. Skim 3 different forums. Find what looks like a reasonable answer. Try it.
Now I do the same thing with chat. Google results are absolutely terrible now, and chat bots are faster.
Is there something missing from having to dig and find the answers myself vs having them spoon-fed to me? Yes. There is also something missing in no longer reading all the forum posts for context.
Claude can code Home Assistant YAML really well. I have fixed dashboards, alerts, and automations using Claude and Cursor.
I can write 500-line dashboards in YAML using card-mod in a matter of minutes with minimal prompting, and it’s 99% of what I wanted. This would have taken hours or days by hand.
It’s not always perfect; it screwed up some CSS and was never able to fix it. I finally noticed the actual reason the CSS was broken, changed my prompt, and it was immediately resolved. But I can’t write CSS at all, so the only way to do it was with Claude.
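For readers who haven’t used card-mod: it is a Home Assistant (HACS) extension that lets a dashboard card carry its own CSS. A minimal sketch of what such a card looks like follows; the entity ID and styling here are placeholders, not from the commenter’s actual dashboards.

```yaml
# Illustrative Home Assistant dashboard card styled with card-mod.
# Entity ID and CSS values are placeholders.
type: entities
title: Living Room
entities:
  - entity: sensor.living_room_temperature
card_mod:
  style: |
    ha-card {
      border-radius: 12px;
      background: rgba(0, 128, 0, 0.08);
    }
```

Multiply that `card_mod: style:` block across dozens of cards and a 500-line dashboard accumulates quickly, which is why having an LLM generate and tweak the boilerplate saves so much time.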
I do all my prompts in Claude/ Cursor/ Claude Code. I push it to a GitHub repo, and then pull the updated files to Home Assistant.
There is no way I would let it run against my production system without an easy way to roll back.
I think that - overall - a lot of the criticism I’ve seen of people using AI basically comes down to this: people who can code (or draw, or play an instrument, or whatever) feeling aggrieved that AI allows people who can’t to automate tasks that previously required years of practice and dedication.
The eventual output might not be perfect, but it significantly lowers the barrier to entry for a lot of tasks.
I remember the outcry when bands started using synthesisers with loop tracks, and especially when TV productions started using synthesisers instead of an orchestra. Sure, it wasn’t as good as the big multi-piece arrangements that movies had, but it was a big step up for TV shows. I think this is something similar.
As someone whose coding glory days are firmly behind them, I definitely appreciate the automation aspect, and being able to concentrate on the logic while the AI deals with the syntax of languages that I’m not familiar with, and not invested enough in to really master.
I think it rankles with people who have more of an emotional investment in something. Like my artist friends getting upset that I use Automatic1111 to make gig T-shirts, which I’m going to wear for a tour and then discard, and don’t want to spend hours on. Especially since nobody is really going to be paying much attention to them.
I’m personally looking forward to the day when I can get an AI to make throwaway video games. Like a Star Trek Holodeck adventure that gets played for a couple of hours and then forgotten about, but which can provide infinite variety based on how I happen to be feeling at that moment in time.
Computer, make me a short dungeon crawler with a visceral horror theme satirizing modern capitalism, set in 19th century London, for a co-operative party of four. Real time exploration, but turn based combat. 16-bit pixel art visuals. About 2 hours to completion. No supernatural elements.