I’ve been using ChatGPT to generate YAML for some complex automations to optimise my SolaX inverter and home battery set-up.
Over the last 2 months, I’ve learnt a lot about ChatGPT - what it’s good at, and where it regularly fails.
I thought that I would start a thread to discuss how I’m using it.
Edit:
Now on GitHub: GitHub - UKMedia/ChatGPT-HA-Docs: Describes my approach at using ChatGPT within a Home Assistant environment
and my SolaX solution is an example of the code generated by ChatGPT: GitHub - UKMedia/SolaX-Automation-in-Home-Assistant
How I’m Using ChatGPT to Build a Rock-Solid SolaX-Based Energy Setup in Home Assistant
1. Some Context — What’s This Project All About?
I’m running a SolaX-based hybrid solar system — two inverters (master/slave), two battery banks, solar split between house and garage, with dynamic grid export limits and a bunch of smart devices (Smart Home, heat pump, etc.).
Managing all of that in Home Assistant is doable — but I’ve been retired for 12 years, and my last role was as a Programme Director specialising in recovering failing strategic IT programmes, so it’s been a long time since I last did any coding!
I had a number of fairly complex automations that dynamically change the inverter settings, with the goals of:
- Minimising grid charging,
- Minimising PV clipping, and
- Maximising export revenue.
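To give a flavour of the pattern these automations follow, here’s a minimal sketch of one: raising the inverter’s export cap when the export tariff is high and the battery has enough charge. All entity IDs and thresholds below are hypothetical placeholders, not the actual ones from my setup:

```yaml
# Illustrative sketch only - all entity IDs and values are hypothetical placeholders.
alias: "DAI - Raise export limit during high export rates"
description: "Lift the inverter export cap when the export tariff is attractive."
trigger:
  - platform: numeric_state
    entity_id: sensor.export_tariff_rate        # hypothetical tariff sensor
    above: 15                                   # e.g. pence per kWh
condition:
  - condition: numeric_state
    entity_id: sensor.battery_state_of_charge   # hypothetical SoC sensor
    above: 50
action:
  - service: number.set_value
    target:
      entity_id: number.solax_export_control_user_limit   # hypothetical number entity
    data:
      value: 5000                               # watts
mode: single
```

The real automations add the reverse leg (dropping the cap again) plus logbook entries and guards, which is where the line counts start to climb.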
But I wanted to try to incorporate Generative AI into reviewing these automations, so that the system learnt from errors between forecasts and actuals and automatically adjusted the rules for future calculations.
That’s where the “DAI + RBC” approach comes in. DAI = Distributed Automation Infrastructure. RBC = Rule-Based Coordination. It’s a fancy way of saying I treat Home Assistant like a real control system — with historical error correction.
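The error-correction part boils down to comparing forecast against actual and feeding the ratio back into the next cycle’s calculations. A minimal sketch of that idea as a Home Assistant template sensor (again, the entity IDs are hypothetical placeholders):

```yaml
# Illustrative sketch only - entity IDs are hypothetical placeholders.
template:
  - sensor:
      - name: "PV Forecast Error Ratio"
        unique_id: pv_forecast_error_ratio
        # Ratio of actual generation to forecast; 1.0 means the forecast was spot on.
        # Downstream automations can use this to scale tomorrow's forecast.
        state: >
          {% set forecast = states('sensor.solar_forecast_today') | float(0) %}
          {% set actual = states('sensor.solar_generation_today') | float(0) %}
          {{ (actual / forecast) | round(3) if forecast > 0 else 1.0 }}
```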
ChatGPT was brilliant at reading a requirements document and generating the code in seconds, complete with an embedded test harness, standard logbook entries, etc. It did, however, have a habit of embellishing the code with every bell and whistle it could think of, and some of these automations ran to as much as 500 lines.
But where it really fell down was in bug fixing and code maintenance. Instead of editing code, it would regenerate the entire code block. The result was that it would randomly remove logic and functionality whilst carrying out minor code edits. And even if I asked for a logic audit between versions, it would confirm that all the logic remained. So I reverted to work mode and introduced a governance layer, with High Level and Detailed Design docs, coding standards, interaction rules, backups/archiving of previous versions, change/defect tracking, and logic that can’t just break silently.
2. How I’m Using ChatGPT in This Setup
I’m not just asking ChatGPT random YAML questions. Instead, I’ve given it a set of governance rules and documents — like templates for how I want code structured, how it should respond, and defined workflows for New Functionality, Change Requests, Defect Reports, and Documentation Changes.
Here’s what it helps with:
- Writing clean automations using my naming conventions
- Generating scripts that follow my patterns
- Checking if a request is even possible (before wasting time)
- Drafting and formatting change control records
- Helping tweak dashboards without breaking them
So it’s become a structured, architecture-driven, CMMI-based development environment - which it follows most of the time, though I still have to concentrate hard to make sure it doesn’t deviate.
3. What’s Working — And What Still Trips It Up
Wins
- YAML output is really solid when it follows the framework
- Automations and scripts are generated fast and clean - average time less than 30 secs
- I can track what changed and why — super helpful months later
- It stops me from chasing dead ends by doing feasibility checks first
Rough Edges
- Sometimes on new chats, it forgets to load the governance files properly
- It still occasionally guesses sensor names instead of pulling from real YAML
- It outputs more than one YAML block per card/script unless I remind it not to
- It doesn’t persist context across sessions — so I have to re-upload docs
- The concept of Projects isn’t fully implemented yet - even if I attach the governance docs to the ChatGPT project, I still have to attach them again to new chats and include the following chat-initialising prompt:
Developer: NEW CHAT CONTEXT – Home Assistant / DAI + RBC
This chat operates under the DAI / RBC governance framework and is initiated for projects governed by documented interaction, design, and coding standards.
To initialise this chat context efficiently, the following current live documents have been attached:
• DAI RBC High Level Design v1.1 (live).docx
• Home Assistant Guidelines DAI RBC v1.2 (live).docx
• ChatGPT Interaction Guidelines v1.2 (live).docx
• CR_DR_Log_20251102.docx
ChatGPT must immediately ingest and activate these documents as binding context upon upload. All behaviour, including entity use, code formatting, YAML output scope, and feasibility declarations, must comply without user enforcement. No assumptions or speculative actions are permitted.
If any attachments are missing, ChatGPT must prompt the user for the required documents before proceeding.
ChatGPT must always use the latest live versions; if newer documents are uploaded, they supersede prior baselines.
This governance context remains active for the duration of the project unless explicitly reset by the Owner.
Advisor Gate (Feasibility & Efficiency Control)
Purpose: To ensure ChatGPT (in the Advisor role) prevents wasted activity and saves user time by validating feasibility before executing any task or generating documents.
Process:
Feasibility Review: Prior to any requested work item, ChatGPT must verify that the task is both technically and procedurally feasible within current model capabilities and governance constraints.
Advisory Declaration:
• If the request is fully achievable: proceed as normal.
• If the request is partially or wholly unachievable: ChatGPT must respond with:
“Stop – this request cannot be delivered as stated because [reason]. The nearest compliant alternative is [alternative].”
User Confirmation: ChatGPT must not continue until explicit user confirmation is received on how to proceed.
Audit: Each feasibility declaration is recorded as part of the audit trail and serves as an advisory checkpoint, under the No-Surprises Protocol.
Outcome: This gate enforces the “Don’t waste my time” directive and ensures traceability, maintaining CMMI Level 3 discipline across all interactions.
All future prompts must comply with these governance documents – please confirm.
Governance Continuity
These initialization parameters remain in force for the duration of this project context unless explicitly reset by the Owner.
ChatGPT must always operate using the latest live document versions and must re-verify feasibility whenever any new file or governance update is introduced.
Upon completion of the Feasibility Review, ChatGPT reverts to its standard advisory-execution role under the Two-Phase Code-Change Gate.
Initialization Timestamp: {{now}} (recorded for audit).
Still, these are manageable now that I know how to prompt around them.
4. The Governance Docs I Built to Make This Work
This probably sounds overkill, but these docs keep everything tidy and traceable:
- High Level Design – overall architecture (how SolaX, grid, and devices fit together)
- HA Guidelines – how YAML must be written (Visual Editor safe, naming, etc.)
- ChatGPT Interaction Rules – how it must respond, what’s allowed, what’s not
- Change Control Template – for tracking automation/script/dashboard edits
- Defect Log – records things like missed triggers, bad entity states, logic errors
- CR/DR Log – a master list of all changes and issues (approved or rejected)
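As an example of the kind of rule the HA Guidelines doc encodes: “Visual Editor safe” means avoiding YAML constructs the Home Assistant UI editor can’t represent, such as anchors/aliases or `!include`, and spelling conditions out in full. An illustrative (hypothetical) fragment:

```yaml
# Illustrative rule only - the entity ID is a hypothetical placeholder.
#
# NOT Visual-Editor-safe: YAML anchors/aliases can't be represented in the UI
# editor, so the automation drops into YAML-only editing:
#
#   condition: &battery_ok
#     - condition: numeric_state
#       entity_id: sensor.battery_soc
#       above: 20
#
# Visual-Editor-safe: spell each condition out in full, no anchors or !include:
condition:
  - condition: numeric_state
    entity_id: sensor.battery_soc
    above: 20
```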
Yes, I treat this like a dev project. But it means I can make big updates with confidence and roll back if something goes sideways.
5. Why Bother With All This?
This solution now incorporates around 30 automations, plus Blueprint scripts and over 100 helpers.
I have had many, many frustrating moments where tweaking the code base completely breaks it and I have to roll back and start again from an old baseline - proper code management is now incorporated too.
ChatGPT does bring productivity benefits, but it is like coding with a young teenager who has two PhDs and the attention span of a gnat!
If anyone’s curious, I’m happy to share prompt templates, governance doc samples, or how I set up the ChatGPT onboarding flow.
Would love to hear from others doing similar high-reliability HA setups with ChatGPT!