Power Automate does this really well - dump in a heap of text about what you want and it automagically creates everything you need.
Home Assistant already knows what entities are in play, which rooms they're in, and everything about them - all we need now is a connection to some sort of LLM to generate the automation logic from a user's plain-text explanation.
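To make the idea concrete, here is a minimal sketch of what such a pipeline could look like: collect entity metadata, combine it with the user's description, and hand the result to an LLM. Everything here is hypothetical - the entity registry snapshot is invented and `llm` is a stand-in for whatever model you'd actually call, not a real Home Assistant or LLM API.

```python
# Hypothetical sketch only: entity names are made up, and the prompt is
# what you would send to some LLM to get back automation YAML.

def build_automation_prompt(entities, user_request):
    """Assemble an LLM prompt from entity metadata plus the user's description."""
    lines = [
        "You are generating a Home Assistant automation in YAML.",
        "Known entities:",
    ]
    for entity_id, info in sorted(entities.items()):
        lines.append(f"- {entity_id} (area: {info['area']}, domain: {info['domain']})")
    lines.append(f"User request: {user_request}")
    lines.append("Respond with valid automation YAML only.")
    return "\n".join(lines)

# Example registry snapshot (invented for illustration):
entities = {
    "light.kitchen": {"area": "kitchen", "domain": "light"},
    "binary_sensor.kitchen_motion": {"area": "kitchen", "domain": "binary_sensor"},
}

prompt = build_automation_prompt(
    entities, "Turn on the kitchen light when motion is detected"
)
print(prompt)
```

The hard part isn't the prompt assembly shown here - it's validating that whatever YAML comes back actually references real entities and legal triggers before loading it.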
This is 100% the final frontier of "home automation" - putting this feature in would be a generational improvement to the ecosystem.
I agree, this is the future. I would think this would require a heavy redesign and giving GPT access to all entities - which may be a privacy concern. Would a local LLM be powerful enough to handle this kind of workload? It would be a beta-like experience.
This is a MASSIVE undertaking. I won't say it isn't a great idea or that it wouldn't be awesome… But:
First, HA doesn't know as much as you think it does. (In this context, yes, it knows about its own entities and their state, but there's more to it than that. You mentioned Power Automate - consider how much data is actually sitting in the MS Graph and the SharePoint data behind it for that to work.)
Second, because of the nature of HA and its contributors, it would require maintaining a monthly-updated RAG (Retrieval-Augmented Generation) package until test-time compute and test-time training become commonplace. (Time and money, for something that, if you wait about 12-18 months, you probably won't need.)
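The RAG maintenance point above can be illustrated with a toy retrieval step: answers are only as good as the indexed snippets, so the corpus would need rebuilding every time HA and its integrations change. The snippets and the keyword-overlap scoring below are invented for illustration - a real system would embed the official docs rather than split words.

```python
# Toy illustration of why a RAG package needs upkeep: retrieval ranks
# indexed doc snippets against the query, so stale snippets mean wrong
# answers. Snippet text is invented; not a real HA docs index.

def retrieve(corpus, query, k=1):
    """Rank snippets by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

corpus = [
    "light.turn_on accepts brightness_pct and transition",
    "zone triggers fire when a tracked device enters or leaves an area",
    "sun triggers support offsets relative to sunrise and sunset",
]

hits = retrieve(corpus, "turn on the light with brightness and transition")
print(hits[0])
```

Every month that integrations add or rename services, that corpus drifts - which is exactly the recurring time-and-money cost described above.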
Imagine the amount of horsepower MS has put behind Power Automate. (Unlimited cash.)
Not saying it's not a good idea. But we (humans) are currently very bad at assuming "AI" in its current iteration can solve all things. To put it simply, it cannot, and it would require a LOT of expensive support code to make this work. (I mean a ton.) MS just happened to have the resources to build that, because they want (and frankly, need) consumers to want it. (It's the right call if you have unlimited funding.) If you're the Open Home Foundation or Nabu Casa, you have far better things to spend money and resources on when the target is moving incredibly fast.
Instead, if we're slightly patient, the AI models (post-reasoning models, probably ones that incorporate TTT) will advance enough to handle this without as much expensive support code. I'd put this on the back burner and re-ask the question in about 18 months…