Has anyone created an AI Agent/Gem with *full* context and understanding of their personal HA setup?

I’ve found ChatGPT and, increasingly, Google Gemini incredibly powerful for brainstorming new automation ideas, crafting quick template sensors, making automations more efficient, etc.

I’ve been playing around with creating a Gemini Gem (Gemini’s version of “agents”) to be an HA expert that can help me with HA and make recommendations for how to use automations more effectively. Part of that is teaching the agent about its role, and during this step I discovered I can give my HA agent live access to Google Drive files to help it better understand its tasks. This got me thinking – what if I automate a process to parse as much detail about my HA setup as possible into a Google Doc or Sheet, and then share it with the agent?

Would my interactions with the agent become more meaningful and helpful if it understood my setup, without me having to explain specific details over and over? And since (in theory) the process is automated, every time I change my setup, the agent would nigh-immediately understand the changed context.

I’ve been using the agent to make recommendations about how to export the data in question to a Google Sheet, but I wanted to see if anyone is ahead of me in this experiment. Have you found it useful? A waste of time?

I’m progressing with ways to export lists of installed add-ons, integrations, and entities. I’m also trying to find ways to export Node-RED info so that the agent can have a look at how I’ve specifically automated my house.
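As a starting point for the entity export, here is a minimal sketch using Home Assistant’s REST API (`GET /api/states` with a long-lived access token, which you can create under your HA user profile). The hostname, token, and output filename are placeholders, and the chosen columns are just one guess at what an LLM would find useful:

```python
"""Export a snapshot of Home Assistant entities to a CSV file
(importable into a Google Sheet). The URL and token below are
placeholders for your own instance."""
import csv
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # placeholder hostname
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder token


def fetch_states(base_url: str, token: str) -> list[dict]:
    """Call GET /api/states and return the decoded JSON list."""
    req = urllib.request.Request(
        f"{base_url}/api/states",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def states_to_rows(states: list[dict]) -> list[dict]:
    """Flatten each state object to a few fields worth sharing with an LLM."""
    return [
        {
            "entity_id": s["entity_id"],
            "state": s["state"],
            "friendly_name": s.get("attributes", {}).get("friendly_name", ""),
            "device_class": s.get("attributes", {}).get("device_class", ""),
        }
        for s in states
    ]


def write_csv(rows: list[dict], path: str) -> None:
    """Write the flattened rows out as CSV with a header line."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["entity_id", "state", "friendly_name", "device_class"]
        )
        writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    write_csv(states_to_rows(fetch_states(HA_URL, TOKEN)), "ha_entities.csv")
```

Run on a schedule (cron, or an HA automation calling a shell command) so the exported sheet tracks your setup as it changes.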

Any best practices for making all of this work efficiently?

I’m fully aware this could be a meaningless waste of time, but it’s a fun experiment to test what these things are really capable of.

Here are a couple of threads:

@NathanCu has his own local AI (Friday), but the second thread is applicable to cloud AI.

I think this comes down to the amount of context in the prompt, but the second thread above describes a number of tools you can add - these are essentially scripts that the AI runs to gather information.


Thanks for your reply – I think the details you highlighted from the second post about how to parse info from HA will be helpful and worth looking into much deeper.

To clarify – I am not interested in using AI to help run my smart home. I’m not at a level of comfort with the tech where I’m ready to give away any level of control, even to a self-hosted solution. And frankly I don’t want to invest in the required hardware yet, either.

But what I would like to do is use AI/LLMs as a sort of home automation coach that advises me as I build out my smart home – an expert that knows everything about HA, home automation, and my own specific setup – so that I can bounce ideas off of it, have it help me craft automations, and get recommendations for how to achieve some of my HA goals. Then I take the output and deploy things the old-fashioned way, rather than have an LLM take the reins for me.

As far as I can tell this is different to what other people are talking about at this point, including in those two links you shared. But I do find it really interesting to read what people who are running their own LLMs are able to achieve, and I continue to keep an eye on that space – for me, that’ll probably be one day… but not today!

Do be careful. An LLM is likely to be a year or so out of date (ask it what its training cut-off date is), and as far as HA goes it will largely have been trained on posts in this forum – which means that about 60% of them are wrong as well.

An LLM has no way of distinguishing between a problem posted here and its solution - it’s just a statistical model of what words usually come next to each other. It is not an expert.

Problem? Getting the best solution from AI


I’m no LLM expert, but an HA configuration feels like a pretty niche data set.

And the actual HA configuration is likely to be obscured from the LLM at the input interface because the schema is not stable. (That’s a good thing: HA is under active development; it’s the rapid development in the home automation arena that is causing the instability, not HA.)

My suspicion/worry is that for every step that your LLM saves you, even if you accomplish your stated immediate goal, it sets you two steps back in ways that are only later apparent.


The Claude desktop client has access to a few handy extensions.

Set up file access to the Home Assistant config directory so it can read and edit YAML, logs, etc.

Then connect to a Home Assistant MCP server, and it has access to entities and history.
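For reference, Home Assistant has an official Model Context Protocol Server integration that exposes an SSE endpoint, and at the time of writing its docs show wiring Claude Desktop to it via the `mcp-proxy` helper, roughly as sketched below. The hostname and token are placeholders – check the current integration docs before copying, as this area is evolving quickly:

```json
{
  "mcpServers": {
    "Home Assistant": {
      "command": "mcp-proxy",
      "env": {
        "SSE_URL": "http://homeassistant.local:8123/mcp_server/sse",
        "API_ACCESS_TOKEN": "YOUR_LONG_LIVED_ACCESS_TOKEN"
      }
    }
  }
}
```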

It works quite well, but sometimes I don’t think it understands what access it has; I could probably fix this with a proper intro.

Yes, I uploaded all the folders and configuration files to a private repository and gave the Gem access to it, but it’s acting even crazier than usual. It even invents entities, despite having a way to find where they are defined and which ones are correct. So it’s not very useful.