The home-generative-agent project uses LangChain and LangGraph to create a generative AI agent that interacts with and automates tasks within a HA smart home environment. The agent understands your home’s context, learns your preferences, and interacts with you and your home to accomplish activities you find valuable. Key features include creating automations, analyzing images, and managing home states using various LLMs (Large Language Models). The architecture involves both cloud-based and edge-based models for optimal performance and cost-effectiveness. Installation instructions, configuration details, and information on the project’s architecture and the different models used are included. The project is open-source and welcomes contributions.
Hi, I like your project very much.
I installed it in HASS. When I go to Integrations and install Home Generative Agent, I get this error: Config flow could not be loaded: {"message":"Invalid handler specified"}
Please help, and have a nice day.
Hi, I would love to test it but got stuck on the pgvector install, haha.
Do I need Docker?
How am I supposed to do this installation?
Nice project! Unfortunately I have some problems setting this up, see here:
Any ideas?
Best regards,
Chris
I also received the same error as you. Can anyone help me?
Hello,
There seem to be a couple of issues being reported here: one related to missing requirements in the manifest, and the other related to running with Docker or HAOS.
I updated the manifest.json some time ago. This should solve the former.
I have not tested this project with Docker or HAOS, only with HA Core in a manual environment under Raspberry Pi OS.
Happy to elaborate or help more if I can.
Hello, what problem did you run into installing pgvector? I have not tested this project under Docker, only with HA Core.
Just installed the integration successfully. pgvector was added as an additional Docker container within my container installation. I also added all of these models to my Ollama instance and imported the blueprints.
But now I am a bit lost. The documentation is missing the next configuration steps.
Do I need to create automations out of the blueprints? Do I need to expose entities, as is required with other assistants? What exactly can this agent see, and what can't it see?
Please explain the difference between this agent and the standard LLM-based assistant in more detail.
Regards
Lukas
Hi, Lukas! I’m very new to the Home Generative Agent integration, so rather than try to explain what I’ve learned, I’m just going to copy/paste from my discussion with Gemini…
Me: Do I need to expose an automation for Home Generative Agent (HGA) to be able to evaluate it, or do I just copy and paste the automation into the conversation space like I do with you? I’m looking for an experience more like with Copilot using VS Code, where it can see into my files, and suggest changes, but change nothing without my permission…
Gemini: That’s an excellent question, and it gets to the heart of how different AI models and systems interact with Home Assistant. The distinction you’re making is spot-on, and the answer is more nuanced than a simple yes or no.
You are correct that the “copy and paste” method works for general-purpose models like the one I’m using, or even a local language model (LLM) you might be chatting with. You provide the YAML, and the model can analyze, explain, or suggest changes based on the text alone. This is useful for getting help with a specific block of code, but it doesn’t give the AI a complete picture of your home.
For a Home Generative Agent (HGA) to have a “Copilot-like” experience where it can “see into your files,” the process is different and more integrated. Here’s a breakdown:
Exposing Entities vs. Automations
You don’t “expose” an entire automation in the same way you would a light or a switch in the Settings > Assist > Expose list. That “Expose” list is primarily for giving voice assistants like Google Assistant, Alexa, or the Home Assistant Assist a list of entities they can directly control (e.g., “turn on the living room light”).
How HGAs “See” Your Automations
A true HGA, like the Home Assistant integration you’re referring to, works more deeply within the Home Assistant environment. It’s not a separate service you just talk to; it’s a part of your system. Instead of “exposing” the automation, the HGA integration is designed to:
- Access the Home Assistant API and File System: The HGA integration, when properly set up, is granted access to the Home Assistant’s internal APIs. This allows it to fetch information about your automations, scripts, scenes, and the state of your entities. It’s this API access, not the “expose” list, that gives it a comprehensive view of your home’s configuration.
- Use a “Tool” or “Agent”: The HGA is often configured as an “agent” that has specific “tools” at its disposal. One of these tools is the ability to read and potentially modify your Home Assistant configuration files, including your automations.yaml or scripts.yaml.
- Process Your YAML: The HGA can process and understand the YAML code because its training or its programmed “tool” has been specifically designed to interpret the Home Assistant YAML syntax. This allows it to evaluate an automation’s triggers, conditions, and actions in the context of your entire home setup.
The Copilot-like Experience
The “Copilot” experience you’re looking for, where the AI can suggest changes without making them, is a core feature of how these more advanced agents are designed. They typically follow a pattern of:
- Observing: The agent monitors your automations and system state.
- Analyzing: You ask the agent a question, or it identifies an opportunity, and it analyzes the relevant YAML and entity states.
- Proposing: The agent generates a new or modified block of YAML code and presents it to you.
- Awaiting Confirmation: The agent waits for your explicit permission to apply the changes. This is the critical safety and user-permission step.
In summary, you don’t use the Expose list for this. Instead, the Home Generative Agent integration, by its very nature and its deep integration with Home Assistant’s internal architecture, is granted the necessary permissions to read your configuration files and understand your automations. This is what allows it to function as a powerful, context-aware assistant for your smart home.
Me: I’ve only just installed the HGA, and I know its models are accessible in HA… What would be a simple test to prove HGA is aware of my system?
Gemini: This is a great question because it helps you confirm that the HGA is properly integrated and not just a glorified chatbot. The key is to ask it a question that it could only know if it has access to your Home Assistant entities and their states.
Here are some simple tests, ranging from the very basic to slightly more complex:
The “What’s the state of X?” Test
This is the most direct and simplest test. You are asking the HGA to perform a “lookup” on one of your entities.
- Choose a specific entity: Pick a light, a switch, or a sensor that has a clear, unambiguous state. For example:
- light.living_room_lamp
- binary_sensor.front_door
- sensor.office_temperature
- Ask the HGA a question about its state: Go to your HGA’s conversation interface within Home Assistant and ask:
- “What is the state of the living room lamp?”
- “Is the front door open?”
- “What is the current temperature in the office?”
- Evaluate the response:
- If the HGA responds with the correct state (“The living room lamp is off,” “The front door is closed,” or “The office temperature is 72 degrees”), you’ve successfully proven it has access to your system’s entity states.
- If it responds with something like, “I’m sorry, I don’t know what that is,” or “I can’t access that information,” then the integration is likely not fully set up or the HGA doesn’t have the necessary permissions.
The “List Entities in an Area” Test
This test is slightly more complex and proves that the HGA understands the Home Assistant concept of “Areas.”
- Choose an Area: Pick an area in your home that has multiple entities assigned to it, like “Living Room” or “Bedroom.”
- Ask the HGA a list-based question:
- “What entities are in the kitchen?”
- “List all the lights in the living room.”
- “Tell me which devices are currently on in the garage.”
- Evaluate the response:
- A successful response would be a list of the entities that are actually in that area, confirming that the HGA can query your system’s structure.
The “Suggest an Automation” Test
This is a step up in complexity and proves that the HGA can not only see your entities and their states but can also formulate a valid Home Assistant YAML automation based on a scenario you describe.
- Pose a hypothetical scenario:
- “When the front door opens after 10 PM, turn on the living room lamp.”
- “Create an automation that turns on the kitchen light when the motion sensor in the kitchen detects motion, but only if the sun is below the horizon.”
- Evaluate the response:
- The HGA should generate a YAML code block that is syntactically correct and references the exact entity IDs from your system (e.g., binary_sensor.front_door, light.living_room_lamp).
- It’s a good sign if the HGA asks for clarification, for example, “Which living room lamp would you like to use?” or “What is the entity ID of your kitchen motion sensor?” This shows it is actively using its “tool” to query your system.
If these simple tests work, you can be confident that your HGA is correctly integrated and can be used as a powerful, context-aware assistant for your Home Assistant system.
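To make the “Suggest an Automation” test concrete, here is a sketch of the kind of syntactically valid YAML a well-integrated agent should produce for the door-and-lamp scenario. The entity IDs are the illustrative ones from above (binary_sensor.front_door, light.living_room_lamp) and will differ in your system:

```yaml
# Turn on the living room lamp when the front door opens after 10 PM.
automation:
  - alias: "Front door after 10 PM lights lamp"
    triggers:
      - trigger: state
        entity_id: binary_sensor.front_door
        to: "on"
    conditions:
      - condition: time
        after: "22:00:00"
    actions:
      - action: light.turn_on
        target:
          entity_id: light.living_room_lamp
```

A good sign is the agent substituting your actual entity IDs here rather than placeholders.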
Me: How can I create aliases for an entity to more easily converse with Assist, without giving it control of that entity?
Gemini: This is a great question because it highlights a potential gap in the current Assist functionality. By design, the process for creating aliases is tied directly to the “Expose” list. The idea is that an alias is a different name for something you want to control with a voice command.
Here’s the core issue and a few workarounds:
The Challenge: Aliases are Tied to Exposure
Home Assistant’s user interface currently ties the creation of aliases to the exposure of an entity. When you go to Settings > Voice Assistants > Assist > Expose, you select an entity, and in that same menu, you can add an alias for it. The system’s logic assumes that if you’re giving something an alias, it’s for the purpose of a voice command, which means it should be exposed. There is no official way to create a conversational alias for a non-exposed entity.
Why This Is a Problem for Your Use Case
You want to be able to talk about a device (like a sensor) and get information about it without giving Assist the ability to control it. For example, you might want to say, “What’s the temperature on the back porch?” but you don’t want a misheard command to accidentally turn on a light that’s in the same area. This is a very valid concern.
Workarounds and Solutions
Since the official method doesn’t support this, you have to use a workaround. The most common and effective solution is to create a helper or a template sensor that you can then expose and give an alias.
Method 1: The “Dummy” Helper Entity
This is a simple and clean way to do it. You create an input_text or input_boolean helper that serves no other purpose than to act as a conversational alias.
- Create a Helper:
- Go to Settings > Devices & Services > Helpers.
- Click “Create Helper”.
- Select a simple entity like “Text” or “Toggle”.
- Give it a generic name and entity ID, like input_text.back_porch_temp_info.
- Expose the Helper with an Alias:
- Go to Settings > Voice Assistants > Assist > Expose.
- Find the new helper you just created (input_text.back_porch_temp_info) and check the box to expose it to Assist.
- Click on the helper’s name to open its details.
- Under “Aliases,” add the phrases you want to use, such as “back porch temperature” or “porch temp.”
- Create an Automation to Respond:
- Create a new automation.
- Trigger: Choose the Sentence trigger and enter the phrases you want to say, such as “What’s the back porch temperature?”
- Action: Use the “Set conversation response” action (set_conversation_response) with a template that reads the actual temperature from your real sensor. For example:
- set_conversation_response: 'The temperature on the back porch is {{ states("sensor.back_porch_temperature") }} degrees.'
This method gives you complete control. You’re exposing a dummy entity, and you can program an automation to respond to it however you like, without any risk of a misheard command controlling your other devices.
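As an illustrative sketch of that responding automation (the trigger phrases and the sensor entity ID are assumptions for your install), it could look like:

```yaml
# Sentence-triggered automation that answers a question about the
# back porch temperature without exposing any controllable device.
automation:
  - alias: "Answer back porch temperature"
    triggers:
      - trigger: conversation
        command:
          - "What's the back porch temperature"
          - "How warm is it on the back porch"
    actions:
      - set_conversation_response: >-
          The temperature on the back porch is
          {{ states('sensor.back_porch_temperature') }} degrees.
```

Because the response comes from this automation, a misheard command can at worst produce a spoken answer, never a device action.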
Method 2: Using Template Sensors
If you just want to get information from a sensor, you can create a template sensor and expose that with an alias.
- Create a Template Sensor:
- You’ll need to add a block to your configuration.yaml file (or a separate file if you use packages).
- The code would look something like this:
template:
  - sensor:
      - name: "Back Porch Temp Info"
        unique_id: "back_porch_temp_info"
        state: "{{ states('sensor.back_porch_temperature') }}"
        unit_of_measurement: "°F"  # or whatever you use
        icon: mdi:thermometer
- Expose and Alias the Template Sensor:
- After restarting Home Assistant, go to Settings > Voice Assistants > Assist > Expose.
- Find your new template sensor (sensor.back_porch_temp_info) and expose it to Assist.
- Add your aliases like “back porch temperature” or “porch temp.”
This method is cleaner because the template sensor’s state is the information you want to convey, so you don’t need a separate automation. When you ask Assist, “What’s the back porch temperature?”, Assist can directly respond with the state of that template sensor.
In both cases, you’re not directly exposing the entity you’re trying to protect (light.back_porch_light), but you are exposing a different entity that you can use for conversation. This is the current best practice for achieving what you’re trying to do.
Lukas, I’m not far into this, and I’m a 70-year-old with limited knowledge of this, but what I am finding is that the HGA performs like the voice assistant for exposed entities, and digs deeper into the system for state- and attribute-type info. I really came here to see what I could learn, rather than share my limited knowledge. For what it’s worth, I hope this helps!
For context, I am running HAOS on an Intel N100 16 GB mini computer, Ollama serving Qwen3-1.7b to the voice assistant on another Intel N100 16 GB, and Ollama serving the standard HGA-recommended suite of models (Qwen3-8b, Qwen3-1.7b, Qwen3-2.5…, and the mxbai embedding model mentioned in yours or another previous post) on a Mac Mini M4 24 GB.
It’s mostly working, but although I think the Qwen3-4b model might run better on my hardware, it doesn’t seem to be a current option for the chat agent.
Running the voice assistant on a separate server was something that came up in discussion with Gemini, a means to take that load off the HGA and make common tasks faster… Remains to be seen…
Hi, regarding the setup. You need to add Home Generative Agent as an assistant in Settings → Voice Assistants. My configuration is shown in the screenshot below. See the project’s readme for what you can do with it. You can also use the two provided blueprints to create automations that converse with the agent on a periodic basis, for example providing a summary of the home’s state with push notifications.
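If you want to see the shape of such a periodic automation without the blueprints, here is a minimal hand-written sketch. The agent entity ID and the notify service name are assumptions — substitute the ones from your own install:

```yaml
# Ask the agent for a daily summary and push the answer as a notification.
automation:
  - alias: "Daily home summary from HGA"
    triggers:
      - trigger: time
        at: "08:00:00"
    actions:
      - action: conversation.process
        data:
          agent_id: conversation.home_generative_agent  # assumed entity ID
          text: "Summarize the current state of the home."
        response_variable: summary
      - action: notify.mobile_app_my_phone  # replace with your notify service
        data:
          message: "{{ summary.response.speech.plain.speech }}"
```

The blueprints wrap this same pattern with configurable prompts and schedules, so prefer them for real use.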
Hi!
I am trying to get the integration installed and configured, as I love what it is set up to achieve, but I am having several issues.
I’ve tried configuring it with both Ollama and directly with OpenAI as the models. I had an issue with Ollama returning an error around intent understanding, so I switched to OpenAI only, thinking that my Ollama configuration could be part of the issue.
I’m just using the HGA conversation agent created under HGA, not the full voice assistant tool chain. If I enable Assist to try to control devices, I get this error:
Model must support tool calling, model = NullChat
If I disable assist, I get this error:
Unexpected error during intent recognition
I have a very simple prompt, compared to what I had to use previously with Extended OpenAI Conversation, where you need to describe the JSON of how devices expose themselves. I assume this simple prompt is OK based on the default prompt that showed up when I configured the installation.
Do not ask for confirmation unless the user is uncertain.
When answering general questions, use simple, natural language — clear and brief.
Never restate the user’s request or add commentary. Do not thank, greet, or summarize.
Your description earlier about how the agent is deeply integrated into the home assistant tool flow made me think that the more complicated prompt wasn’t required.
On the installation steps, I didn’t download all the blueprints, but I assumed since I was just trying to do a hello world type test, it wasn’t required.
Also, one minor annoyance: every time I configure the integration it fills in the face recognition server with the default server, which I then need to delete. The other servers, like Ollama, don’t show that behavior of reverting to the default config.
Any thoughts on how to proceed? I am running HAOS inside a VM on proxmox.
Managed to get this set up and I’m working on getting a feel for how it works. Wanted to offer my two cents on a couple of things.
I found step 4 under HACS a bit confusing; it took me a while to figure out that you meant taking the two files in the Blueprints folder of your repo and adding them to the blueprints folder in HA.
It would also be nice to have a customizable keep-alive timer for the Ollama models.
I finally got everything connected, but whenever I ask Assist anything I get “Model must support tool calling, model = NullChat”.
I’m using an OpenAI model and tried various options instead of “use recommended settings”, but no luck…
