HA Text AI: Transforming Home Automation through Multi-LLM Integration
Hey everyone,
I’m thrilled to introduce HA Text AI, an advanced AI integration designed to elevate your home automation experience! What started as a personal project has now evolved into a powerful tool that I’m excited to share with all of you.
Core Concept
HA Text AI leverages multiple AI providers, including OpenAI and Anthropic, to generate intelligent event descriptions, create smart notifications, and retrieve structured information. This integration transforms your smart home into a more responsive and intuitive environment.
Multi-Provider Support
In addition to standard providers like OpenAI and Anthropic, HA Text AI supports custom OpenAI-compatible REST API endpoints, allowing for greater flexibility and integration with various AI models.
Compatibility Requirements:
OpenAI-like REST API structure
JSON request/response format
Standard authentication methods
Similar model parameter handling
Advanced Control Mechanisms
This integration offers a range of features to optimize your AI interactions:
Context Depth Management: Customize conversation history (1-20 previous messages).
Token Usage Control: Manage memory and optimize API interactions.
Configurable Request Intervals: Implement rate limiting and monitor token usage.
Granular Response Configuration: Adjust max tokens per response and temperature (creativity) settings.
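As a rough illustration of how these options fit together, a configuration entry might look something like the sketch below. The key names here are hypothetical, chosen to mirror the feature list above, not the integration’s actual schema; check the parameter table in the repository’s documentation for the real options.

```yaml
# Hypothetical sketch only: key names are illustrative, not the integration's
# actual schema; see the repository's parameter table for the real options.
ha_text_ai:
  provider: openai                  # or anthropic, or a custom OpenAI-compatible endpoint
  api_key: !secret openai_api_key
  model: gpt-4o-mini
  context_messages: 10              # conversation history depth (1-20)
  max_tokens: 500                   # cap on tokens per response
  temperature: 0.7                  # "creativity" setting
  request_interval: 2               # seconds between requests (rate limiting)
```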
Use Case Scenarios
Intelligent sensor event descriptions
Context-aware notifications
Transformation of raw data into human-readable text
JSON response generation for further automation
Flexible Integration
HA Text AI is lightweight and extensible, supporting:
Multiple AI providers
Customizable response formats
Easy integration with existing automations
Community Invitation
I’m eager to hear your thoughts and ideas! Let’s collaborate and explore how AI can revolutionize home automation together. Your feedback and creative suggestions are invaluable!
Full Documentation
For more details, check out the GitHub repository for comprehensive documentation.
Looking forward to your insights and potential use cases!
Hi! Great job!
Can you give us some examples of use?
Yes, definitely, as soon as I have a bit of free time, I’ll prepare several Blueprints from my automations. For now, I’ll describe them in text, and maybe you can quickly create something yourselves.
I was tired of trying to configure various automation scenarios and account for everything, so the idea was born from my desire to keep the home climate comfortable. I send the AI a request with a list of humidity and temperature sensors, the open/closed statuses of windows and doors, and a week’s history of people’s presence by room. That history is itself generated weekly by a query to the integration, with the response coming back in JSON (you can describe the desired response format to the LLM in plain text; there may be stray comments around it, but the JSON itself is always valid and easy to extract). The trigger is essentially people’s presence, or the approaching time of their presence. The AI generates a JSON response according to a template, indicating how and for how long to run the air conditioner and humidifiers, and if any windows are open, it sends a push notification that they should be closed. So in rooms where people are present, the air conditioner’s power is reduced, while in rooms where people aren’t yet but soon will be, it works more intensively.
The main point is that the AI understands very well what is comfortable, taking indoor and outdoor humidity into account, which mode is better to enable, and so on. The requests cost money, of course, but it comes out to no more than a couple of dollars a month, and it’s always comfortable.
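The key to making this reliable is spelling out the exact JSON template in the prompt. A hedged sketch of what such a request could look like (the entity ID, sensor readings, and the schema are all illustrative; the `ha_text_ai.ask_question` call mirrors the automation example later in the thread):

```yaml
# Illustrative climate request: sensor values and the JSON template are placeholders.
action: ha_text_ai.ask_question
data:
  instance: sensor.ha_text_ai_openrouter
  question: |
    Indoor: living room 24.5°C / 61% RH (window open), bedroom 23.1°C / 48% RH.
    Outdoor: 31°C / 70% RH. Weekly presence history is attached as JSON.
    Reply ONLY with JSON matching this template, with no extra commentary:
    {"rooms": [{"name": "", "ac_mode": "cool|dry|off", "ac_power_pct": 0,
                "run_minutes": 0, "humidifier": "on|off", "close_window": false}]}
```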
In the children’s room and bedroom I have CO2 sensors, and if the level is exceeded, a request with the growth dynamics goes to the AI, and I receive a push notification that ventilation is needed for 30 minutes or more, indicating, at the current rate, when a harmful concentration will be reached in N minutes or hours.
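One way to feed the “growth dynamics” into the prompt is Home Assistant’s built-in derivative helper, which turns a CO2 sensor into a rate-of-change sensor you can then template into the question. A minimal sketch, assuming a placeholder `sensor.bedroom_co2`:

```yaml
# Rate of CO2 change in ppm per minute, smoothed over the last 10 minutes.
# sensor.bedroom_co2 is a placeholder entity ID.
sensor:
  - platform: derivative
    source: sensor.bedroom_co2
    name: bedroom_co2_growth
    unit_time: min
    time_window: "00:10:00"
```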
Similar to the first automation, during cold times I control thermostats. There’s an additional benefit in that I send AI a weather forecast for the day and current outdoor sensor readings. Since my automation maintains message context, it allows for smooth management.
Using the GitHub - valentinfrlch/ha-llmvision integration (“Let Home Assistant see!”), I build a typical schedule of people’s arrivals and departures, and if something deviates from it, I receive notifications. I use the people descriptions and calendar from LLM Vision to spot anything atypical. Free models on OpenRouter are sufficient for many tasks; from my observations, Sonnet and Haiku work excellently for more complex ones.
In the mornings, based on the calendar and weather forecast, I create a daily timeline for myself and my spouse. I tried using schedule data, but ultimately models calculate average times quite well - when and where to travel, etc.
Based on the weather forecast and other data, I create a card and message for the nanny about how to dress the child. If the UVI is high, AI adds a reminder about SPF and which one to use.
Again, based on people’s presence schedule, the AI forms a task for the robot vacuum, split by cleaning zones (a single charge has long been insufficient to clean everything, so charging and idle time are taken into account).
Based on the weather forecast, current readings, history of turn-ons/turn-offs, and lawn description, automatic watering occurs.
Depending on the time of day, weather, season, and current home environment, it smoothly controls the color temperature of lighting in the work area.
Again, based on LLM Vision and Frigate, when a child cries at night, for example, the volume is reduced if we’re watching a series or movie, the light turns on, and this happens only if there are no adults in the room according to LLM Vision’s description. This can be configured through automations locally, but I find it convenient that AI can send everything in JSON, based on which management occurs, clearly understanding what should and should not be done.
Depending on which room people are in, a TTS command is sent to that room’s speaker when someone arrives or is standing at the door. That way, if I’m far from the door and might not hear the doorbell, or it’s turned off, the speaker announces a description of what’s happening.
To avoid information overload, notifications about adverse weather phenomena from government services are loaded into Home Assistant, and if such a notification exists and is truly important, family members receive it. But if it’s just a warning about fog at night, then no.
Through the nanny’s phone tracking, I monitor how many hours she was at home, when she left and arrived. Obviously, you can count everything and write it down, but AI prepares us a weekly summary so we don’t forget to pay for any overtime.
Also, for the sake of economy, the AI provides a report of where and when lights are most often left on, or other integrated equipment keeps running, while presence sensors show no one is around. This arrives in the messenger once a week, just so we don’t forget about nature and energy saving.
In general, I used to have many different notifications in the “if, then…” format; now there are far fewer of them, and they’re more useful. For example, if it’s evening, the AQI is high with many pollutants, but all windows are closed, there’s no need to write about it. The AI aggregates all the basic indicators together on a timer, and if something is really wrong, it writes about it. Again, I request a response in JSON with parameters for importance and notification necessity. I’m an allergy sufferer: I deleted all my pollen level apps, canceled the subscriptions, and now simply pull open data and form allergen notifications when something is going to cause me discomfort.
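The importance/necessity gating described above can be sketched as a template condition that parses the JSON out of the response attribute with Home Assistant’s `from_json` filter. The field names (`importance`, `notify`) and the threshold are assumptions for illustration:

```yaml
# Hypothetical gate: only pass when the AI flagged the event as worth notifying.
# Assumes the response attribute holds JSON like {"importance": 8, "notify": true}.
condition: template
value_template: >-
  {% set r = state_attr('sensor.ha_text_ai_openrouter', 'response') | from_json %}
  {{ r.notify and r.importance | int > 5 }}
```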
Based on Frigate and LLM Vision, I’ve set up filtering of false fire alarm notifications. For instance, Frigate’s audio detection often confuses the vacuum cleaner or sounds in music with an alarm. When it triggers, LLM Vision takes snapshots from all camera streams and forms a description of what’s happening; I then pass this to the AI, and if there’s no panic, people are home, and so on, I avoid receiving critical notifications in vain.
I’ve highlighted only the main points, hoping this will inspire ideas that might be useful to others. (This post was originally written in Russian.)
P.S.: There’s also a button in the child’s room that, when pressed, makes the speaker tell a kind and funny poem about him or an interesting fact about something, understandable for his age and interesting. The nanny isn’t always happy, and neither are we, when he decides to play with the button for 20 minutes straight :))
Looks cool, will see if I can get it working with Ollama
Yep, it’s in the “Potentially Compatible” list. Theoretically should work if you configure Ollama to expose an OpenAI-compatible API endpoint. No guarantees, but worth experimenting.
Pro tips for smooth Ollama integration:
Enable Ollama’s OpenAI-compatible API mode
Ensure the JSON request/response format matches OpenAI’s
Run Ollama with `OLLAMA_HOST=0.0.0.0:11434 ollama serve` so it is reachable from Home Assistant
Configure the custom endpoint in the HA integration: `http://your_ollama_ip:11434/v1`
Potential challenges:
Ensure exact JSON structure matches OpenAI
Some advanced parameters might not translate perfectly
Just realized that I need to be on 2024.11, so will work on upgrading to that first
It should work on earlier versions too. I’ve primarily tested on recent versions, so can’t give 100% guarantee, but compatibility looks solid from around Home Assistant 2023.10 onwards.
How do the sensors work? I don’t see where you set them in the yaml or call them via automation (at least via the GitHub). The only thing I see about sensors is the naming convention, but nothing else (unless I missed it somewhere)
The sensor attributes can be found directly in the sensor’s state; check the available attributes there (for example, in Developer Tools → States).
Here’s a basic automation example to demonstrate sensor usage:
AI_Weather_Recommendation.yaml
```yaml
alias: AI Weather Recommendation
description: Get AI-powered clothing recommendation based on weather conditions
mode: single
max_exceeded: silent
triggers:
  - trigger: time
    at: "07:30:00"
actions:
  - action: ha_text_ai.ask_question
    data:
      instance: sensor.ha_text_ai_openrouter
      context_messages: 5
      question: |
        Based on current weather conditions, provide a detailed clothing recommendation:
        Temperature: {{ state_attr('weather.local', 'temperature') }}°C
        Feels Like: {{ state_attr('weather.local', 'apparent_temperature') }}°C
        Cloud Coverage: {{ state_attr('weather.local', 'cloud_coverage') }}%
        Humidity: {{ state_attr('weather.local', 'humidity') }}%
        Wind: {{ state_attr('weather.local', 'wind_speed') }} km/h,
        Direction {{ state_attr('weather.local', 'wind_bearing') }}°
        Pressure: {{ state_attr('weather.local', 'pressure') }} mmHg
        Please suggest:
        1. Optimal clothing layers
        2. Recommended accessories
        3. Tips for staying comfortable outdoors
  - wait_for_trigger:
      - trigger: state
        entity_id: sensor.ha_text_ai_openrouter
        attribute: response
  - choose:
      - conditions:
          - condition: template
            value_template: >-
              {{ state_attr('sensor.ha_text_ai_openrouter', 'response') is defined }}
        sequence:
          - action: notify.mobile_app_someone
            data:
              title: 🧥 Today's Clothing Recommendation
              message: "{{ state_attr('sensor.ha_text_ai_openrouter', 'response') }}"
      - conditions:
          - condition: template
            value_template: "{{ true }}"
        sequence:
          - action: notify.mobile_app_someone
            data:
              title: ❌ AI Recommendation Error
              message: Unable to generate weather-based recommendation
```
UPD: I realized that the configuration block in YAML was not very clearly described on GitHub, so I added a detailed description and a parameter table to the documentation.
Yes, of course, as soon as I’m confident that the current version is stable and reliable. At the moment, Gemini can be used via OpenRouter, for example — this combination works great.